The Use and Abuse of Limits of Detection in Environmental Analytical Chemistry
Brown, Richard J. C.
2008-01-01
The limit of detection (LoD) is an important method performance measure, useful for comparing measurement techniques and assessing likely signal-to-noise performance, especially in environmental analytical chemistry. However, the LoD is truly related only to the precision characteristics of the analytical instrument employed for the analysis and the analyte content of the blank sample. This article discusses how other criteria, such as sampling volume, can artificially distort the quoted LoD and make comparisons between analytical methods inequitable. To compare LoDs between methods properly, all of the input parameters used in the calculation of the LoD must be clearly stated. The article further argues that using LoDs in contexts other than the comparison of analytical methods, in particular when reporting analytical results, may be confusing, less informative than quoting the actual result with an accompanying statement of uncertainty, and may bias descriptive statistics. PMID:18690384
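The blank-based relationship between LoD and precision described in this abstract is conventionally expressed as the mean blank signal plus three blank standard deviations. A minimal sketch (the blank values and the k = 3 convention are illustrative assumptions, not taken from the article):

```python
import statistics

def limit_of_detection(blank_signals, k=3.0):
    """Blank-based LoD: mean blank signal plus k times the blank
    standard deviation (k = 3 is a common convention)."""
    mean_blank = statistics.mean(blank_signals)
    sd_blank = statistics.stdev(blank_signals)  # sample standard deviation
    return mean_blank + k * sd_blank

# Hypothetical replicate blank measurements (arbitrary signal units)
blanks = [0.12, 0.15, 0.11, 0.14, 0.13, 0.12, 0.16, 0.13, 0.14, 0.12]
lod = limit_of_detection(blanks)
```

As the article notes, dividing such a signal-domain LoD by an assumed sampling volume is one way a quoted LoD can be made to look artificially low, which is why the input parameters must be stated.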
COMPARISON OF ANALYTICAL METHODS FOR THE MEASUREMENT OF NON-VIABLE BIOLOGICAL PM
The paper describes a preliminary research effort to develop a methodology for the measurement of non-viable biologically based particulate matter (PM), analyzing for mold, dust mite, and ragweed antigens and endotoxins. Using a comparison of analytical methods, the research obj...
ERIC Educational Resources Information Center
Barrows, Russell D.
2007-01-01
A one-way ANOVA experiment is performed to determine whether or not the three standardization methods are statistically different in determining the concentration of the three paraffin analytes. The laboratory exercise asks students to combine the three methods in a single analytical procedure of their own design to determine the concentration of…
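The one-way ANOVA in this exercise reduces to comparing between-method and within-method variance. A self-contained sketch with hypothetical concentration data (the values and method names are invented for illustration):

```python
import statistics

def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    all_values = [x for g in groups for x in g]
    grand_mean = statistics.mean(all_values)
    k = len(groups)          # number of groups (methods)
    n = len(all_values)      # total number of observations
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - statistics.mean(g)) ** 2
                    for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical paraffin concentrations (mg/L) from three standardization methods
external = [10.2, 10.4, 10.1, 10.3]
internal = [10.6, 10.8, 10.5, 10.7]
std_add  = [10.3, 10.5, 10.2, 10.4]
F = one_way_anova_F(external, internal, std_add)
```

The computed F is then compared against the critical value for (k − 1, n − k) degrees of freedom to decide whether the methods differ statistically.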
An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1991-01-01
The three-dimensional quasi-analytical sensitivity analysis and the ancillary driver programs needed to carry out the studies and perform comparisons are developed. The code is essentially contained in one unified package which includes the following: (1) a three-dimensional transonic wing analysis program (ZEBRA); (2) a quasi-analytical portion which determines the matrix elements in the quasi-analytical equations; (3) a method for computing the sensitivity coefficients from the resulting quasi-analytical equations; (4) a package to determine, for comparison purposes, sensitivity coefficients via the finite-difference approach; and (5) a graphics package.
Matschat, Ralf; Hassler, Jürgen; Traub, Heike; Dette, Angelika
2005-12-01
The members of the committee NMP 264 "Chemical analysis of non-oxidic raw and basic materials" of the German Standards Institute (DIN) have organized two interlaboratory comparisons for multielement determination of trace elements in silicon carbide (SiC) powders via direct solid sampling methods. One of the interlaboratory comparisons was based on the application of inductively coupled plasma optical emission spectrometry with electrothermal vaporization (ETV ICP OES), and the other on the application of optical emission spectrometry with direct current arc (DC arc OES). The interlaboratory comparisons were organized and performed in the framework of the development of two standards related to "the determination of mass fractions of metallic impurities in powders and grain sizes of ceramic raw and basic materials" by both methods. SiC powders were used as typical examples of this category of material. The aim of the interlaboratory comparisons was to determine the repeatability and reproducibility of both analytical methods to be standardized. This was an important contribution to the practical applicability of both draft standards. Eight laboratories participated in the interlaboratory comparison with ETV ICP OES and nine in the interlaboratory comparison with DC arc OES. Ten analytes were investigated by ETV ICP OES and eleven by DC arc OES. Six different SiC powders were used for the calibration. The mass fractions of their relevant trace elements were determined after wet chemical digestion. All participants followed the analytical requirements described in the draft standards. In the calculation process, three of the calibration materials were used successively as analytical samples. This was managed in the following manner: the material that had just been used as the analytical sample was excluded from the calibration, so the five other materials were used to establish the calibration plot. 
The results from the interlaboratory comparisons were summarized and used to determine the repeatability and the reproducibility (expressed as standard deviations) of both methods. The calculation was carried out according to the related standard. The results are specified and discussed in this paper, as are the optimized analytical conditions determined and used by the authors. For both methods, the repeatability relative standard deviations were <25%, usually ~10%, and the reproducibility relative standard deviations were <35%, usually ~15%. These results were regarded as satisfactory for both methods, which are intended for rapid analysis of materials whose decomposition is difficult and time-consuming. Also described are some results from an interlaboratory comparison used to certify one of the materials that had previously been used for validation in both interlaboratory comparisons. Thirty laboratories (from eight countries) participated in this interlaboratory comparison for certification. As examples, accepted results are shown from laboratories that used ETV ICP OES or DC arc OES and had performed calibrations by using solutions or oxides, respectively. The certified mass fractions of the certified reference materials were also compared with the mass fractions determined in the interlaboratory comparisons performed within the framework of method standardization. Good agreement was found for most of the analytes.
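The rotating calibration scheme described above, in which each material is excluded in turn and predicted from the five remaining ones, is essentially leave-one-out validation. A sketch with hypothetical calibration data (the mass fractions and signals are invented, not taken from the study):

```python
def leave_one_out(materials):
    """For each material, fit an ordinary least-squares calibration
    line (signal = a + b * mass_fraction) to the other materials and
    predict the held-out one from its signal."""
    results = {}
    for name, (x_ref, y_signal) in materials.items():
        others = [v for k, v in materials.items() if k != name]
        n = len(others)
        sx = sum(x for x, _ in others); sy = sum(y for _, y in others)
        sxx = sum(x * x for x, _ in others)
        sxy = sum(x * y for x, y in others)
        b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
        a = (sy - b * sx) / n                          # intercept
        results[name] = (y_signal - a) / b  # predicted mass fraction
    return results

# Hypothetical calibration materials: (reference mass fraction, signal)
mats = {"A": (1.0, 2.1), "B": (2.0, 4.0), "C": (3.0, 6.1),
        "D": (4.0, 8.0), "E": (5.0, 10.1), "F": (6.0, 12.0)}
predicted = leave_one_out(mats)
```

Comparing each predicted value with its reference mass fraction gives a direct check of the calibration's internal consistency.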
NASA Astrophysics Data System (ADS)
Mamat, Siti Salwana; Ahmad, Tahir; Awang, Siti Rahmah
2017-08-01
Analytic Hierarchy Process (AHP) is a method for structuring, measuring and synthesizing criteria, in particular for ranking multiple criteria in decision-making problems. The Potential Method, by contrast, is a ranking procedure that utilizes a preference graph ς(V, A): two nodes are adjacent if they are compared in a pairwise comparison, and the assigned arc is oriented towards the more preferred node. In this paper the Potential Method is used to solve a catering service selection problem, and its results are compared with those of Extent Analysis. The Potential Method is found to produce the same ranking as Extent Analysis in AHP.
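The full Potential Method solves for node potentials on the preference graph; as a simplified stand-in, the sketch below ranks alternatives by net preference flow (comparisons won minus comparisons lost), which yields the same ordering for a consistent set of pairwise comparisons. The caterer names and preferences are hypothetical:

```python
def net_flow_ranking(alternatives, preferences):
    """Rank alternatives by net preference flow: each pairwise
    comparison (winner, loser) adds +1 to the winner's score and
    -1 to the loser's. This is a simplified stand-in for graph-based
    ranking, not the full Potential Method."""
    score = {a: 0 for a in alternatives}
    for winner, loser in preferences:
        score[winner] += 1
        score[loser] -= 1
    return sorted(alternatives, key=lambda a: score[a], reverse=True)

# Hypothetical caterers; each arc points from loser to the preferred node
caterers = ["C1", "C2", "C3"]
prefs = [("C1", "C2"), ("C1", "C3"), ("C2", "C3")]  # (preferred, other)
ranking = net_flow_ranking(caterers, prefs)
```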
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hohimer, J.P.
The use of laser-based analytical methods in nuclear-fuel processing plants is considered. The species and locations for accountability, process control, and effluent control measurements in the Coprocessing, Thorex, and reference Purex fuel processing operations are identified and the conventional analytical methods used for these measurements are summarized. The laser analytical methods based upon Raman, absorption, fluorescence, and nonlinear spectroscopy are reviewed and evaluated for their use in fuel processing plants. After a comparison of the capabilities of the laser-based and conventional analytical methods, the promising areas of application of the laser-based methods in fuel processing plants are identified.
Werner, S.L.; Johnson, S.M.
1994-01-01
As part of its primary responsibility concerning water as a national resource, the U.S. Geological Survey collects and analyzes samples of ground water and surface water to determine water quality. This report describes the method used since June 1987 to determine selected total-recoverable carbamate pesticides present in water samples. High-performance liquid chromatography is used to separate N-methyl carbamates, N-methyl carbamoyloximes, and an N-phenyl carbamate which have been extracted from water and concentrated in dichloromethane. Analytes, surrogate compounds, and reference compounds are eluted from the analytical column within 25 minutes. Two modes of analyte detection are used: (1) a photodiode-array detector measures and records ultraviolet-absorbance profiles, and (2) a fluorescence detector measures and records fluorescence from an analyte derivative produced when analyte hydrolysis is combined with chemical derivatization. Analytes are identified and confirmed in a three-stage process by use of chromatographic retention time, ultraviolet (UV) spectral comparison, and derivatization/fluorescence detection. Quantitative results are based on the integration of single-wavelength UV-absorbance chromatograms and on comparison with calibration curves derived from external analyte standards that are run with samples as part of an instrumental analytical sequence. Estimated method detection limits vary for each analyte, depending on the sample matrix conditions, and range from 0.5 microgram per liter to as low as 0.01 microgram per liter. Reporting levels for all analytes have been set at 0.5 microgram per liter for this method. Corrections on the basis of percentage recoveries of analytes spiked into distilled water are not applied to values calculated for analyte concentration in samples. These values for analyte concentrations instead indicate the quantities recovered by the method from a particular sample matrix.
ANALYTICAL METHOD COMPARISONS BY ESTIMATES OF PRECISION AND LOWER DETECTION LIMIT
The paper describes the use of principal component analysis to estimate the operating precision of several different analytical instruments or methods simultaneously measuring a common sample of a material whose actual value is unknown. This approach is advantageous when none of ...
Evaluation of analytical performance based on partial order methodology.
Carlsen, Lars; Bruggemann, Rainer; Kenessova, Olga; Erzhigitov, Erkin
2015-01-01
Classical measures of performance are typically based on linear scales. However, in analytical chemistry a simple scale may not be sufficient to analyze analytical performance appropriately. Here partial order methodology can be helpful. In the context described here, partial order analysis can be seen as an ordinal analysis of data matrices, used especially to simplify relative comparisons of objects on the basis of their data profile (the ordered set of values an object has). Hence, partial order methodology offers a unique possibility to evaluate analytical performance. In the present work, data as provided by laboratories through interlaboratory comparisons or proficiency testing are used as an illustrative example. The presented scheme is likewise applicable to the comparison of analytical methods, or simply as a tool for optimization of an analytical method. The methodology can be applied without presumptions or pretreatment of the analytical data in order to evaluate analytical performance taking all indicators into account simultaneously, thus elucidating a "distance" from the true value. In the present illustrative example it is assumed that the laboratories analyze a given sample several times and subsequently report the mean value, the standard deviation and the skewness, which are used simultaneously for the evaluation of the analytical performance. The analyses lead to information concerning (1) a partial ordering of the laboratories, subsequently, (2) a "distance" to the reference laboratory and (3) a classification based on the concept of "peculiar points". Copyright © 2014 Elsevier B.V. All rights reserved.
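The core of partial order methodology is the dominance relation between data profiles: one laboratory precedes another only if it is at least as good on every indicator, and profiles that disagree across indicators remain incomparable. A minimal sketch (the lab names and indicator values are hypothetical; smaller is taken to mean closer to the reference value):

```python
def dominates(p, q):
    """p dominates q if p is at least as good on every indicator and
    strictly better on at least one (smaller value = better here)."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def order_relation(profiles):
    """All (better, worse) pairs in the partial order of profiles;
    incomparable pairs simply do not appear."""
    return [(i, j) for i in profiles for j in profiles
            if i != j and dominates(profiles[i], profiles[j])]

# Hypothetical lab profiles: (|bias|, standard deviation, |skewness|)
labs = {"Lab1": (0.1, 0.2, 0.05),
        "Lab2": (0.3, 0.5, 0.20),
        "Lab3": (0.2, 0.1, 0.30)}
order = order_relation(labs)
```

Here Lab1 dominates Lab2 on all three indicators, while Lab1 and Lab3 are incomparable (Lab3 has the smaller standard deviation but the larger skewness), which is exactly the information a linear score would hide.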
Meta-Analytic Structural Equation Modeling (MASEM): Comparison of the Multivariate Methods
ERIC Educational Resources Information Center
Zhang, Ying
2011-01-01
Meta-analytic Structural Equation Modeling (MASEM) has drawn interest from many researchers recently. In doing MASEM, researchers usually first synthesize correlation matrices across studies using meta-analysis techniques and then analyze the pooled correlation matrix using structural equation modeling techniques. Several multivariate methods of…
NASA Astrophysics Data System (ADS)
Medvedev, Nickolay S.; Shaverina, Anastasiya V.; Tsygankova, Alphiya R.; Saprykin, Anatoly I.
2018-04-01
The paper presents a comparison of the analytical performance of inductively coupled plasma mass spectrometry (ICP-MS) and inductively coupled plasma atomic emission spectrometry (ICP-AES) for trace analysis of high-purity bismuth and bismuth oxide. Matrix effects in the ICP-MS and ICP-AES methods were studied as a function of Bi concentration, ICP power and nebulizer flow rate. For ICP-MS, a strong dependence of the matrix effects on the atomic mass of the analytes was observed. For ICP-AES, matrix effects were minimal for spectral lines of analytes with low excitation potentials. The degree of sample dilution providing the minimum limits of detection (LODs) was chosen. Both methods allow LODs of n·10^-7 to n·10^-4 wt% to be reached for more than 50 trace elements; for most elements, the LODs of ICP-MS were lower than those of ICP-AES. The accuracy of the developed techniques was validated by "added-found" experiments and by comparison of the results of ICP-MS and ICP-AES analysis of high-purity bismuth oxide.
Measuring solids concentration in stormwater runoff: comparison of analytical methods.
Clark, Shirley E; Siu, Christina Y S
2008-01-15
Stormwater suspended solids typically are quantified using one of two methods: aliquot/subsample analysis (total suspended solids [TSS]) or whole-sample analysis (suspended solids concentration [SSC]). Interproject comparisons are difficult because of inconsistencies in the methods and in their application. To address this concern, the suspended solids content has been measured using both methodologies in many current projects, but the question remains about how to compare these values with historical water-quality data where the analytical methodology is unknown. This research was undertaken to determine the effect of analytical methodology on the relationship between these two methods of determination of the suspended solids concentration, including the effect of aliquot selection/collection method and of particle size distribution (PSD). The results showed that SSC was best able to represent the known sample concentration and that the results were independent of the sample's PSD. Correlations between the results and the known sample concentration could be established for TSS samples, but they were highly dependent on the sample's PSD and on the aliquot collection technique. These results emphasize the need to report not only the analytical method but also the particle size information on the solids in stormwater runoff.
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Vartio, Eric; Shimko, Anthony; Kvaternik, Raymond G.; Eure, Kenneth W.; Scott, Robert C.
2007-01-01
Aeroservoelastic (ASE) analytical models of a SensorCraft wind-tunnel model are generated using measured data. The data were acquired during the ASE wind-tunnel test of the HiLDA (High Lift-to-Drag Active) Wing model, tested in the NASA Langley Transonic Dynamics Tunnel (TDT) in late 2004. Two time-domain system identification techniques are applied to the development of the ASE analytical models: the impulse response (IR) method and the Generalized Predictive Control (GPC) method. Using measured control surface inputs (frequency sweeps) and associated sensor responses, the IR method is used to extract corresponding input/output impulse response pairs. These impulse responses are then transformed into state-space models for use in ASE analyses. Similarly, the GPC method transforms measured random control surface inputs and associated sensor responses into an AutoRegressive with eXogenous input (ARX) model. The ARX model is then used to develop the gust load alleviation (GLA) control law. For the IR method, comparisons of measured and simulated responses are presented to investigate the accuracy of the ASE analytical models developed. For the GPC method, comparisons of simulated open-loop and closed-loop (GLA) time histories are presented.
Wroble, Julie; Frederick, Timothy; Frame, Alicia; Vallero, Daniel
2017-01-01
Established soil sampling methods for asbestos are inadequate to support risk assessment and risk-based decision making at Superfund sites due to difficulties in detecting asbestos at low concentrations and difficulty in extrapolating soil concentrations to air concentrations. Environmental Protection Agency (EPA)'s Office of Land and Emergency Management (OLEM) currently recommends the rigorous process of Activity Based Sampling (ABS) to characterize site exposures. The purpose of this study was to compare three soil analytical methods and two soil sampling methods to determine whether one method, or combination of methods, would yield more reliable soil asbestos data than other methods. Samples were collected using both traditional discrete ("grab") samples and incremental sampling methodology (ISM). Analyses were conducted using polarized light microscopy (PLM), transmission electron microscopy (TEM) methods or a combination of these two methods. Data show that the fluidized bed asbestos segregator (FBAS) followed by TEM analysis could detect asbestos at locations that were not detected using other analytical methods; however, this method exhibited high relative standard deviations, indicating the results may be more variable than other soil asbestos methods. The comparison of samples collected using ISM versus discrete techniques for asbestos resulted in no clear conclusions regarding preferred sampling method. However, analytical results for metals clearly showed that measured concentrations in ISM samples were less variable than discrete samples.
PMID:28759607
An interactive website for analytical method comparison and bias estimation.
Bahar, Burak; Tuncel, Ayse F; Holmes, Earle W; Holmes, Daniel T
2017-12-01
Regulatory standards mandate laboratories to perform studies to ensure accuracy and reliability of their test results. Method comparison and bias estimation are important components of these studies. We developed an interactive website for evaluating the relative performance of two analytical methods using R programming language tools. The website can be accessed at https://bahar.shinyapps.io/method_compare/. The site has an easy-to-use interface that allows both copy-pasting and manual entry of data. It also allows selection of a regression model and creation of regression and difference plots. Available regression models include Ordinary Least Squares, Weighted-Ordinary Least Squares, Deming, Weighted-Deming, Passing-Bablok and Passing-Bablok for large datasets. The server processes the data and generates downloadable reports in PDF or HTML format. Our website provides clinical laboratories a practical way to assess the relative performance of two analytical methods. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
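Of the regression models listed, Deming regression has a closed-form solution once the ratio of the two methods' error variances is fixed. A sketch under the equal-variance assumption (delta = 1), with invented paired results; this illustrates the model, not the website's implementation:

```python
import statistics

def deming(x, y, delta=1.0):
    """Deming regression slope and intercept for method comparison.
    delta is the assumed ratio of the y-method to x-method error
    variances (delta = 1 assumes equal imprecision)."""
    n = len(x)
    mx, my = statistics.mean(x), statistics.mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    slope = (syy - delta * sxx
             + ((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2) ** 0.5
             ) / (2 * sxy)
    return slope, my - slope * mx

# Hypothetical paired results from two analytical methods (same samples)
method_x = [1.0, 2.0, 3.0, 4.0, 5.0]
method_y = [1.1, 2.0, 3.2, 3.9, 5.1]
slope, intercept = deming(method_x, method_y)
```

A slope near 1 and intercept near 0 indicate no proportional or constant bias between the two methods.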
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owens, W.W.; Sullivan, H.H.
Electroless nickel-plate characteristics are substantially influenced by the percent phosphorus concentration. Available ASTM analytical methods are designed for phosphorus concentrations of less than one percent, compared to the 4.0 to 20.0% concentrations common in electroless nickel plate. A variety of analytical adaptations are applied throughout the industry, resulting in poor data continuity. This paper presents a statistical comparison of five analytical methods and recommends accurate and precise procedures for percent phosphorus determinations in electroless nickel plate. 2 figures, 1 table.
USDA-ARS?s Scientific Manuscript database
A comparison study of analytical methods including HPLC, UPLC and HPTLC are presented in this paper for the determination of major alkaloid and triterpene saponins from the roots of Caulophyllum thalictroides (L.) Michx. (blue cohosh) and dietary supplements claiming to contain blue cohosh. The meth...
Evaluation of selected methods for determining streamflow during periods of ice effect
Melcher, N.B.; Walker, J.F.
1990-01-01
The methods are classified into two general categories, subjective and analytical, depending on whether individual judgement is necessary for method application. On the basis of results of the evaluation for the three Iowa stations, two of the subjective methods (discharge ratio and hydrographic-and-climatic comparison) were more accurate than the other subjective methods, and approximately as accurate as the best analytical method. Three of the analytical methods (index velocity, adjusted rating curve, and uniform flow) could potentially be used for streamflow-gaging stations where the need for accurate ice-affected discharge estimates justifies the expense of collecting additional field data. One analytical method (ice adjustment factor) may be appropriate for use for stations with extremely stable stage-discharge ratings and measuring sections. Further research is needed to refine the analytical methods. The discharge ratio and multiple regression methods produce estimates of streamflow for varying ice conditions using information obtained from the existing U.S. Geological Survey streamflow-gaging network.
Analytical approximate solutions for a general class of nonlinear delay differential equations.
Căruntu, Bogdan; Bota, Constantin
2014-01-01
We use the polynomial least squares method (PLSM), which allows us to compute analytical approximate polynomial solutions for a very general class of strongly nonlinear delay differential equations. The method is tested by computing approximate solutions for several applications including the pantograph equations and a nonlinear time-delay model from biology. The accuracy of the method is illustrated by a comparison with approximate solutions previously computed using other methods.
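The polynomial least-squares idea can be illustrated on the pantograph equation y'(t) = a·y(t) + b·y(qt) with y(0) = 1: for a polynomial trial solution the residual is linear in the unknown coefficients, so minimizing its sum of squares over a grid reduces to a small linear system. The sketch below uses a quadratic trial solution and is a simplified reconstruction of the general approach, not the authors' implementation; the parameter values are chosen so that the exact solution is the line 1 + t:

```python
def plsm_pantograph(a, b, q, ts):
    """Quadratic trial solution y(t) = 1 + c1*t + c2*t^2 for the
    pantograph equation y'(t) = a*y(t) + b*y(q*t), y(0) = 1.
    The residual r(t) = c1*u(t) + c2*v(t) - w is linear in (c1, c2),
    so least squares over the grid ts is a 2x2 normal-equation solve."""
    u = [1 - (a + b * q) * t for t in ts]
    v = [2 * t - (a + b * q * q) * t * t for t in ts]
    w = a + b
    suu = sum(x * x for x in u); svv = sum(x * x for x in v)
    suv = sum(x * y for x, y in zip(u, v))
    su = sum(u); sv = sum(v)
    det = suu * svv - suv * suv
    c1 = (w * su * svv - w * sv * suv) / det  # Cramer's rule
    c2 = (suu * w * sv - suv * w * su) / det
    return lambda t: 1 + c1 * t + c2 * t * t

# a = -1, b = 2, q = 0.5 gives a + b*q = 0, so y(t) = 1 + t is exact
approx = plsm_pantograph(-1.0, 2.0, 0.5, [i / 10 for i in range(1, 11)])
```

Because the exact solution lies in the trial space for these parameters, the least-squares fit recovers it to machine precision; for general parameters the quadratic only approximates the true solution.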
Rodrigues, André L; Göcke, Yvonne; Bolten, Christoph; Brock, Nelson L; Dickschat, Jeroen S; Wittmann, Christoph
2012-04-01
Violacein and deoxyviolacein display a broad range of interesting biological properties, but their production is rarely distinguished owing to the lack of suitable analytical methods. An HPLC method has been developed for the separation and quantification of violacein and deoxyviolacein that can determine the content of both molecules in microbial cultures. A comparison of different production microorganisms, including recombinant Escherichia coli and the natural producer Janthinobacterium lividum, revealed that the formation of violacein and deoxyviolacein is strain-specific and varies significantly during growth, although the ratio between the two compounds remains constant.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Jordana R.; Gill, Gary A.; Kuo, Li-Jung
2016-04-20
Trace element determinations in seawater by inductively coupled plasma mass spectrometry are analytically challenging because of the typically very low trace element concentrations and the potential interference of the salt matrix. In this study, we compared seven analytical approaches for uranium analysis by inductively coupled plasma mass spectrometry (ICP-MS) of Sequim Bay seawater samples and three seawater certified reference materials (SLEW-3, CASS-5 and NASS-6). The methods evaluated include: direct analysis, Fe/Pd reductive precipitation, standard addition calibration, online automated dilution using an external calibration with and without matrix matching, and online automated pre-concentration. The method which produced the most accurate results was standard addition calibration, recovering uranium from a Sequim Bay seawater sample at 101 ± 1.2%. The on-line preconcentration method and the automated dilution with matrix-matched calibration method also performed well. The two least effective methods were direct analysis and Fe/Pd reductive precipitation using sodium borohydride.
Marcelo Ardon; Catherine M. Pringle; Susan L. Eggert
2009-01-01
Comparisons of the effects of leaf litter chemistry on leaf breakdown rates in tropical vs temperate streams are hindered by incompatibility among studies and across sites of analytical methods used to...
Thermodynamics of Gas Turbine Cycles with Analytic Derivatives in OpenMDAO
NASA Technical Reports Server (NTRS)
Gray, Justin; Chin, Jeffrey; Hearn, Tristan; Hendricks, Eric; Lavelle, Thomas; Martins, Joaquim R. R. A.
2016-01-01
A new equilibrium thermodynamics analysis tool was built based on the CEA method using the OpenMDAO framework. The new tool provides forward and adjoint analytic derivatives for use with gradient-based optimization algorithms. The new tool was validated against the original CEA code to ensure an accurate analysis, and the analytic derivatives were validated against finite-difference approximations. Performance comparisons between analytic and finite-difference methods showed a significant speed advantage for the analytic methods. To further test the new analysis tool, a sample optimization was performed to find the optimal air-fuel equivalence ratio maximizing combustion temperature for a range of different pressures. Collectively, the results demonstrate the viability of the new tool to serve as the thermodynamic backbone for future work on a full propulsion modeling tool.
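Validating analytic derivatives against finite-difference approximations, as done for this tool, can be sketched on a toy function (the function, points, and tolerance here are illustrative and unrelated to the CEA code):

```python
import math

def fd_derivative(f, x, h=1e-6):
    """Central finite-difference approximation of df/dx,
    accurate to O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def validate(f, dfdx, points, tol=1e-5):
    """Check a hand-derived analytic derivative against finite
    differences at a set of sample points."""
    return all(abs(dfdx(x) - fd_derivative(f, x)) < tol for x in points)

# Toy model: f(x) = x * exp(-x) with its analytic derivative
f = lambda x: x * math.exp(-x)
dfdx = lambda x: (1 - x) * math.exp(-x)
ok = validate(f, dfdx, [0.0, 0.5, 1.0, 2.0])
```

The same check with a deliberately wrong derivative would fail, which is what makes it a useful regression test when analytic derivatives are hand-coded.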
Comparison of modal superposition methods for the analytical solution to moving load problems.
DOT National Transportation Integrated Search
1994-01-01
The response of bridge structures to moving loads is investigated using modal superposition methods. Two distinct modal superposition methods are available: the mode-displacement method and the mode-acceleration method. While the mode-displacement met...
Selecting Evaluation Comparison Groups: A Cluster Analytic Approach.
ERIC Educational Resources Information Center
Davis, Todd Mclin; McLean, James E.
A persistent problem in the evaluation of field-based projects is the lack of no-treatment comparison groups. Frequently, potential comparison groups are confounded by socioeconomic, racial, or other factors. Among the possible methods for dealing with this problem are various matching procedures, but they are cumbersome to use with multiple…
Vandekerckhove, Kristof; Seidl, Andreas; Gutka, Hiten; Kumar, Manish; Gratzl, Gyöngyi; Keire, David; Coffey, Todd; Kuehne, Henriette
2018-05-10
Leading regulatory agencies recommend biosimilar assessment to proceed in a stepwise fashion, starting with a detailed analytical comparison of the structural and functional properties of the proposed biosimilar and reference product. The degree of analytical similarity determines the degree of residual uncertainty that must be addressed through downstream in vivo studies. Substantive evidence of similarity from comprehensive analytical testing may justify a targeted clinical development plan, and thus enable a shorter path to licensing. The importance of a careful design of the analytical similarity study program therefore should not be underestimated. Designing a state-of-the-art analytical similarity study meeting current regulatory requirements in regions such as the USA and EU requires a methodical approach, consisting of specific steps that far precede the work on the actual analytical study protocol. This white paper discusses scientific and methodological considerations on the process of attribute and test method selection, criticality assessment, and subsequent assignment of analytical measures to US FDA's three tiers of analytical similarity assessment. Case examples of selection of critical quality attributes and analytical methods for similarity exercises are provided to illustrate the practical implementation of the principles discussed.
Development of a Risk-Based Comparison Methodology of Carbon Capture Technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engel, David W.; Dalton, Angela C.; Dale, Crystal
2014-06-01
Given the varying degrees of maturity among existing carbon capture (CC) technology alternatives, an understanding of the inherent technical and financial risk and uncertainty associated with these competing technologies is requisite to the success of carbon capture as a viable solution to the greenhouse gas emission challenge. The availability of tools and capabilities to conduct rigorous, risk-based technology comparisons is thus highly desirable for directing valuable resources toward the technology option(s) with a high return on investment, superior carbon capture performance, and minimum risk. To address this research need, we introduce a novel risk-based technology comparison method supported by an integrated multi-domain risk model set to estimate risks related to technological maturity, technical performance, and profitability. Through a comparison between solid sorbent and liquid solvent systems, we illustrate the feasibility of estimating risk and quantifying uncertainty in a single domain (modular analytical capability) as well as across multiple risk dimensions (coupled analytical capability) for comparison. This method brings technological maturity and performance to bear on profitability projections, and carries risk and uncertainty modeling across domains via inter-model sharing of parameters, distributions, and input/output. The integration of the models facilitates multidimensional technology comparisons within a common probabilistic risk analysis framework. This approach and model set can equip potential technology adopters with the necessary computational capabilities to make risk-informed decisions about CC technology investment. The method and modeling effort can also be extended to other industries where robust tools and analytical capabilities are currently lacking for evaluating nascent technologies.
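One common way to make such a risk-based comparison concrete is Monte Carlo sampling over each technology's uncertain performance measure. A sketch with invented cost distributions (the numbers and the Gaussian cost models are assumptions for illustration, not the paper's model set):

```python
import random

def mc_compare(cost_a, cost_b, n=10_000, seed=42):
    """Monte Carlo sketch of a risk-based comparison: sample each
    technology's uncertain cost and estimate the probability that
    technology A beats (costs less than) technology B."""
    rng = random.Random(seed)  # seeded for reproducibility
    wins = sum(cost_a(rng) < cost_b(rng) for _ in range(n))
    return wins / n

# Hypothetical capture-cost models ($/tonne CO2): wider uncertainty
# for the less mature sorbent system, narrower for the solvent system
sorbent = lambda rng: rng.gauss(45.0, 8.0)
solvent = lambda rng: rng.gauss(55.0, 5.0)
p_sorbent_cheaper = mc_compare(sorbent, solvent)
```

Reporting a probability rather than a point estimate keeps the technologies' differing uncertainty levels visible in the comparison.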
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ragan, Eric D; Goodall, John R
2014-01-01
Provenance tools can help capture and represent the history of analytic processes. In addition to supporting analytic performance, provenance tools can be used to support memory of the process and communication of the steps to others. Objective evaluation methods are needed to evaluate how well provenance tools support an analyst's memory and communication of analytic processes. In this paper, we present several methods for the evaluation of process memory, and we discuss the advantages and limitations of each. We discuss methods for determining a baseline process for comparison, and we describe various methods that can be used to elicit process recall, step ordering, and time estimations. Additionally, we discuss methods for conducting quantitative and qualitative analyses of process memory. By organizing possible memory evaluation methods and providing a meta-analysis of the potential benefits and drawbacks of different approaches, this paper can inform study design and encourage objective evaluation of process memory and communication.
Creep-rupture of polymer-matrix composites. [graphite-epoxy laminates
NASA Technical Reports Server (NTRS)
Brinson, H. F.; Griffith, W. I.; Morris, D. H.
1980-01-01
An accelerated characterization method for resin matrix composites is reviewed. Methods for determining modulus and strength master curves are given. Creep rupture analytical models are discussed as applied to polymers and polymer matrix composites. Comparisons between creep rupture experiments and analytical models are presented. The time dependent creep rupture process in graphite epoxy laminates is examined as a function of temperature and stress level.
Bada, J.L.; Hoopes, E.; Darling, D.; Dungworth, G.; Kessels, H.J.; Kvenvolden, K.A.; Blunt, D.J.
1979-01-01
Enantiomeric measurements for aspartic acid, glutamic acid, and alanine in twenty-one different fossil bone samples have been carried out by three different laboratories using different analytical methods. These inter-laboratory comparisons demonstrate that D/L aspartic acid measurements are highly reproducible, whereas the enantiomeric measurements for the other amino acids show a wide variation between the three laboratories. At present, aspartic acid measurements are the most suitable for racemization dating of bone because of their superior analytical precision. © 1979.
Verification of an Analytical Method for Measuring Crystal Nucleation Rates in Glasses from DTA Data
NASA Technical Reports Server (NTRS)
Ranasinghe, K. S.; Wei, P. F.; Kelton, K. F.; Ray, C. S.; Day, D. E.
2004-01-01
A recently proposed analytical (DTA) method for estimating the nucleation rates in glasses has been evaluated by comparing experimental data with numerically computed nucleation rates for a model lithium disilicate glass. The time and temperature dependent nucleation rates were predicted using the model and compared with those values from an analysis of numerically calculated DTA curves. The validity of the numerical approach was demonstrated earlier by a comparison with experimental data. The excellent agreement between the nucleation rates from the model calculations and from the computer generated DTA data demonstrates the validity of the proposed analytical DTA method.
Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations
NASA Technical Reports Server (NTRS)
Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen
2016-01-01
Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations
NASA Astrophysics Data System (ADS)
Lynnes, C.; Little, M. M.; Huang, T.; Jacob, J. C.; Yang, C. P.; Kuo, K. S.
2016-12-01
Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based filesystems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
NASA Astrophysics Data System (ADS)
Khademian, Amir; Abdollahipour, Hamed; Bagherpour, Raheb; Faramarzi, Lohrasb
2017-10-01
In addition to the numerous planning and executive challenges, underground excavation in urban areas is always accompanied by certain destructive effects, especially at the ground surface; ground settlement is the most important of these effects, and it can be estimated by various empirical, analytical, and numerical methods. Since geotechnical models are associated with considerable model uncertainty, this study characterized the model uncertainty of settlement estimation models through a systematic comparison between model predictions and past performance data derived from instrumentation. To do so, the amount of surface settlement induced by excavation of the Qom subway tunnel was estimated via empirical (Peck), analytical (Loganathan and Poulos) and numerical (FDM) methods; the resulting maximum settlement values of the models were 1.86, 2.02 and 1.52 cm, respectively. The comparison of these predicted amounts with the actual data from instrumentation was employed to specify the uncertainty of each model. The numerical model outcomes, with a relative error of 3.8%, best matched the reality, while the analytical method, with a relative error of 27.8%, yielded the highest level of model uncertainty.
Churchwell, Mona I; Twaddle, Nathan C; Meeker, Larry R; Doerge, Daniel R
2005-10-25
Recent technological advances have made available reverse phase chromatographic media with a 1.7 microm particle size along with a liquid handling system that can operate such columns at much higher pressures. This technology, termed ultra performance liquid chromatography (UPLC), offers significant theoretical advantages in resolution, speed, and sensitivity for analytical determinations, particularly when coupled with mass spectrometers capable of high-speed acquisitions. This paper explores the differences in LC-MS performance by conducting a side-by-side comparison of UPLC for several methods previously optimized for HPLC-based separation and quantification of multiple analytes with maximum throughput. In general, UPLC produced significant improvements in method sensitivity, speed, and resolution. Sensitivity increases with UPLC, which were found to be analyte-dependent, were as large as 10-fold and improvements in method speed were as large as 5-fold under conditions of comparable peak separations. Improvements in chromatographic resolution with UPLC were apparent from generally narrower peak widths and from a separation of diastereomers not possible using HPLC. Overall, the improvements in LC-MS method sensitivity, speed, and resolution provided by UPLC show that further advances can be made in analytical methodology to add significant value to hypothesis-driven research.
[Detection of rubella virus RNA in clinical material by real time polymerase chain reaction method].
Domonova, É A; Shipulina, O Iu; Kuevda, D A; Larichev, V F; Safonova, A P; Burchik, M A; Butenko, A M; Shipulin, G A
2012-01-01
Development of a reagent kit for detection of rubella virus RNA in clinical material by PCR-RT. During development and determination of analytical specificity and sensitivity, DNA and RNA of 33 different microorganisms, including 4 rubella strains, were used. Comparison of the analytical sensitivity of virological and molecular-biological methods was performed using rubella virus strains Wistar RA 27/3, M-33, "Orlov", and Judith. Evaluation of the diagnostic informativity of rubella virus RNA isolation from various clinical materials by the PCR-RT method was performed in comparison with determination of virus-specific serum antibodies by enzyme immunoassay. A reagent kit for the detection of rubella virus RNA in clinical material by PCR-RT was developed. Analytical specificity was 100%; analytical sensitivity was 400 virus RNA copies per ml. The analytical sensitivity of the developed technique exceeds that of the Vero E6 cell culture infection method by 1 lg and 3 lg (one and three orders of magnitude) for rubella virus strains Wistar RA 27/3 and "Orlov", and is analogous for the M-33 and Judith strains. Diagnostic specificity is 100%. Diagnostic sensitivity for samples obtained within 5 days of rash onset: for peripheral blood sera, 20.9%; saliva, 92.5%; nasopharyngeal swabs, 70.1%; saliva and nasopharyngeal swabs combined, 97%. Positive and negative predictive values of the results were shown depending on the type of clinical material tested. Application of the reagent kit will make it possible to increase the effectiveness of rubella diagnostics at early stages of the infectious process, to perform timely, high-quality differential diagnostics of exanthema diseases, and to support anti-epidemic measures.
Ulmer, Candice Z; Ragland, Jared M; Koelmel, Jeremy P; Heckert, Alan; Jones, Christina M; Garrett, Timothy J; Yost, Richard A; Bowden, John A
2017-12-19
As advances in analytical separation techniques, mass spectrometry instrumentation, and data processing platforms continue to spur growth in the lipidomics field, more structurally unique lipid species are detected and annotated. The lipidomics community is in need of benchmark reference values to assess the validity of various lipidomics workflows in providing accurate quantitative measurements across the diverse lipidome. LipidQC addresses the harmonization challenge in lipid quantitation by providing a semiautomated process, independent of analytical platform, for visual comparison of experimental results of National Institute of Standards and Technology Standard Reference Material (SRM) 1950, "Metabolites in Frozen Human Plasma", against benchmark consensus mean concentrations derived from the NIST Lipidomics Interlaboratory Comparison Exercise.
Marcelo Ardón; Catherine M. Pringle; Susan L. Eggert
2009-01-01
Comparisons of the effects of leaf litter chemistry on leaf breakdown rates in tropical vs temperate streams are hindered by incompatibility among studies and across sites of analytical methods used to measure leaf chemistry. We used standardized analytical techniques to measure chemistry and breakdown rate of leaves from common riparian tree species at 2 sites, 1...
NASA Technical Reports Server (NTRS)
Zeleznik, Frank J.; Gordon, Sanford
1960-01-01
The Brinkley, Huff, and White methods for chemical-equilibrium calculations were modified and extended in order to permit an analytical comparison. The extended forms of these methods permit condensed species as reaction products, include temperature as a variable in the iteration, and permit arbitrary estimates for the variables. It is analytically shown that the three extended methods can be placed in a form that is independent of components. In this form the Brinkley iteration is identical computationally to the White method, while the modified Huff method differs only slightly from these two. The convergence rates of the modified Brinkley and White methods are identical; and, further, all three methods are guaranteed to converge and will ultimately converge quadratically. It is concluded that no one of the three methods offers any significant computational advantages over the other two.
The importance of quality control in validating concentrations ...
A national-scale survey of 247 contaminants of emerging concern (CECs), including organic and inorganic chemical compounds, and microbial contaminants, was conducted in source and treated drinking water samples from 25 treatment plants across the United States. Multiple methods were used to determine these CECs, including six analytical methods to measure 174 pharmaceuticals, personal care products, and pesticides. A three-component quality assurance/quality control (QA/QC) program was designed for the subset of 174 CECs which allowed us to assess and compare performances of the methods used. The three components included: 1) a common field QA/QC protocol and sample design, 2) individual investigator-developed method-specific QA/QC protocols, and 3) a suite of 46 method comparison analytes that were determined in two or more analytical methods. Overall method performance for the 174 organic chemical CECs was assessed by comparing spiked recoveries in reagent, source, and treated water over a two-year period. In addition to the 247 CECs reported in the larger drinking water study, another 48 pharmaceutical compounds measured did not consistently meet predetermined quality standards. Methodologies that did not seem suitable for these analytes are reviewed. The need to exclude analytes based on method performance demonstrates the importance of additional QA/QC protocols. This paper compares the method performance of the six analytical methods used to measure these 174 contaminants of emerging concern.
Multi-center evaluation of analytical performance of the Beckman Coulter AU5822 chemistry analyzer.
Zimmerman, M K; Friesen, L R; Nice, A; Vollmer, P A; Dockery, E A; Rankin, J D; Zmuda, K; Wong, S H
2015-09-01
Our three academic institutions, Indiana University, Northwestern Memorial Hospital, and Wake Forest, were among the first in the United States to implement the Beckman Coulter AU5822 series chemistry analyzers. We undertook this post-hoc multi-center study by merging our data to determine performance characteristics and the impact of methodology changes on analyte measurement. We independently completed performance validation studies including precision, linearity/analytical measurement range, method comparison, and reference range verification. Complete data sets were available from at least one institution for 66 analytes in the following groups: 51 from all three institutions, and 15 from 1 or 2 institutions, for a total sample size of 12,064. Precision was similar among institutions. Coefficients of variation (CV) were <10% for 97% of analytes. Analytes with CVs >10% included direct bilirubin and digoxin. All analytes exhibited linearity over the analytical measurement range. Method comparison data showed slopes between 0.900 and 1.100 for 87.9% of the analytes. Slopes for amylase, tobramycin and urine amylase were <0.8; the slope for lipase was >1.5, due to known methodology or standardization differences. Consequently, reference ranges of amylase, urine amylase and lipase required only minor or no modification. The four AU5822 analyzers independently evaluated at three sites showed consistent precision, linearity, and correlation results. Since installation, the test results have been well received by clinicians at all three institutions. Copyright © 2015. Published by Elsevier Inc.
Pythagorean fuzzy analytic hierarchy process to multi-criteria decision making
NASA Astrophysics Data System (ADS)
Mohd, Wan Rosanisah Wan; Abdullah, Lazim
2017-11-01
Numerous approaches have been proposed in the literature to determine criteria weights, which are very significant in the process of decision making. One of the most prominent is the analytic hierarchy process (AHP). This method requires decision makers (DMs) to evaluate the decision by forming pairwise comparisons between criteria and alternatives. In classical AHP, the linguistic variables of the pairwise comparisons are presented as crisp values. However, crisp values cannot faithfully represent real problem situations, because linguistic judgment involves uncertainty. For this reason, AHP has been extended by incorporating Pythagorean fuzzy sets; to our knowledge, no prior work in the literature has proposed how to determine criteria weights using AHP under Pythagorean fuzzy sets. In order to solve the MCDM problem, a Pythagorean fuzzy analytic hierarchy process is proposed to determine the weights of the evaluation criteria. Using linguistic variables, pairwise comparisons of the evaluation criteria are made and converted to criteria weights using Pythagorean fuzzy numbers (PFNs). The proposed method is implemented in an evaluation problem in order to demonstrate its applicability. This study shows that the proposed method provides a useful way and a new direction for solving MCDM problems in a Pythagorean fuzzy context.
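The classical AHP step that the Pythagorean fuzzy extension generalizes can be sketched as follows. This is a minimal illustration of deriving a priority weight vector from a crisp pairwise comparison matrix using the row geometric-mean method (one of several standard AHP prioritization schemes, not the paper's fuzzy procedure); the example matrix is hypothetical.

```python
import math

def ahp_weights(matrix):
    """Priority weights from a pairwise comparison matrix via the
    row geometric-mean method: matrix[i][j] states how much more
    important criterion i is than criterion j (reciprocal entries)."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

# A consistent 3x3 comparison: criterion A is 2x B and 4x C; B is 2x C.
pairwise = [
    [1.0, 2.0, 4.0],
    [0.5, 1.0, 2.0],
    [0.25, 0.5, 1.0],
]
weights = ahp_weights(pairwise)  # [4/7, 2/7, 1/7]
```

For a perfectly consistent matrix like this one, the geometric-mean weights coincide with the principal-eigenvector weights; fuzzy extensions replace the crisp entries with membership/non-membership pairs.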
NASA Astrophysics Data System (ADS)
Maschio, Lorenzo; Kirtman, Bernard; Rérat, Michel; Orlando, Roberto; Dovesi, Roberto
2013-10-01
In this work, we validate a new, fully analytical method for calculating Raman intensities of periodic systems, developed and presented in Paper I [L. Maschio, B. Kirtman, M. Rérat, R. Orlando, and R. Dovesi, J. Chem. Phys. 139, 164101 (2013)]. Our validation of this method and its implementation in the CRYSTAL code is done through several internal checks as well as comparison with experiment. The internal checks include consistency of results when increasing the number of periodic directions (from 0D to 1D, 2D, 3D), comparison with numerical differentiation, and a test of the sum rule for derivatives of the polarizability tensor. The choice of basis set as well as the Hamiltonian is also studied. Simulated Raman spectra of α-quartz and of the UiO-66 Metal-Organic Framework are compared with the experimental data.
ERIC Educational Resources Information Center
Johnston, Rhona S.; McGeown, Sarah; Watson, Joyce E.
2012-01-01
A comparison was made of 10-year-old boys and girls who had learnt to read by analytic or synthetic phonics methods as part of their early literacy programmes. The boys taught by the synthetic phonics method had better word reading than the girls in their classes, and their spelling and reading comprehension was as good. In contrast, with analytic…
Vandenabeele-Trambouze, O; Claeys-Bruno, M; Dobrijevic, M; Rodier, C; Borruat, G; Commeyras, A; Garrelly, L
2005-02-01
The need for criteria to compare different analytical methods for measuring extraterrestrial organic matter at ultra-trace levels in relatively small and unique samples (e.g., fragments of meteorites, micrometeorites, planetary samples) is discussed. We emphasize the need to standardize the description of future analyses, and take the first step toward a proposed international laboratory network for performance testing.
de Beer, Alex G F; Samson, Jean-Sebastièn; Hua, Wei; Huang, Zishuai; Chen, Xiangke; Allen, Heather C; Roke, Sylvie
2011-12-14
We present a direct comparison of phase sensitive sum-frequency generation experiments with phase reconstruction obtained by the maximum entropy method. We show that both methods lead to the same complex spectrum. Furthermore, we discuss the strengths and weaknesses of each of these methods, analyzing possible sources of experimental and analytical errors. A simulation program for maximum entropy phase reconstruction is available at: http://lbp.epfl.ch/. © 2011 American Institute of Physics
Jones, Barry R; Schultz, Gary A; Eckstein, James A; Ackermann, Bradley L
2012-10-01
Quantitation of biomarkers by LC-MS/MS is complicated by the presence of endogenous analytes. This challenge is most commonly overcome by calibration using an authentic standard spiked into a surrogate matrix devoid of the target analyte. A second approach involves use of a stable-isotope-labeled standard as a surrogate analyte to allow calibration in the actual biological matrix. For both methods, parallelism between calibration standards and the target analyte in biological matrix must be demonstrated in order to ensure accurate quantitation. In this communication, the surrogate matrix and surrogate analyte approaches are compared for the analysis of five amino acids in human plasma: alanine, valine, methionine, leucine and isoleucine. In addition, methodology based on standard addition is introduced, which enables a robust examination of parallelism in both surrogate analyte and surrogate matrix methods prior to formal validation. Results from additional assays are presented to introduce the standard-addition methodology and to highlight the strengths and weaknesses of each approach. For the analysis of amino acids in human plasma, comparable precision and accuracy were obtained by the surrogate matrix and surrogate analyte methods. Both assays were well within tolerances prescribed by regulatory guidance for validation of xenobiotic assays. When stable-isotope-labeled standards are readily available, the surrogate analyte approach allows for facile method development. By comparison, the surrogate matrix method requires greater up-front method development; however, this deficit is offset by the long-term advantage of simplified sample analysis.
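The standard-addition idea described above can be sketched numerically: known amounts of analyte are spiked into the actual matrix, the response is fitted against the spiked amount, and the endogenous concentration is read off as the x-intercept magnitude. The data below are hypothetical, not from the paper's amino acid assays.

```python
def standard_addition(spiked, response):
    """Estimate endogenous analyte concentration by standard addition.

    Fits response = slope * spiked + intercept by least squares; the
    endogenous concentration is the x-intercept magnitude, intercept/slope.
    """
    n = len(spiked)
    mx = sum(spiked) / n
    my = sum(response) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(spiked, response))
    sxx = sum((x - mx) ** 2 for x in spiked)
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept / slope

# Hypothetical aliquots spiked with 0, 5, 10, 20 concentration units;
# the detector responds linearly (slope 2) to total concentration.
spiked = [0.0, 5.0, 10.0, 20.0]
response = [2.0 * (x + 5.0) for x in spiked]  # endogenous level is 5 units
endogenous = standard_addition(spiked, response)  # 5.0
```

Linearity of the fit across the spiking range is itself the parallelism check the authors advocate performing before formal validation.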
Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition
NASA Technical Reports Server (NTRS)
Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd
2015-01-01
Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.
Comparison of analytical methods for calculation of wind loads
NASA Technical Reports Server (NTRS)
Minderman, Donald J.; Schultz, Larry L.
1989-01-01
The following analysis is a comparison of analytical methods for calculation of wind load pressures. The analytical methods specified in ASCE Paper No. 3269, ANSI A58.1-1982, the Standard Building Code, and the Uniform Building Code were analyzed using various hurricane speeds to determine the differences in the calculated results. The winds used for the analysis ranged from 100 mph to 125 mph and were applied inland from the shoreline of a large open body of water (i.e., an enormous lake or the ocean) a distance of 1500 feet or ten times the height of the building or structure considered. For a building or structure less than or equal to 250 feet in height acted upon by a wind greater than or equal to 115 mph, it was determined that the method specified in ANSI A58.1-1982 calculates a larger wind load pressure than the other methods. For a building or structure between 250 feet and 500 feet tall acted upon by a wind ranging from 100 mph to 110 mph, there is no clear choice of which method to use; for these cases, factors that must be considered are the steady-state or peak wind velocity, the geographic location, the distance from a large open body of water, and the expected design life and its risk factor.
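All of the methods compared above start from the same stagnation-pressure term. A minimal sketch of that common term follows; the full code pressures multiply it by exposure, gust, and pressure coefficients that differ between the standards and are omitted here.

```python
def velocity_pressure_psf(v_mph):
    """Basic velocity pressure q = 0.00256 * V**2 (q in psf, V in mph),
    the stagnation-pressure term shared by the US wind-load standards.
    Exposure, gust, and pressure coefficients are intentionally omitted."""
    return 0.00256 * v_mph ** 2

q_100 = velocity_pressure_psf(100.0)  # 25.6 psf
q_125 = velocity_pressure_psf(125.0)  # 40.0 psf
```

The quadratic dependence on wind speed is why the 100 mph to 125 mph range studied spans a better than 50% increase in basic pressure.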
Comparisons of Exploratory and Confirmatory Factor Analysis.
ERIC Educational Resources Information Center
Daniel, Larry G.
Historically, most researchers conducting factor analysis have used exploratory methods. However, more recently, confirmatory factor analytic methods have been developed that can directly test theory either during factor rotation using "best fit" rotation methods or during factor extraction, as with the LISREL computer programs developed…
Limited Rationality and Its Quantification Through the Interval Number Judgments With Permutations.
Liu, Fang; Pedrycz, Witold; Zhang, Wei-Guo
2017-12-01
The relative importance of alternatives expressed in terms of interval numbers in the fuzzy analytic hierarchy process aims to capture the uncertainty experienced by decision makers (DMs) when making a series of comparisons. Under the assumption of full rationality, the judgments of DMs in the typical analytic hierarchy process could be consistent. However, since uncertainty in articulating the opinions of DMs is unavoidable, interval number judgments are associated with limited rationality. In this paper, we investigate the concept of limited rationality by introducing interval multiplicative reciprocal comparison matrices. By analyzing the consistency of these matrices, it is observed that interval number judgments are inherently inconsistent. By considering permutations of the alternatives, the concepts of approximation-consistency and acceptable approximation-consistency of interval multiplicative reciprocal comparison matrices are proposed, and an exchange method is designed to generate all the permutations. A novel method of determining the interval weight vector is proposed that accounts for randomness in comparing alternatives, and a new algorithm for solving decision-making problems with interval multiplicative reciprocal preference relations is provided. Two numerical examples illustrate the proposed approach and offer a comparison with methods available in the literature.
Coding and Commonality Analysis: Non-ANOVA Methods for Analyzing Data from Experiments.
ERIC Educational Resources Information Center
Thompson, Bruce
The advantages and disadvantages of three analytic methods used to analyze experimental data in educational research are discussed. The same hypothetical data set is used with all methods for a direct comparison. The Analysis of Variance (ANOVA) method and its several analogs are collectively labeled OVA methods and are evaluated. Regression…
Alejo, Luz; Atkinson, John; Guzmán-Fierro, Víctor; Roeckel, Marlene
2018-05-16
Computational self-adapting methods (Support Vector Machines, SVM) are compared with an analytical method in effluent composition prediction of a two-stage anaerobic digestion (AD) process. Experimental data for the AD of poultry manure were used. The analytical method considers the protein as the only source of ammonia production in AD after degradation. Total ammonia nitrogen (TAN), total solids (TS), chemical oxygen demand (COD), and total volatile solids (TVS) were measured in the influent and effluent of the process. The TAN concentration in the effluent was predicted, this being the most inhibiting and polluting compound in AD. Despite the limited data available, the SVM-based model outperformed the analytical method for the TAN prediction, achieving a relative average error of 15.2% against 43% for the analytical method. Moreover, SVM showed higher prediction accuracy in comparison with Artificial Neural Networks. This result reveals the future promise of SVM for prediction in non-linear and dynamic AD processes.
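The figure of merit quoted above (relative average error of 15.2% vs. 43%) can be computed as a mean absolute relative error; the exact definition used in the paper is not stated here, so this parameterization and the data are illustrative assumptions.

```python
def relative_average_error(predicted, observed):
    """Mean absolute relative error, in percent, between model
    predictions and measured values (assumed definition)."""
    errors = [abs(p - o) / abs(o) for p, o in zip(predicted, observed)]
    return 100.0 * sum(errors) / len(errors)

# Hypothetical effluent TAN measurements (g/L) and model predictions.
observed = [1.0, 2.0, 4.0]
predicted = [1.1, 1.8, 4.4]
rae = relative_average_error(predicted, observed)  # 10.0 (%)
```

Normalizing each error by the measured value makes the metric comparable across effluent samples with very different TAN levels.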
NASA Astrophysics Data System (ADS)
Haritan, Idan; Moiseyev, Nimrod
2017-07-01
Resonances play a major role in a large variety of fields in physics and chemistry. Accordingly, there is a growing interest in methods designed to calculate them. Recently, Landau et al. proposed a new approach to analytically dilate a single eigenvalue from the stabilization graph into the complex plane. This approach, termed Resonances Via Padé (RVP), utilizes the Padé approximant and is based on a unique analysis of the stabilization graph. Yet, analytic continuation of eigenvalues from the stabilization graph into the complex plane is not a new idea. In 1975, Jordan suggested an analytic continuation method based on the branch point structure of the stabilization graph. The method was later modified by McCurdy and McNutt, and it is still being used today. We refer to this method as the Truncated Characteristic Polynomial (TCP) method. In this manuscript, we perform an in-depth comparison between the RVP and the TCP methods. We demonstrate that while both methods are important and complementary, the advantage of one method over the other is problem-dependent. Illustrative examples are provided in the manuscript.
COMPARISON OF METHODS TO DETERMINE OXYGEN DEMAND FOR BIOREMEDIATION OF A FUEL CONTAMINATED AQUIFER
Four analytical methods were compared for estimating concentrations of fuel contaminants in subsurface core samples. The methods were total organic carbon, chemical oxygen demand, oil and grease, and a solvent extraction of fuel hydrocarbons combined with a gas chromatographic te...
Watts, R R; Langone, J J; Knight, G J; Lewtas, J
1990-01-01
A two-day technical workshop was convened November 10-11, 1986, to discuss analytical approaches for determining trace amounts of cotinine in human body fluids resulting from passive exposure to environmental tobacco smoke (ETS). The workshop, jointly sponsored by the U.S. Environmental Protection Agency and Centers for Disease Control, was attended by scientists with expertise in cotinine analytical methodology and/or the conduct of human monitoring studies related to ETS. The workshop format included technical presentations, separate panel discussions on chromatography and immunoassay analytical approaches, and group discussions related to the quality assurance/quality control aspects of future monitoring programs. This report presents the consensus reached by the workshop panel participants on general issues, and also a detailed comparison of several analytical approaches being used by the various represented laboratories. The salient features of the chromatography and immunoassay analytical methods are discussed separately. PMID:2190812
Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G; Sandberg, Sverre; Sölétormos, György
2018-01-01
Background: Many clinical decisions are based on comparison of patient results with reference intervals. Therefore, an estimation of the analytical performance specifications for the quality that would be required to allow sharing common reference intervals is needed. The International Federation of Clinical Chemistry (IFCC) recommended a minimum of 120 reference individuals to establish reference intervals. This number implies a certain level of quality, which could then be used for defining analytical performance specifications as the maximum combination of analytical bias and imprecision required for sharing common reference intervals, the aim of this investigation. Methods: Two methods were investigated for defining the maximum combination of analytical bias and imprecision that would give the same quality of common reference intervals as the IFCC recommendation. Method 1 is based on a formula for the combination of analytical bias and imprecision, and Method 2 is based on the Microsoft Excel formula NORMINV, including the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results: Method 2 gives the correct results with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion: The Microsoft Excel formula NORMINV is useful for the estimation of analytical performance specifications for both Gaussian and log-Gaussian distributions of reference intervals.
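The kind of calculation Excel's NORMINV/NORMDIST supports here can be sketched with Python's standard-library `NormalDist`: given a normalized analytical bias and an analytical standard deviation (both relative to the reference population s.d.), compute the fraction of results falling outside the central 95% reference limits. The parameterization below is an illustrative assumption, not the paper's exact formula.

```python
import math
from statistics import NormalDist

def fraction_outside(bias, analytical_sd, z=1.96):
    """Fraction of a Gaussian population outside +/- z reference limits
    when the assay adds a normalized bias and analytical imprecision to
    a unit-s.d. reference population (illustrative model, not the
    paper's Method 1/2 formulae)."""
    total_sd = math.sqrt(1.0 + analytical_sd ** 2)
    dist = NormalDist(mu=bias, sigma=total_sd)
    return dist.cdf(-z) + (1.0 - dist.cdf(z))

# With zero bias and zero analytical imprecision, the nominal 5% of
# results fall outside the central 95% limits; any added bias or
# imprecision pushes the fraction above 5%.
frac_ideal = fraction_outside(0.0, 0.0)
frac_biased = fraction_outside(0.5, 0.3)
```

The same machinery works on log-transformed results for log-Gaussian reference distributions, which is why the paper's formulae are identical in both cases.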
A comparison of experiment and theory for sound propagation in variable area ducts
NASA Technical Reports Server (NTRS)
Nayfeh, A. H.; Kaiser, J. E.; Marshall, R. L.; Hurst, C. J.
1980-01-01
An experimental and analytical program has been carried out to evaluate sound suppression techniques in ducts that produce refraction effects due to axial velocity gradients. The analytical program employs a computer code based on the method of multiple scales to calculate the influence of axial variations due to slow changes in the cross-sectional area as well as transverse gradients due to the wall boundary layers. Detailed comparisons between the analytical predictions and the experimental measurements have been made. The circumferential variations of pressure amplitudes and phases at several axial positions have been examined in straight and variable area ducts, with hard walls and lined sections, and with and without a mean flow. Reasonable agreement between the theoretical and experimental results has been found.
Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Y.; Keller, J.; Wallen, R.
2015-02-01
Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.
Sando, Yusuke; Barada, Daisuke; Jackin, Boaz Jessie; Yatagai, Toyohiko
2017-07-10
This study proposes a method to reduce the calculation time and memory usage required for calculating cylindrical computer-generated holograms. The wavefront on the cylindrical observation surface is represented as a convolution integral in the 3D Fourier domain. The Fourier transform of the kernel function involved in this convolution integral is performed analytically using a Bessel function expansion. The analytical solution can drastically reduce the calculation time and the memory usage at no additional cost, compared with the numerical method that uses the fast Fourier transform to transform the kernel function. In this study, we present the analytical derivation, the efficient calculation of the Bessel function series, and a numerical simulation. Furthermore, we demonstrate the effectiveness of the analytical solution through comparisons of calculation time and memory usage.
Agopian, A J; Evans, Jane A; Lupo, Philip J
2018-01-15
It is estimated that 20 to 30% of infants with birth defects have two or more birth defects. Among these infants with multiple congenital anomalies (MCA), co-occurring anomalies may represent either chance (i.e., unrelated etiologies) or pathogenically associated patterns of anomalies. While some MCA patterns have been recognized and described (e.g., known syndromes), others have not been identified or characterized. Elucidating these patterns may result in a better understanding of the etiologies of these MCAs. This article reviews the literature with regard to analytic methods that have been used to evaluate patterns of MCAs, in particular those using birth defect registry data. A popular method for MCA assessment involves a comparison of the observed to expected ratio for a given combination of MCAs, or one of several modified versions of this comparison. Other methods include the use of numerical taxonomy or other clustering techniques, multiple regression analysis, and log-linear analysis. Advantages and disadvantages of these approaches, as well as specific applications, are outlined. Despite the availability of multiple analytic approaches, relatively few MCA combinations have been assessed. The availability of large birth defects registries and computing resources that allow for automated, big-data strategies for prioritizing MCA patterns may provide new avenues for better understanding the co-occurrence of birth defects. Thus, the selection of an analytic approach may depend on several considerations. Birth Defects Research 110:5-11, 2018. © 2017 Wiley Periodicals, Inc.
Systematic comparison of static and dynamic headspace sampling techniques for gas chromatography.
Kremser, Andreas; Jochmann, Maik A; Schmidt, Torsten C
2016-09-01
Six automated, headspace-based sample preparation techniques were used to extract volatile analytes from water with the goal of establishing a systematic comparison between commonly available instrumental alternatives. To that end, these six techniques were used in conjunction with the same gas chromatography instrument for analysis of a common set of volatile organic compound (VOC) analytes. The methods were thereby divided into three classes: static sampling (by syringe or loop), static enrichment (SPME and PAL SPME Arrow), and dynamic enrichment (ITEX and trap sampling). For PAL SPME Arrow, different sorption phase materials were also included in the evaluation. To enable an effective comparison, method detection limits (MDLs), relative standard deviations (RSDs), and extraction yields were determined and are discussed for all techniques. While static sampling techniques exhibited sufficient extraction yields (approx. 10-20 %) to be reliably used down to approx. 100 ng L⁻¹, enrichment techniques displayed extraction yields of up to 80 %, resulting in MDLs down to the picogram per liter range. RSDs for all techniques were below 27 %. The choice among the different instrumental modes of operation (the aforementioned classes) was thereby the most influential parameter in terms of extraction yields and MDLs. Individual methods within each class showed smaller deviations, and the smallest influence was observed when evaluating different sorption phase materials for the individual enrichment techniques. The option of selecting specialized sorption phase materials may, however, be more important when analyzing analytes with different properties such as high polarity or the capability of specific molecular interactions. Graphical Abstract: PAL SPME Arrow during the extraction of volatile analytes from the headspace of an aqueous sample.
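MDL figures like those quoted above are typically derived from replicate low-level measurements. A generic EPA-style sketch follows; the replicate values are hypothetical and the exact procedure used in the paper may differ:

```python
from statistics import stdev

# One-tailed 99% Student's t values keyed by degrees of freedom (n - 1),
# in the style of EPA 40 CFR 136 Appendix B.
T_99 = {6: 3.143, 7: 2.998, 8: 2.896, 9: 2.821}

def method_detection_limit(replicates):
    """EPA-style MDL from replicate low-level spike measurements:
    MDL = t(n-1, 0.99) * s, where s is the replicate standard deviation."""
    n = len(replicates)
    return T_99[n - 1] * stdev(replicates)

# Seven hypothetical replicate spikes near the expected MDL (ng/L):
spikes = [102.0, 98.5, 101.2, 97.8, 100.4, 99.1, 100.9]
print(round(method_detection_limit(spikes), 1))  # → 4.9
```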
Witt, E. C.; Hippe, D.J.; Giovannitti, R.M.
1992-01-01
A total of 304 nutrient samples were collected from May 1990 through September 1991 to determine concentrations and loads of nutrients in water discharged from two spring basins in Cumberland County, Pa. Fifty-four percent of these nutrient samples were for the evaluation of (1) laboratory consistency, (2) container and preservative cleanliness, (3) maintenance of analyte representativeness as affected by three different preservation methods, and (4) comparison of analyte results with the "Most Probable Value" for Standard Reference Water Samples. Results of 37 duplicate analyses indicate that the Pennsylvania Department of Environmental Resources, Bureau of Laboratories (principal laboratory) remained within its ±10 percent goal for all but one analyte. Results of the blank analysis show that the sampling containers did not compromise the water quality. However, mercuric-chloride-preservation blanks apparently contained measurable ammonium in four of five samples and ammonium plus organic nitrogen in two of five samples. Interlaboratory results indicate substantial differences in the determination of nitrate and ammonium plus organic nitrogen between the principal laboratory and the U.S. Geological Survey National Water-Quality Laboratory. In comparison with the U.S. Environmental Protection Agency Quality-Control Samples, the principal laboratory was sufficiently accurate in its determination of nutrient analytes. Analysis of replicate samples indicated that sulfuric-acid preservative best maintained the representativeness of the analytes nitrate and ammonium plus organic nitrogen, whereas mercuric chloride best maintained the representativeness of orthophosphate.
Comparison of nutrient analyte determinations with the Most Probable Value for each preservation method shows that two of five analytes with no chemical preservative compare well, three of five with mercuric-chloride preservative compare well, and three of five with sulfuric-acid preservative compare well.
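The duplicate-analysis check against the laboratory's ±10 percent goal can be sketched with a relative-percent-difference calculation. This is a generic QA illustration with hypothetical concentrations, not the report's actual procedure:

```python
def relative_percent_difference(a, b):
    """RPD between duplicate determinations: the absolute difference
    as a percentage of the duplicate mean."""
    return abs(a - b) / ((a + b) / 2.0) * 100.0

def within_goal(a, b, goal_pct=10.0):
    """True if a duplicate pair meets the laboratory's precision goal."""
    return relative_percent_difference(a, b) <= goal_pct

# Hypothetical duplicate nitrate results (mg/L) agreeing within 10 percent:
print(within_goal(2.05, 2.12))  # → True
```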
The general 2-D moments via integral transform method for acoustic radiation and scattering
NASA Astrophysics Data System (ADS)
Smith, Jerry R.; Mirotznik, Mark S.
2004-05-01
The moments via integral transform method (MITM) is a technique to analytically reduce the 2-D method of moments (MoM) impedance double integrals into single integrals. By using a special integral representation of the Green's function, the impedance integral can be analytically simplified to a single integral in terms of transformed shape and weight functions. The reduced expression requires fewer computations and reduces the fill times of the MoM impedance matrix. Furthermore, the resulting integral is analytic for nearly arbitrary shape and weight function sets. The MITM technique is developed for mixed boundary conditions, and predictions with basic shape and weight function sets are presented. Comparisons of accuracy and speed between MITM and brute force are presented. [Work sponsored by ONR and the NSWCCD ILIR Board.]
Chen, Jun; Quan, Wenting; Cui, Tingwei
2015-01-01
In this study, two sample semi-analytical algorithms and one new unified multi-band semi-analytical algorithm (UMSA) for estimating chlorophyll-a (Chla) concentration were constructed by specifying optimal wavelengths. The three semi-analytical algorithms, namely the three-band semi-analytical algorithm (TSA), the four-band semi-analytical algorithm (FSA), and the UMSA algorithm, were calibrated and validated with a dataset collected in the Yellow River Estuary between September 1 and 10, 2009. A comparison of the accuracy of the TSA, FSA, and UMSA algorithms showed that the UMSA algorithm outperformed the other two. Using the UMSA algorithm to retrieve Chla concentration in the Yellow River Estuary decreased the NRMSE (normalized root mean square error) by 25.54% compared with the FSA algorithm, and by 29.66% compared with the TSA algorithm. These are significant improvements upon previous methods. Additionally, the study revealed that the TSA and FSA algorithms are merely more specific forms of the UMSA algorithm. Owing to the special form of the UMSA algorithm, if the same bands were used for both the TSA and UMSA algorithms, or for the FSA and UMSA algorithms, the UMSA algorithm would theoretically produce superior results. Thus, good results may also be produced if the UMSA algorithm were applied to predict Chla concentration for the datasets of Gitelson et al. (2008) and Le et al. (2009).
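The general shape of these band-ratio algorithms can be sketched as follows. The reflectance values and the range-normalized NRMSE convention are illustrative assumptions, not the study's calibration:

```python
from math import sqrt

def three_band_index(r1, r2, r3):
    """General form of a Gitelson-type three-band index:
    [R(l1)^-1 - R(l2)^-1] * R(l3)."""
    return (1.0 / r1 - 1.0 / r2) * r3

def nrmse(predicted, observed):
    """Normalized root mean square error (percent), the accuracy measure
    used to compare the algorithms (here normalized by the observed range)."""
    rmse = sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                / len(observed))
    return 100.0 * rmse / (max(observed) - min(observed))

# Hypothetical red / red-edge / NIR reflectances (not data from the study):
print(round(three_band_index(0.02, 0.025, 0.05), 3))  # → 0.5
```

A Chla estimate would then come from a linear (or power-law) calibration of this index against in-situ measurements.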
Chen, Xi; Zhao, Liu; Özdemir, Mujgan Sagir; Liang, Haiming
2018-04-05
The resource allocation of air pollution treatment in China is a complex problem, since many alternatives are available and many criteria influence one another. A number of stakeholders participate in this issue, holding different opinions because of the benefits they value. A method is therefore needed, based on the analytic network process (ANP) and large-group decision-making (LGDM), to rank the alternatives while considering interdependent criteria and stakeholders' opinions. In this method, the criteria related to air pollution treatment are examined by experts. Then, the network structure of the problem is constructed based on the relationships between the criteria. Further, every participant in each group provides comparison matrices by judging the relative importance of the criteria with respect to a given criterion (or the goal), and the geometric average comparison matrix of each group is obtained. The decision weight of each group is derived by combining a subjective weight and an objective weight, in which the subjective weight is provided by the organizers, while the objective weight is determined from the consensus levels of the groups. The final comparison matrices are obtained from the geometric average of the comparison matrices and the decision weights. Next, the resource allocation is made according to the priorities of the alternatives using the Super Decisions software. Finally, an example is given to illustrate the use of the proposed method.
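The group aggregation step, taking the element-wise geometric mean of participants' pairwise comparison matrices and then extracting priorities, can be sketched as follows. This is a minimal AHP/ANP-style illustration, not the authors' implementation:

```python
from math import prod

def geometric_mean_matrix(matrices):
    """Element-wise geometric mean of the pairwise comparison matrices
    provided by the participants of one group."""
    n, k = len(matrices[0]), len(matrices)
    return [[prod(m[i][j] for m in matrices) ** (1.0 / k)
             for j in range(n)] for i in range(n)]

def priority_vector(matrix, iters=50):
    """Principal eigenvector of a positive comparison matrix via power
    iteration, normalized to sum to 1 (the local priorities)."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# A perfectly consistent 3-criterion matrix gives priorities in ratio 4:2:1.
m = [[1, 2, 4], [0.5, 1, 2], [0.25, 0.5, 1]]
print([round(x, 3) for x in priority_vector(m)])  # → [0.571, 0.286, 0.143]
```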
Assessment of regional air quality by a concentration-dependent Pollution Permeation Index
Liang, Chun-Sheng; Liu, Huan; He, Ke-Bin; Ma, Yong-Liang
2016-01-01
Although air quality monitoring networks have been greatly improved, interpreting their expanding data in simple and efficient ways remains challenging, so new analytical methods are needed. We developed such a method based on the comparison of pollutant concentrations between target and circum areas (circum comparison for short), and tested its applications by assessing the air pollution in Jing-Jin-Ji, the Yangtze River Delta, the Pearl River Delta, and Cheng-Yu, China during 2015. We found that the circum comparison can instantly judge whether a city is a pollution permeation donor or a pollution permeation receptor through a Pollution Permeation Index (PPI). Furthermore, a PPI-related estimated concentration (original concentration plus halved average concentration difference) can be used to identify some overestimations and underestimations. It can also help explain pollution processes (e.g., Beijing's PM2.5 may be largely promoted by non-local SO2), though it is not aimed at that. Moreover, it is applicable to any region, easy to handle, and able to inspire more new analytical methods. These advantages, despite the method's disadvantages in jointly considering the complex physical and chemical factors at work, demonstrate that the PPI-based circum comparison can be used efficiently in assessing air pollution, yielding instructive results without the need for complex operations. PMID:27731344
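The donor/receptor comparison described in the abstract can be sketched as below. The sign convention and normalization are assumptions for illustration; the paper's exact PPI definition may differ:

```python
def pollution_permeation_index(target, circum):
    """Difference between a target city's pollutant concentration and the
    mean concentration of its surrounding ('circum') cities. A positive
    value suggests a pollution permeation donor, a negative value a
    receptor (sign convention assumed here for illustration)."""
    return target - sum(circum) / len(circum)

def estimated_concentration(target, circum):
    """'Original concentration plus halved average concentration
    difference', as described in the abstract."""
    return target + pollution_permeation_index(target, circum) / 2.0

# A city at 80 ug/m3 surrounded by cities averaging 60 ug/m3 acts as a donor:
print(pollution_permeation_index(80.0, [55.0, 60.0, 65.0]))  # → 20.0
```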
A Comparison of Interactional Aerodynamics Methods for a Helicopter in Low Speed Flight
NASA Technical Reports Server (NTRS)
Berry, John D.; Letnikov, Victor; Bavykina, Irena; Chaffin, Mark S.
1998-01-01
Recent advances in computing subsonic flow have been applied to helicopter configurations with various degrees of success. This paper is a comparison of two specific methods applied to a particularly challenging regime of helicopter flight, very low speeds, where the interaction of the rotor wake and the fuselage are most significant. Comparisons are made between different methods of predicting the interactional aerodynamics associated with a simple generic helicopter configuration. These comparisons are made using fuselage pressure data from a Mach-scaled powered model helicopter with a rotor diameter of approximately 3 meters. The data shown are for an advance ratio of 0.05 with a thrust coefficient of 0.0066. The results of this comparison show that in this type of complex flow both analytical techniques have regions where they are more accurate in matching the experimental data.
Analytical methods for the development of Reynolds stress closures in turbulence
NASA Technical Reports Server (NTRS)
Speziale, Charles G.
1990-01-01
Analytical methods for the development of Reynolds stress models in turbulence are reviewed in detail. Zero, one and two equation models are discussed along with second-order closures. A strong case is made for the superior predictive capabilities of second-order closure models in comparison to the simpler models. The central points are illustrated by examples from both homogeneous and inhomogeneous turbulence. A discussion of the author's views concerning the progress made in Reynolds stress modeling is also provided along with a brief history of the subject.
Periodized Daubechies wavelets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Restrepo, J.M.; Leaf, G.K.; Schlossnagle, G.
1996-03-01
The properties of periodized Daubechies wavelets on [0,1] are detailed and compared with their counterparts which form a basis for L²(R). Numerical examples illustrate the analytical estimates for convergence and demonstrate, by comparison with Fourier spectral methods, the superiority of wavelet projection methods for approximations. The analytical solution to inner products of periodized wavelets and their derivatives, which are known as connection coefficients, is presented, and their use is illustrated in the approximation of two commonly used differential operators. The periodization of the connection coefficients in Galerkin schemes is presented in detail.
Approaching the Limit in Atomic Spectrochemical Analysis.
ERIC Educational Resources Information Center
Hieftje, Gary M.
1982-01-01
To assess the ability of current analytical methods to approach the single-atom detection level, theoretical and experimentally determined detection levels are presented for several chemical elements. A comparison of these methods shows that the most sensitive atomic spectrochemical technique currently available is based on emission from…
Numerical modeling and analytical evaluation of light absorption by gold nanostars
NASA Astrophysics Data System (ADS)
Zarkov, Sergey; Akchurin, Georgy; Yakunin, Alexander; Avetisyan, Yuri; Akchurin, Garif; Tuchin, Valery
2018-04-01
In this paper, the regularity of local light absorption by a model of gold nanostars (AuNSts) is studied by numerical simulation. The mutual diffraction influence of the individual geometric fragments of AuNSts is analyzed. A comparison is made with an approximate analytical approach for estimating the average bulk density of absorbed power and the total power absorbed by individual geometric fragments of AuNSts. It is shown that the results of the approximate analytical estimate are in qualitative agreement with the numerical calculations of light absorption by AuNSts.
Brooks, M.H.; Schroder, L.J.; Malo, B.A.
1985-01-01
Four laboratories were evaluated in their analysis of identical natural and simulated precipitation water samples. Interlaboratory comparability was evaluated using analysis of variance coupled with Duncan's multiple range test, and linear-regression models describing the relations between individual laboratory analytical results for natural precipitation samples. Results of the statistical analyses indicate that certain pairs of laboratories produce different results when analyzing identical samples. Analyte bias for each laboratory was examined using analysis of variance coupled with Duncan's multiple range test on data produced by the laboratories from the analysis of identical simulated precipitation samples. Bias for a given analyte produced by a single laboratory is indicated when the laboratory mean for that analyte is significantly different from the mean for the most-probable analyte concentrations in the simulated precipitation samples. Ion-chromatographic methods for the determination of chloride, nitrate, and sulfate were compared with the colorimetric methods that were also in use during the study period. Comparisons were made using analysis of variance coupled with Duncan's multiple range test for means produced by the two methods. Analyte precision for each laboratory was estimated by calculating a pooled variance for each analyte. Estimated analyte precisions were compared using F-tests, and differences in analyte precisions for laboratory pairs are reported. (USGS)
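The precision comparison described, pooling variance across replicate sets and comparing laboratory pairs with an F-test, can be sketched with generic statistics and hypothetical data:

```python
def pooled_variance(replicate_sets):
    """Pooled variance across replicate sets: total sum of squared
    deviations divided by total degrees of freedom."""
    ss, df = 0.0, 0
    for g in replicate_sets:
        mean = sum(g) / len(g)
        ss += sum((x - mean) ** 2 for x in g)
        df += len(g) - 1
    return ss / df

def f_ratio(var_a, var_b):
    """F statistic for comparing two estimated analyte precisions: larger
    variance over smaller, to be compared against a critical F value."""
    return max(var_a, var_b) / min(var_a, var_b)

# Two hypothetical replicate sets for one analyte at one laboratory:
print(pooled_variance([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]]))  # → 2.5
```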
An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1991-01-01
Continuing studies associated with the development of the quasi-analytical (QA) sensitivity method for three-dimensional transonic flow about wings are presented. Initial results using the quasi-analytical approach were obtained and compared to those computed using the finite difference (FD) approach. The basic goals achieved were: (1) carrying out various debugging operations pertaining to the quasi-analytical method; (2) addition of section design variables to the sensitivity equation in the form of multiple right-hand sides; (3) reconfiguring the analysis/sensitivity package in order to facilitate the execution of analysis/FD/QA test cases; and (4) enhancing the display of output data to allow careful examination of the results and to permit various comparisons of sensitivity derivatives obtained using the FD/QA methods to be conducted easily and quickly. In addition to discussing the above goals, the results of executing subcritical and supercritical test cases are presented.
Batt, Angela L; Furlong, Edward T; Mash, Heath E; Glassmeyer, Susan T; Kolpin, Dana W
2017-02-01
A national-scale survey of 247 contaminants of emerging concern (CECs), including organic and inorganic chemical compounds, and microbial contaminants, was conducted in source and treated drinking water samples from 25 treatment plants across the United States. Multiple methods were used to determine these CECs, including six analytical methods to measure 174 pharmaceuticals, personal care products, and pesticides. A three-component quality assurance/quality control (QA/QC) program was designed for the subset of 174 CECs which allowed us to assess and compare performances of the methods used. The three components included: 1) a common field QA/QC protocol and sample design, 2) individual investigator-developed method-specific QA/QC protocols, and 3) a suite of 46 method comparison analytes that were determined in two or more analytical methods. Overall method performance for the 174 organic chemical CECs was assessed by comparing spiked recoveries in reagent, source, and treated water over a two-year period. In addition to the 247 CECs reported in the larger drinking water study, another 48 pharmaceutical compounds measured did not consistently meet predetermined quality standards. Methodologies that did not seem suitable for these analytes are overviewed. The need to exclude analytes based on method performance demonstrates the importance of additional QA/QC protocols. Published by Elsevier B.V.
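The spiked-recovery comparison used above to assess overall method performance can be sketched with the generic recovery formula; the numbers are hypothetical:

```python
def percent_recovery(measured, background, spiked):
    """Spiked-recovery figure of merit:
    100 * (measured - background) / amount spiked."""
    return 100.0 * (measured - background) / spiked

# A 50 ng/L spike into source water with a 1 ng/L ambient background,
# measured at 47 ng/L after spiking:
print(percent_recovery(47.0, 1.0, 50.0))  # → 92.0
```

Tracking such recoveries in reagent, source, and treated water over time is what allows analytes with inconsistent method performance to be flagged and excluded.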
A Comparison of the Bounded Derivative and the Normal Mode Initialization Methods Using Real Data
NASA Technical Reports Server (NTRS)
Semazzi, F. H. M.; Navon, I. M.
1985-01-01
Browning et al. (1980) proposed an initialization method called the bounded derivative method (BDI). They used analytical data to test the new method. Kasahara (1982) theoretically demonstrated the equivalence between BDI and the well-known nonlinear normal mode initialization method (NMI). The purposes of this study are the extension of the application of BDI to real data and comparison with NMI. The unbalanced initial state (UBD) is data from 00Z, January 1979, which were interpolated from the adjacent sigma levels of the GLAS GCM to the 300 mb surface. The global barotropic model described by Takacs and Balgovind (1983) is used. Orographic forcing is explicitly included in the model. Many comparisons are performed between various quantities. However, we only present a comparison of the time evolution at two grid points, A(50 S, 90 E) and B(10 S, 20 E), which represent low- and middle-latitude locations. To facilitate a more complete comparison, an initialization experiment based on the classical balance equation (CBE) was also included.
Tesija Kuna, Andrea; Dukic, Kristina; Nikolac Gabaj, Nora; Miler, Marijana; Vukasovic, Ines; Langer, Sanja; Simundic, Ana-Maria; Vrkic, Nada
2018-03-08
To compare the analytical performances of the enzymatic method (EM) and capillary electrophoresis (CE) for hemoglobin A1c (HbA1c) measurement. Imprecision, carryover, stability, linearity, method comparison, and interferences were evaluated for HbA1c via EM (Abbott Laboratories, Inc) and CE (Sebia). Both methods have shown overall within-laboratory imprecision of less than 3% for International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) units (<2% National Glycohemoglobin Standardization Program [NGSP] units). Carryover effects were within acceptable criteria. The linearity of both methods has proven to be excellent (R2 = 0.999). Significant proportional and constant difference were found for EM, compared with CE, but were not clinically relevant (<5 mmol/mol; NGSP <0.5%). At the clinically relevant HbA1c concentration, stability observed with both methods was acceptable (bias, <3%). Triglyceride levels of 8.11 mmol per L or greater showed to interfere with EM and fetal hemoglobin (HbF) of 10.6% or greater with CE. The enzymatic method proved to be comparable to the CE method in analytical performances; however, certain interferences can influence the measurements of each method.
A Critical Comparison of Some Methods for Interpolation of Scattered Data
1979-12-01
because faster evaluation of the local interpolants is possible. All things considered, the method of choice here seems to be the Modified Quadratic… ["…topography and other irregular surfaces," J. of Geophysical Research 76 (1971) 1905-1915; [23] HARDY, Rolland L., "Analytical topographic surfaces by…"]
Analytical methods for determining individual aldehyde, ketone, and alcohol emissions from gasoline-, methanol-, and variable-fueled vehicles are described. These methods were used in the Auto/Oil Air quality Improvement Research Program to provide emission data for comparison of...
Alberer, Martin; Hoefele, Julia; Benz, Marcus R; Bökenkamp, Arend; Weber, Lutz T
2017-01-01
Measurement of inulin clearance is considered to be the gold standard for determining kidney function in children, but this method is time consuming and expensive. The glomerular filtration rate (GFR) is on the other hand easier to calculate by using various creatinine- and/or cystatin C (Cys C)-based formulas. However, for the determination of serum creatinine (Scr) and Cys C, different and non-interchangeable analytical methods exist. Given the fact that different analytical methods for the determination of creatinine and Cys C were used in order to validate existing GFR formulas, clinicians should be aware of the type used in their local laboratory. In this study, we compared GFR results calculated on the basis of different GFR formulas and either used Scr and Cys C values as determined by the analytical method originally employed for validation or values obtained by an alternative analytical method to evaluate any possible effects on the performance. Cys C values determined by means of an immunoturbidimetric assay were used for calculating the GFR using equations in which this analytical method had originally been used for validation. Additionally, these same values were then used in other GFR formulas that had originally been validated using a nephelometric immunoassay for determining Cys C. The effect of using either the compatible or the possibly incompatible analytical method for determining Cys C in the calculation of GFR was assessed in comparison with the GFR measured by creatinine clearance (CrCl). Unexpectedly, using GFR equations that employed Cys C values derived from a possibly incompatible analytical method did not result in a significant difference concerning the classification of patients as having normal or reduced GFR compared to the classification obtained on the basis of CrCl. Sensitivity and specificity were adequate. 
On the other hand, formulas using Cys C values derived from a compatible analytical method partly showed insufficient performance when compared to CrCl. Although clinicians should take care to apply a GFR formula that is compatible with the locally used analytical method for determining Cys C and creatinine, other factors might be more important for the calculation of correct GFR values.
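As one concrete pediatric example of the formula-compatibility point, the bedside Schwartz creatinine equation, which is not necessarily among the formulas evaluated in this study, can be sketched as:

```python
def egfr_bedside_schwartz(height_cm, scr_mg_dl):
    """Bedside Schwartz estimate of pediatric GFR (mL/min/1.73 m^2):
    eGFR = 0.413 * height (cm) / serum creatinine (mg/dL).
    The 0.413 constant was derived against a particular class of
    creatinine assay; pairing it with Scr from an incompatible assay
    is exactly the method mismatch the study investigates."""
    return 0.413 * height_cm / scr_mg_dl

# A 120 cm child with Scr of 0.5 mg/dL:
print(round(egfr_bedside_schwartz(120.0, 0.5)))  # → 99
```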
A comparison of several techniques for imputing tree level data
David Gartner
2002-01-01
As Forest Inventory and Analysis (FIA) changes from periodic surveys to the multipanel annual survey, new analytical methods become available. The current official statistic is the moving average. One alternative is an updated moving average. Several methods of updating plot per acre volume have been discussed previously. However, these methods may not be appropriate...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakata, Hiroya, E-mail: hiroya.nakata.gt@kyocera.jp; Nishimoto, Yoshio; Fedorov, Dmitri G.
2016-07-28
The analytic second derivative of the energy is developed for the fragment molecular orbital (FMO) method combined with density-functional tight-binding (DFTB), enabling simulations of infrared and Raman spectra of large molecular systems. The accuracy of the method is established in comparison to full DFTB without fragmentation for a set of representative systems. The performance of the FMO-DFTB Hessian is discussed for molecular systems containing up to 10 041 atoms. The method is applied to the study of the binding of α-cyclodextrin to polyethylene glycol, and the calculated IR spectrum of an epoxy amine oligomer reproduces experiment reasonably well.
NASA Astrophysics Data System (ADS)
Bervillier, C.; Boisseau, B.; Giacomini, H.
2008-02-01
The relation between the Wilson-Polchinski and the Litim optimized ERGEs in the local potential approximation is studied with high accuracy using two different analytical approaches based on a field expansion: a recently proposed genuine analytical approximation scheme for two-point boundary value problems of ordinary differential equations, and a new one based on approximating the solution by generalized hypergeometric functions. A comparison with the numerical results obtained with the shooting method is made. A similar accuracy is reached in each case. Both methods appear to be more efficient than the usual field expansions frequently used in current studies of ERGEs (in particular for the Wilson-Polchinski case, in the study of which they fail).
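The shooting method used here as the numerical benchmark can be sketched on a toy two-point boundary value problem (u'' = -u with u(0) = 0, u(pi/2) = 1, whose exact solution is sin x). This illustrates the technique only, not the ERGE equations themselves:

```python
from math import pi

def deriv(u, v):
    """Right-hand side of u'' = -u written as the first-order system (u', v')."""
    return v, -u

def shoot(v0, n=200):
    """RK4-integrate from x = 0 with u(0) = 0, u'(0) = v0; return u(pi/2)."""
    h = (pi / 2) / n
    u, v = 0.0, v0
    for _ in range(n):
        k1u, k1v = deriv(u, v)
        k2u, k2v = deriv(u + h / 2 * k1u, v + h / 2 * k1v)
        k3u, k3v = deriv(u + h / 2 * k2u, v + h / 2 * k2v)
        k4u, k4v = deriv(u + h * k3u, v + h * k3v)
        u += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return u

def shooting_bvp(target=1.0, lo=0.0, hi=2.0):
    """Bisect on the unknown initial slope until u(pi/2) hits the target
    (valid here because u(pi/2) grows monotonically with the slope)."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if shoot(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Exact solution is sin(x), so the recovered initial slope u'(0) should be 1:
print(round(shooting_bvp(), 6))  # → 1.0
```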
Methods for analysis of cracks in three-dimensional solids
NASA Technical Reports Server (NTRS)
Raju, I. S.; Newman, J. C., Jr.
1984-01-01
Analytical and numerical methods evaluating the stress-intensity factors for three-dimensional cracks in solids are presented, with reference to fatigue failure in aerospace structures. The exact solutions for embedded elliptical and circular cracks in infinite solids, and the approximate methods, including the finite-element, the boundary-integral equation, the line-spring models, and the mixed methods are discussed. Among the mixed methods, the superposition of analytical and finite element methods, the stress-difference, the discretization-error, the alternating, and the finite element-alternating methods are reviewed. Comparison of the stress-intensity factor solutions for some three-dimensional crack configurations showed good agreement. Thus, the choice of a particular method in evaluating the stress-intensity factor is limited only to the availability of resources and computer programs.
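For the embedded circular (penny-shaped) crack mentioned above, the exact mode-I stress-intensity factor under remote tension has the closed form K_I = 2*sigma*sqrt(a/pi). A minimal sketch with hypothetical numbers:

```python
from math import pi, sqrt

def sif_penny_crack(sigma, a):
    """Exact mode-I stress-intensity factor for an embedded circular
    (penny-shaped) crack of radius a in an infinite solid under remote
    tension sigma: K_I = 2 * sigma * sqrt(a / pi)."""
    return 2.0 * sigma * sqrt(a / pi)

# 100 MPa remote tension, 2 mm crack radius; K_I in MPa*sqrt(m):
print(round(sif_penny_crack(100.0, 0.002), 2))  # → 5.05
```

Closed-form cases like this are what the approximate methods (finite-element, boundary-integral, line-spring, and alternating methods) are benchmarked against.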
The transfer of analytical procedures.
Ermer, J; Limberger, M; Lis, K; Wätzig, H
2013-11-01
Analytical method transfers are certainly among the most discussed topics in the GMP regulated sector. However, they are surprisingly little regulated in detail. General information is provided by USP, WHO, and ISPE in particular. Most recently, the EU emphasized the importance of analytical transfer by including it in their draft of the revised GMP Guideline. In this article, an overview and comparison of these guidelines is provided. The key to success for method transfers is excellent communication between the sending and receiving unit. In order to facilitate this communication, procedures, flow charts and checklists for responsibilities, success factors, transfer categories, the transfer plan and report, strategies in case of failed transfers, and tables with acceptance limits are provided here, together with a comprehensive glossary. Potential pitfalls are described such that they can be avoided. In order to assure an efficient and sustainable transfer of analytical procedures, a practically relevant and scientifically sound evaluation with corresponding acceptance criteria is crucial. Various strategies and statistical tools such as significance tests, absolute acceptance criteria, and equivalence tests are thoroughly described and compared in detail, giving examples. Significance tests should be avoided. The success criterion is not statistical significance, but rather analytical relevance. Depending on a risk assessment of the analytical procedure in question, statistical equivalence tests are recommended, because they include both a practically relevant acceptance limit and a direct control of the statistical risks. However, for lower-risk procedures, a simple comparison of the transfer performance parameters to absolute limits is also regarded as sufficient. Copyright © 2013 Elsevier B.V. All rights reserved.
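The recommended equivalence-test logic can be sketched as follows: the 90% confidence interval of the receiving-minus-sending differences must lie entirely within the acceptance limits (the TOST construction). The data, acceptance limit, and hardcoded t value below are hypothetical:

```python
from statistics import mean, stdev
from math import sqrt

def equivalence_test(differences, acceptance_limit, t_crit):
    """Equivalence-style transfer evaluation: the 90% confidence interval
    of the mean receiving-vs-sending difference must fall entirely within
    +/- acceptance_limit. t_crit is the two-sided 90% Student's t value
    for len(differences) - 1 degrees of freedom."""
    m = mean(differences)
    half_width = t_crit * stdev(differences) / sqrt(len(differences))
    return (m - half_width > -acceptance_limit) and \
           (m + half_width < acceptance_limit)

# Six transfer batches, differences in % assay; limit +/-2%; t(5, 90%) = 2.015:
print(equivalence_test([0.4, -0.2, 0.6, 0.1, -0.3, 0.5], 2.0, 2.015))  # → True
```

Unlike a significance test, this construction rewards good precision: tighter data shrink the interval and make equivalence easier, not harder, to demonstrate.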
Computation of viscous blast wave flowfields
NASA Technical Reports Server (NTRS)
Atwood, Christopher A.
1991-01-01
A method to determine unsteady solutions of the Navier-Stokes equations was developed and applied. The structured finite-volume, approximately factored implicit scheme uses Newton subiterations to obtain the spatially and temporally second-order accurate time history of the interaction of blast waves with stationary targets. The inviscid flux is evaluated using MacCormack's modified Steger-Warming flux or Roe flux-difference splittings with total variation diminishing limiters, while the viscous flux is computed using central differences. The use of implicit boundary conditions in conjunction with a method that telescopes in time and space permitted solutions to this strongly unsteady class of problems. Comparisons of numerical, analytical, and experimental results were made in two and three dimensions. These comparisons revealed accurate wave-speed resolution with nonoscillatory discontinuity capturing. The purpose of this effort was to address the three-dimensional, viscous blast-wave problem. Test cases were undertaken to reveal these methods' weaknesses in three regimes: (1) viscous-dominated flow; (2) complex unsteady flow; and (3) three-dimensional flow. Comparisons of these computations to analytic and experimental results provided initial validation of the resultant code. Additional details on the numerical method and on the validation can be found in the appendix. Presently, the code is capable of single-zone computations with selection of any permutation of solid-wall or flow-through boundaries.
Ferrell, Jack R.; Olarte, Mariefel V.; Christensen, Earl D.; ...
2016-07-05
Here, we discuss the standardization of analytical techniques for pyrolysis bio-oils, including the current status of methods and our opinions on future directions. First, the history of past standardization efforts is summarized, and both successful and unsuccessful validations of analytical techniques are highlighted. The majority of analytical standardization studies to date have tested only physical characterization techniques. In this paper, we present results from an international round robin on the validation of chemical characterization techniques for bio-oils. Techniques tested included acid number, carbonyl titrations using two different methods (one at room temperature and one at 80 °C), 31P NMR for determination of hydroxyl groups, and a quantitative gas chromatography–mass spectrometry (GC-MS) method. Both the carbonyl titration and acid number methods yielded acceptable inter-laboratory variabilities. 31P NMR produced acceptable results for aliphatic and phenolic hydroxyl groups, but not for carboxylic hydroxyl groups. As shown in previous round robins, GC-MS results were more variable. Reliable chemical characterization of bio-oils will enable upgrading research and allow for detailed comparisons of bio-oils produced at different facilities. Reliable analytics are also needed to enable an emerging bioenergy industry, as processing facilities often have different analytical needs and capabilities than research facilities. We feel that correlations in reliable characterizations of bio-oils will help strike a balance between research and industry, and will ultimately help to determine metrics for bio-oil quality. Lastly, the standardization of additional analytical methods is needed, particularly for upgraded bio-oils.
A screening tool for delineating subregions of steady recharge within groundwater models
Dickinson, Jesse; Ferré, T.P.A.; Bakker, Mark; Crompton, Becky
2014-01-01
We have developed a screening method for simplifying groundwater models by delineating areas within the domain that can be represented using steady-state groundwater recharge. The screening method is based on an analytical solution for the damping of sinusoidal infiltration variations in homogeneous soils in the vadose zone. The damping depth is defined as the depth at which the flux variation damps to 5% of the variation at the land surface. Groundwater recharge may be considered steady where the damping depth is above the depth of the water table. The analytical solution approximates the vadose zone diffusivity as constant, and we evaluated when this approximation is reasonable. We evaluated the analytical solution through comparison of the damping depth computed by the analytic solution with the damping depth simulated by a numerical model that allows variable diffusivity. This comparison showed that the screening method conservatively identifies areas of steady recharge and is more accurate when water content and diffusivity are nearly constant. Nomograms of the damping factor (the ratio of the flux amplitude at any depth to the amplitude at the land surface) and the damping depth were constructed for clay and sand for periodic variations between 1 and 365 d and flux means and amplitudes from nearly 0 to 1 × 10⁻³ m d⁻¹. We applied the screening tool to Central Valley, California, to identify areas of steady recharge. A MATLAB script was developed to compute the damping factor for any soil and any sinusoidal flux variation.
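The screening logic can be sketched under the constant-diffusivity linearization, in which a sinusoidal flux of angular frequency omega damps with depth roughly as exp(-z/d) with decay length d = sqrt(2D/omega). This is a generic sketch of that linearized behavior, not the paper's solution or MATLAB script, and the diffusivity value is hypothetical:

```python
import math

def damping_factor(z, D, period_days):
    """Damping factor DF(z) = exp(-z / d) for a sinusoidal flux of the
    given period under a constant-diffusivity linearization, where
    d = sqrt(2*D/omega) is the decay length (units: m, m^2/d, d)."""
    omega = 2.0 * math.pi / period_days
    d = math.sqrt(2.0 * D / omega)
    return math.exp(-z / d)

def damping_depth(D, period_days, threshold=0.05):
    """Depth at which the flux variation damps to `threshold` (5% in the
    paper's definition) of its land-surface amplitude."""
    omega = 2.0 * math.pi / period_days
    d = math.sqrt(2.0 * D / omega)
    return -d * math.log(threshold)

# hypothetical effective diffusivity of 1e-2 m^2/d and an annual cycle;
# recharge below zd could be treated as steady if the water table is deeper
zd = damping_depth(D=1e-2, period_days=365.0)
```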
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, J.; Miki, K.; Uzawa, K.
2006-11-30
During the past years, the understanding of multi-scale interaction problems has increased significantly. However, at present there exists a variety of different analytical models for investigating multi-scale interactions, and hardly any specific comparisons have been performed among these models. In this work, two different models for the generation of zonal flows from ion-temperature-gradient (ITG) background turbulence are discussed and compared. The methods used are the coherent mode coupling model and the wave kinetic equation (WKE) model. It is shown that the two models give qualitatively the same results, even though the assumption on the spectral difference is used in the WKE approach.
NASA Astrophysics Data System (ADS)
Sajjadi, Mohammadreza; Pishkenari, Hossein Nejat; Vossoughi, Gholamreza
2018-06-01
Trolling mode atomic force microscopy (TR-AFM) has resolved many imaging problems by a considerable reduction of the liquid-resonator interaction forces in liquid environments. The present study develops a nonlinear model of the meniscus force exerted to the nanoneedle of TR-AFM and presents an analytical solution to the distributed-parameter model of TR-AFM resonator utilizing multiple time scales (MTS) method. Based on the developed analytical solution, the frequency-response curves of the resonator operation in air and liquid (for different penetration length of the nanoneedle) are obtained. The closed-form analytical solution and the frequency-response curves are validated by the comparison with both the finite element solution of the main partial differential equations and the experimental observations. The effect of excitation angle of the resonator on horizontal oscillation of the probe tip and the effect of different parameters on the frequency-response of the system are investigated.
Quantification of HCV RNA in Liver Tissue by bDNA Assay.
Dailey, P J; Collins, M L; Urdea, M S; Wilber, J C
1999-01-01
With this statement, Sherlock and Dooley have described two of the three major challenges involved in quantitatively measuring any analyte in tissue samples: the distribution of the analyte in the tissue, and the standard of reference, or denominator, with which to make comparisons between tissue samples. The third challenge for quantitative measurement of an analyte in tissue is to ensure reproducible and quantitative recovery of the analyte on extraction from tissue samples. This chapter describes a method that can be used to measure HCV RNA quantitatively in liver biopsy and tissue samples using the bDNA assay. All three of these challenges (distribution, denominator, and recovery) apply to the measurement of HCV RNA in liver biopsies.
Development of the Basis for an Analytical Protocol for Feeds and Products of Bio-oil Hydrotreatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oasmaa, Anja; Kuoppala, Eeva; Elliott, Douglas C.
2012-04-02
Methods for easily following the main changes in the composition, stability, and acidity of bio-oil during hydrotreatment are presented, and the correlation to more conventional methods is provided. The upgrading requirements differ depending on the final use, which also creates challenges for the analytical protocol. Polar pyrolysis liquids and their products can be divided by solvent fractionation into five main groups, whose changes are easy to follow. This method has been used successfully for over ten years to compare fast pyrolysis bio-oil quality and to track changes during handling and storage, and it provides the basis of the analytical protocol presented in this paper. The method has most recently been used also for characterisation of bio-oil hydrotreatment products. Discussion on the use of gas chromatographic and spectroscopic methods is provided. In addition, fuel oil analyses suitable for fast pyrolysis bio-oils and hydrotreatment products are discussed.
McLain, B.J.
1993-01-01
Graphite furnace atomic absorption spectrophotometry is a sensitive, precise, and accurate method for the determination of chromium in natural water samples. The detection limit for this analytical method is 0.4 microg/L, with a working linear limit of 25.0 microg/L. The precision at the detection limit ranges from 20 to 57 percent relative standard deviation (RSD), improving to 4.6 percent RSD for concentrations greater than 3 microg/L. The accuracy of this method was determined for a variety of reference standards that were representative of the analytical range. The results were within the established standard deviations. Samples were spiked with known concentrations of chromium, with recoveries ranging from 84 to 122 percent. In addition, a comparison of data between graphite furnace atomic absorption spectrophotometry and direct-current plasma atomic emission spectrometry resulted in suitable agreement between the two methods, with an average deviation of +/- 2.0 microg/L throughout the analytical range.
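The precision figures quoted above are percent relative standard deviations; a minimal sketch of that computation, using hypothetical replicate values rather than the study's data:

```python
import statistics

def percent_rsd(values):
    """Relative standard deviation: sample SD / mean, in percent."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# hypothetical replicate chromium results (micrograms per liter)
near_limit = [0.4, 0.5, 0.3, 0.6, 0.4]   # near the 0.4 microg/L detection limit
higher = [5.1, 5.0, 5.2, 4.9, 5.0]       # well above 3 microg/L

rsd_low = percent_rsd(near_limit)    # large RSD expected near the limit
rsd_high = percent_rsd(higher)       # much smaller RSD at higher levels
```

The same absolute scatter yields a far larger relative spread near the detection limit, which is why the quoted RSD improves at higher concentrations.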
Three Interaction Patterns on Asynchronous Online Discussion Behaviours: A Methodological Comparison
ERIC Educational Resources Information Center
Jo, I.; Park, Y.; Lee, H.
2017-01-01
An asynchronous online discussion (AOD) is one format of instructional methods that facilitate student-centered learning. In the wealth of AOD research, this study evaluated how students' behavior on AOD influences their academic outcomes. This case study compared the differential analytic methods including web log mining, social network analysis…
Established soil sampling methods for asbestos are inadequate to support risk assessment and risk-based decision making at Superfund sites due to difficulties in detecting asbestos at low concentrations and difficulty in extrapolating soil concentrations to air concentrations. En...
NASA Technical Reports Server (NTRS)
Gallardo, V. C.; Storace, A. S.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.
1981-01-01
The component element method was used to develop a transient dynamic analysis computer program which is essentially based on modal synthesis combined with a central, finite difference, numerical integration scheme. The methodology leads to a modular or building-block technique that is amenable to computer programming. To verify the analytical method, the turbine engine transient response analysis (TETRA) program was applied to two blade-out test vehicles that had been previously instrumented and tested. Comparison of the time-dependent test data with those predicted by TETRA led to recommendations for refinement or extension of the analytical method to improve its accuracy and overcome its shortcomings. The development of the working equations, their discretization, the numerical solution scheme, the modular concept of engine modelling, the program's logical structure, and some illustrative results are discussed. The blade-loss test vehicles (rig and full engine), the type of measured data, and the engine structural model are described.
Yan, Ying; Han, Bingqing; Zeng, Jie; Zhou, Weiyan; Zhang, Tianjiao; Zhang, Jiangtao; Chen, Wenxiang; Zhang, Chuanbao
2017-08-28
Potassium is an important serum ion that is frequently assayed in clinical laboratories. Quality assurance requires reference methods; thus, the establishment of a candidate reference method for serum potassium measurements is important. An inductively coupled plasma mass spectrometry (ICP-MS) method was developed. Serum samples were gravimetrically spiked with an aluminum internal standard, digested with 69% ultrapure nitric acid, and diluted to the required concentration. The 39K/27Al ratios were measured by ICP-MS in hydrogen mode. The method was calibrated using 5% nitric acid matrix calibrators, and the calibration function was established using the bracketing method. The correlation coefficients between the measured 39K/27Al ratios and the analyte concentration ratios were >0.9999. The coefficients of variation were 0.40%, 0.68%, and 0.22% for the three serum samples, and the analytical recovery was 99.8%. The accuracy of the measurement was also verified by measuring the certified reference materials SRM909b and SRM956b. Comparison with the routine ion-selective electrode method and international inter-laboratory comparisons gave satisfactory results. The new ICP-MS method is specific, precise, simple, and low-cost, and it may be used as a candidate reference method for standardizing serum potassium measurements.
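The bracketing calibration mentioned above can be sketched as linear interpolation between two calibrators whose measured ratios bracket the sample's ratio. The concentrations and intensity ratios below are hypothetical, not values from the study:

```python
def bracketing_concentration(r_sample, cal_lo, cal_hi):
    """Linear interpolation between two bracketing calibrators.
    Each calibrator is a (known_concentration, measured_ratio) pair;
    the sample's measured ratio must lie between the two ratios."""
    (c1, r1), (c2, r2) = cal_lo, cal_hi
    if not (min(r1, r2) <= r_sample <= max(r1, r2)):
        raise ValueError("sample ratio is not bracketed by the calibrators")
    return c1 + (c2 - c1) * (r_sample - r1) / (r2 - r1)

# hypothetical 39K/27Al intensity ratios for two calibrators and a serum digest
low_cal = (3.5, 0.842)    # (mmol/L potassium, measured ratio)
high_cal = (5.0, 1.198)
conc = bracketing_concentration(1.020, low_cal, high_cal)
```

Bracketing keeps each unknown inside the calibrated interval, so only local linearity of the response is assumed.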
Fukushima, Romualdo S; Hatfield, Ronald D
2004-06-16
Present analytical methods to quantify lignin in herbaceous plants are not totally satisfactory. A spectrophotometric method, acetyl bromide soluble lignin (ABSL), has been employed to determine lignin concentration in a range of plant materials. In this work, lignin extracted with acidic dioxane was used to develop standard curves and to calculate the derived linear regression equation (slope equals absorptivity value or extinction coefficient) for determining the lignin concentration of respective cell wall samples. This procedure yielded lignin values that were different from those obtained with Klason lignin, acid detergent acid insoluble lignin, or permanganate lignin procedures. Correlations with in vitro dry matter or cell wall digestibility of samples were highest with data from the spectrophotometric technique. The ABSL method employing as standard lignin extracted with acidic dioxane has the potential to be employed as an analytical method to determine lignin concentration in a range of forage materials. It may be useful in developing a quick and easy method to predict in vitro digestibility on the basis of the total lignin content of a sample.
Determining absolute protein numbers by quantitative fluorescence microscopy.
Verdaasdonk, Jolien Suzanne; Lawrimore, Josh; Bloom, Kerry
2014-01-01
Biological questions are increasingly being addressed using a wide range of quantitative analytical tools to examine protein complex composition. Knowledge of the absolute number of proteins present provides insights into organization, function, and maintenance and is used in mathematical modeling of complex cellular dynamics. In this chapter, we outline and describe three microscopy-based methods for determining absolute protein numbers: fluorescence correlation spectroscopy, stepwise photobleaching, and ratiometric comparison of fluorescence intensity to known standards. In addition, we discuss the various fluorescently labeled proteins that have been used as standards for both stepwise photobleaching and ratiometric comparison analysis. A detailed procedure for determining absolute protein number by ratiometric comparison is outlined in the second half of this chapter. Counting proteins by quantitative microscopy is a relatively simple yet very powerful analytical tool that will increase our understanding of protein complex composition. © 2014 Elsevier Inc. All rights reserved.
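The ratiometric comparison method described here reduces to scaling a standard's known copy number by an intensity ratio; a minimal sketch with hypothetical intensities and standard size:

```python
def ratiometric_count(intensity_unknown, intensity_standard, copies_standard):
    """Estimate an absolute protein number by ratiometric comparison:
    scale a known standard's copy number by the ratio of background-
    corrected fluorescence intensities measured under identical
    imaging conditions."""
    return copies_standard * intensity_unknown / intensity_standard

# hypothetical background-corrected intensities (arbitrary units); the
# standard structure is assumed to carry 1000 fluorophores
copies = ratiometric_count(intensity_unknown=2400.0,
                           intensity_standard=800.0,
                           copies_standard=1000)
```

The estimate is only as good as the standard: both structures must be imaged and background-corrected identically for the intensity ratio to equal the copy-number ratio.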
Rabani, Eran; Reichman, David R.; Krilov, Goran; Berne, Bruce J.
2002-01-01
We present a method that combines an exact relation between a frequency-dependent diffusion constant and the imaginary-time velocity autocorrelation function with the maximum entropy numerical analytic continuation approach to study transport properties in quantum liquids. The method is applied to liquid para-hydrogen at two thermodynamic state points: a liquid near the triple point and a high-temperature liquid. Good agreement for the self-diffusion constant and for the real-time velocity autocorrelation function is obtained in comparison to experimental measurements and other theoretical predictions. Improvements of the methodology and future applications are discussed. PMID:11830656
Barricklow, Jason; Ryder, Tim F; Furlong, Michael T
2009-08-01
During LC-MS/MS quantification of a small molecule in human urine samples from a clinical study, an unexpected peak was observed to nearly co-elute with the analyte of interest in many study samples. Improved chromatographic resolution revealed the presence of at least 3 non-analyte peaks, which were identified as cysteine metabolites and N-acetyl (mercapturic acid) derivatives thereof. These metabolites produced artifact responses in the parent compound MRM channel due to decomposition in the ionization source of the mass spectrometer. Quantitative comparison of the analyte concentrations in study samples using the original chromatographic method and the improved chromatographic separation method demonstrated that the original method substantially over-estimated the analyte concentration in many cases. The substitution of electrospray ionization (ESI) for atmospheric pressure chemical ionization (APCI) nearly eliminated the source instability of these metabolites, which would have mitigated their interference in the quantification of the analyte, even without chromatographic separation. These results 1) demonstrate the potential for thiol metabolite interferences during the quantification of small molecules in pharmacokinetic samples, and 2) underscore the need to carefully evaluate LC-MS/MS methods for molecules that can undergo metabolism to thiol adducts to ensure that they are not susceptible to such interferences during quantification.
A Comparison of Some Difference Schemes for a Parabolic Problem of Zero-Coupon Bond Pricing
NASA Astrophysics Data System (ADS)
Chernogorova, Tatiana; Vulkov, Lubin
2009-11-01
This paper describes a comparison of some numerical methods for solving a convection-diffusion equation subject to dynamical boundary conditions, which arises in zero-coupon bond pricing. The one-dimensional convection-diffusion equation is solved by using difference schemes with weights, including standard schemes such as Samarskii's monotone scheme, FTCS, and Crank-Nicolson. The schemes are free of spurious oscillations and satisfy the positivity and maximum principles demanded of the financial and diffusive solution. Numerical results are compared with analytical solutions.
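The weighted ("theta") schemes compared in the paper can be illustrated on a pure-diffusion model problem with a known analytical solution: theta = 0 gives the explicit FTCS scheme and theta = 1/2 gives Crank-Nicolson. This is a generic sketch, not the paper's bond-pricing equation or its dynamical boundary conditions:

```python
import numpy as np

def theta_scheme(theta, N=40, dt=1e-4, steps=200):
    """Weighted (theta) scheme for u_t = u_xx on [0, 1] with u = 0 at the
    boundaries: theta = 0 is explicit FTCS, theta = 1/2 is Crank-Nicolson.
    The model problem u(x, 0) = sin(pi x) has the exact solution
    u(x, t) = exp(-pi^2 t) * sin(pi x); returns the max-norm error."""
    dx = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    u = np.sin(np.pi * x)
    r = dt / dx**2                      # r <= 1/2 keeps FTCS stable
    # second-difference matrix acting on the interior nodes
    A = (np.diag(-2.0 * np.ones(N - 1))
         + np.diag(np.ones(N - 2), 1)
         + np.diag(np.ones(N - 2), -1))
    I = np.eye(N - 1)
    lhs = I - theta * r * A             # implicit part
    rhs = I + (1.0 - theta) * r * A     # explicit part
    for _ in range(steps):
        u[1:-1] = np.linalg.solve(lhs, rhs @ u[1:-1])
    exact = np.exp(-np.pi**2 * dt * steps) * np.sin(np.pi * x)
    return np.max(np.abs(u - exact))

err_ftcs = theta_scheme(0.0)   # explicit, conditionally stable
err_cn = theta_scheme(0.5)     # Crank-Nicolson, unconditionally stable
```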
NASA Technical Reports Server (NTRS)
Boyd, D. E.; Rao, C. K. P.
1973-01-01
The derivation and application of a Rayleigh-Ritz modal vibration analysis are presented for ring and/or stringer stiffened noncircular cylindrical shells with arbitrary end conditions. Comparisons with previous results from experimental and analytical studies showed this method of analysis to be accurate for a variety of end conditions. Results indicate a greater effect of rings on natural frequencies than of stringers.
Comparison of bipolar vs. tripolar concentric ring electrode Laplacian estimates.
Besio, W; Aakula, R; Dai, W
2004-01-01
Potentials on the body surface generated by the heart are functions of both space and time. The 12-lead electrocardiogram (ECG) provides useful global temporal assessment, but it yields limited spatial information due to the smoothing effect caused by the volume conductor. The smoothing complicates identification of multiple simultaneous bioelectrical events. In an attempt to circumvent the smoothing problem, some researchers used a five-point method (FPM) to numerically estimate the analytical solution of the Laplacian with an array of monopolar electrodes; the FPM was then generalized to develop a bipolar concentric ring electrode system. We have developed a new Laplacian ECG sensor, a tri-electrode sensor, based on a nine-point method (NPM) numerical approximation of the analytical Laplacian. For comparison, the NPM, FPM, and compact NPM were calculated over a 400 x 400 mesh with 1/400 spacing. Tri- and bi-electrode sensors were also simulated, and their Laplacian estimates were compared against the analytical Laplacian. We found that tri-electrode sensors have much-improved accuracy, with significantly smaller relative and maximum errors in estimating the Laplacian operator. Apart from the higher accuracy, our new electrode configuration will allow better localization of the electrical activity of the heart than bi-electrode configurations.
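The FPM and NPM estimates are standard finite-difference Laplacian stencils. The sketch below compares both against the exact Laplacian of a harmonic test function (which is zero), loosely mirroring the paper's mesh comparison; the test function, point, and spacing are arbitrary choices, not the paper's setup:

```python
import math

def fpm_laplacian(u, x, y, h):
    """Five-point finite-difference estimate of the Laplacian, O(h^2)."""
    return (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h)
            - 4.0 * u(x, y)) / h**2

def npm_laplacian(u, x, y, h):
    """Compact nine-point estimate: adding the diagonal neighbors cancels
    more of the truncation error (O(h^4) for smooth harmonic fields)."""
    cross = u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h)
    diag = (u(x + h, y + h) + u(x - h, y + h)
            + u(x + h, y - h) + u(x - h, y - h))
    return (4.0 * cross + diag - 20.0 * u(x, y)) / (6.0 * h**2)

# harmonic test potential: its analytical Laplacian is exactly zero, a
# hypothetical stand-in for a body-surface potential distribution
u = lambda x, y: math.exp(x) * math.sin(y)
h = 0.05
err_fpm = abs(fpm_laplacian(u, 0.3, 0.4, h))
err_npm = abs(npm_laplacian(u, 0.3, 0.4, h))
```

The nine-point stencil's smaller truncation error is the numerical counterpart of the tri-electrode sensor's improved accuracy reported above.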
Investigation of Acoustical Shielding by a Wedge-Shaped Airframe
NASA Technical Reports Server (NTRS)
Gerhold, Carl H.; Clark, Lorenzo R.; Dunn, Mark H.; Tweed, John
2006-01-01
Experiments on a scale model of an advanced unconventional subsonic transport concept, the Blended Wing Body (BWB), have demonstrated significant shielding of inlet-radiated noise. A computational model of the shielding mechanism has been developed using a combination of boundary integral equation method (BIEM) and equivalent source method (ESM). The computation models the incident sound from a point source in a nacelle and determines the scattered sound field. In this way the sound fields with and without the airfoil can be estimated for comparison to experiment. An experimental test bed using a simplified wedge-shape airfoil and a broadband point noise source in a simulated nacelle has been developed for the purposes of verifying the analytical model and also to study the effect of engine nacelle placement on shielding. The experimental study is conducted in the Anechoic Noise Research Facility at NASA Langley Research Center. The analytic and experimental results are compared at 6300 and 8000 Hz. These frequencies correspond to approximately 150 Hz on the full scale aircraft. Comparison between the experimental and analytic results is quite good, not only for the noise scattering by the airframe, but also for the total sound pressure in the far field. Many of the details of the sound field that the analytic model predicts are seen or indicated in the experiment, within the spatial resolution limitations of the experiment. Changing nacelle location produces comparable changes in noise shielding contours evaluated analytically and experimentally. Future work in the project will be enhancement of the analytic model to extend the analysis to higher frequencies corresponding to the blade passage frequency of the high bypass ratio ducted fan engines that are expected to power the BWB.
Evaluation of selected methods for determining streamflow during periods of ice effect
Melcher, Norwood B.; Walker, J.F.
1992-01-01
Seventeen methods for estimating ice-affected streamflow are evaluated for potential use with the U.S. Geological Survey streamflow-gaging station network. The methods evaluated were identified by written responses from U.S. Geological Survey field offices and by a comprehensive literature search. The methods selected and techniques used for applying the methods are described in this report. The methods are evaluated by comparing estimated results with data collected at three streamflow-gaging stations in Iowa during the winter of 1987-88. Discharge measurements were obtained at 1- to 5-day intervals during the ice-affected periods at the three stations to define an accurate baseline record. Discharge records were compiled for each method based on data available, assuming a 6-week field schedule. The methods are classified into two general categories, subjective and analytical, depending on whether individual judgment is necessary for method application. On the basis of results of the evaluation for the three Iowa stations, two of the subjective methods (discharge ratio and hydrographic-and-climatic comparison) were more accurate than the other subjective methods and approximately as accurate as the best analytical method. Three of the analytical methods (index velocity, adjusted rating curve, and uniform flow) could potentially be used at streamflow-gaging stations, where the need for accurate ice-affected discharge estimates justifies the expense of collecting additional field data. One analytical method (ice-adjustment factor) may be appropriate for use at stations with extremely stable stage-discharge ratings and measuring sections. Further research is needed to refine the analytical methods. The discharge-ratio and multiple-regression methods produce estimates of streamflow for varying ice conditions using information obtained from the existing U.S. Geological Survey streamflow-gaging network.
Serve, Anja; Pieler, Michael Martin; Benndorf, Dirk; Rapp, Erdmann; Wolff, Michael Werner; Reichl, Udo
2015-11-03
A method for the purification of influenza virus particles using novel magnetic sulfated cellulose particles is presented and compared to an established centrifugation method used for analytics. For this comparison, purified influenza A virus particles from adherent and suspension MDCK host cell lines were characterized at the protein level by mass spectrometry to compare the viral and residual host cell proteins. Both methods allowed the identification of all 10 influenza A virus proteins, including low-abundance proteins such as matrix protein 2 and nonstructural protein 1, with similar impurity levels of host cell proteins. Compared to the centrifugation method, use of the novel magnetic sulfated cellulose particles reduced the influenza A virus particle purification time from 3.5 h to 30 min before mass spectrometry analysis.
An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1992-01-01
Research conducted during the period from July 1991 through December 1992 is covered. A method based upon the quasi-analytical approach was developed for computing the aerodynamic sensitivity coefficients of three-dimensional wings in transonic and subsonic flow. In addition, for comparison purposes, the method computes the aerodynamic sensitivity coefficients using the finite-difference approach. The accuracy and validity of the methods are currently under investigation.
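The finite-difference route to a sensitivity coefficient, computed in this work for comparison against the quasi-analytical values, can be sketched on a toy model. The linear lift model below is purely illustrative (a stand-in for an expensive flow solution), not the transonic code:

```python
def lift_coefficient(alpha, a0=0.11, cl0=0.2):
    """Toy linear lift model CL = cl0 + a0 * alpha (alpha in degrees);
    a hypothetical stand-in for an expensive flow solution."""
    return cl0 + a0 * alpha

def fd_sensitivity(f, x, h=1e-3):
    """Central-difference approximation of df/dx, the finite-difference
    route to an aerodynamic sensitivity coefficient: one extra flow
    solution on each side of the design point."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

analytic = 0.11                        # exact dCL/dalpha of the toy model
fd = fd_sensitivity(lift_coefficient, 2.0)
```

The quasi-analytical approach differentiates the discretized flow equations instead, avoiding the step-size tuning and repeated solutions that the finite-difference route requires.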
Steuer, Andrea E; Forss, Anna-Maria; Dally, Annika M; Kraemer, Thomas
2014-11-01
In the context of driving under the influence of drugs (DUID), not only common drugs of abuse may have an influence, but also medications with similar mechanisms of action. Simultaneous quantification of a variety of drugs and medications relevant in this context allows faster and more effective analyses. Therefore, multi-analyte approaches have gained more and more popularity in recent years. Usually, calibration curves for such procedures contain a mixture of all analytes, which might lead to mutual interferences. In this study we investigated whether the use of such mixtures leads to reliable results for authentic samples containing only one or two analytes. Five hundred microliters of whole blood were extracted by routine solid-phase extraction (SPE, HCX). Analysis was performed on an ABSciex 3200 QTrap instrument with ESI+ in scheduled MRM mode. The method was fully validated according to international guidelines including selectivity, recovery, matrix effects, accuracy and precision, stabilities, and limit of quantification. The selected SPE provided recoveries >60% for all analytes except 6-monoacetylmorphine (MAM) with coefficients of variation (CV) below 15% or 20% for quality controls (QC) LOW and HIGH, respectively. Ion suppression >30% was found for benzoylecgonine, hydrocodone, hydromorphone, MDA, oxycodone, and oxymorphone at QC LOW, however CVs were always below 10% (n=6 different whole blood samples). Accuracy and precision criteria were fulfilled for all analytes except for MAM. Systematic investigation of accuracy determined for QC MED in a multi-analyte mixture compared to samples containing only single analytes revealed no relevant differences for any analyte, indicating that a multi-analyte calibration is suitable for the presented method. Comparison of approximately 60 samples to a former GC-MS method showed good correlation. The newly validated method was successfully applied to more than 1600 routine samples and 3 proficiency tests. 
Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Long, H. Keith; Daddow, Richard L.; Farrar, Jerry W.
1998-01-01
Since 1962, the U.S. Geological Survey (USGS) has operated the Standard Reference Sample Project to evaluate the performance of USGS, cooperator, and contractor analytical laboratories that analyze chemical constituents of environmental samples. The laboratories are evaluated by using performance evaluation samples, called Standard Reference Samples (SRSs). SRSs are submitted to laboratories semi-annually for round-robin laboratory performance comparison purposes. Currently, approximately 100 laboratories are evaluated for their analytical performance on six SRSs for inorganic and nutrient constituents. As part of the SRS Project, a surplus of homogeneous, stable SRSs is maintained for purchase by USGS offices and participating laboratories for use in continuing quality-assurance and quality-control activities. Statistical evaluation of the laboratories' results provides information to compare the analytical performance of the laboratories and to determine possible analytical deficiencies and problems. SRS results also provide information on the bias and variability of different analytical methods used in the SRS analyses.
Kling, Maximilian; Seyring, Nicole; Tzanova, Polia
2016-09-01
Economic instruments provide significant potential for countries with low municipal waste management performance in decreasing landfill rates and increasing recycling rates for municipal waste. In this research, strengths and weaknesses of landfill tax, pay-as-you-throw charging systems, deposit-refund systems and extended producer responsibility schemes are compared, focusing on conditions in countries with low waste management performance. In order to prioritise instruments for implementation in these countries, the analytic hierarchy process is applied using results of a literature review as input for the comparison. The assessment reveals that pay-as-you-throw is the most preferable instrument when utility-related criteria are regarded (wb = 0.35; analytic hierarchy process distributive mode; absolute comparison), mainly owing to its waste prevention effect, closely followed by landfill tax (wb = 0.32). Deposit-refund systems (wb = 0.17) and extended producer responsibility (wb = 0.16) rank third and fourth, with marginal differences owing to their similar nature. When cost-related criteria are additionally included in the comparison, landfill tax seems to provide the highest utility-cost ratio. Data from the literature concerning costs (in contrast to utility-related criteria) are currently not sufficiently available for a robust ranking according to the utility-cost ratio. In general, the analytic hierarchy process is seen as a suitable method for assessing economic instruments in waste management. Independent of the chosen analytic hierarchy process mode, the results provide valuable indications for policy-makers on the application of economic instruments, as well as on their specific strengths and weaknesses. Nevertheless, the instruments need to be put in the country-specific context along with the results of this analytic hierarchy process application before practical decisions are made. © The Author(s) 2016.
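The priority weights (wb) above come from the analytic hierarchy process. As a rough illustration of how such weights are derived from a pairwise comparison matrix, here is a minimal sketch using the geometric-mean approximation of the principal eigenvector; the matrix entries are hypothetical, not the judgements used in the study.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights via the geometric-mean (row) method."""
    n = len(pairwise)
    # Geometric mean of each row of the pairwise comparison matrix
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    # Normalize so the weights sum to 1
    return [g / total for g in gm]

# Hypothetical pairwise judgements for four instruments:
# PAYT, landfill tax, deposit-refund, EPR (not the study's actual matrix)
m = [
    [1,   1,   2,   2],
    [1,   1,   2,   2],
    [1/2, 1/2, 1,   1],
    [1/2, 1/2, 1,   1],
]
w = ahp_weights(m)
print([round(x, 2) for x in w])  # → [0.33, 0.33, 0.17, 0.17]
```

The geometric-mean method is a common closed-form stand-in for the eigenvector calculation; for consistent matrices the two coincide.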
NASA Astrophysics Data System (ADS)
Song, Yang; Liu, Zhigang; Wang, Hongrui; Lu, Xiaobing; Zhang, Jing
2015-10-01
Due to the intrinsic nonlinear characteristics and complex structure of the high-speed catenary system, a modelling method is proposed based on the analytical expressions of nonlinear cable and truss elements. The calculation procedure for solving the initial equilibrium state is based on the Newton-Raphson iteration method. The deformed configuration of the catenary system as well as the initial length of each wire can be calculated. The accuracy and validity of computing the initial equilibrium state are verified by comparison with the separate model method, the absolute nodal coordinate formulation and other methods in the previous literature. Then, the proposed model is combined with a lumped pantograph model and a dynamic simulation procedure is proposed. The accuracy is guaranteed by the multiple iterative calculations in each time step. The dynamic performance of the proposed model is validated by comparison with EN 50318, the results of finite element method software and a SIEMENS simulation report, respectively. Finally, the influence of the catenary design parameters (such as the reserved sag and pre-tension) on the dynamic performance is preliminarily analysed using the proposed model.
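The initial equilibrium state above is solved by Newton-Raphson iteration. As a hedged illustration of the iteration itself (not the paper's full nonlinear cable/truss model), this sketch solves a single scalar equilibrium equation: finding the catenary parameter a for an assumed span and mid-span sag.

```python
import math

def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Basic Newton-Raphson iteration: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# Illustrative numbers (not from the paper): find the catenary parameter a
# for a cable spanning L = 50 m with mid-span sag s = 1.0 m, from
# s = a*(cosh(L/(2a)) - 1).
L, s = 50.0, 1.0
f  = lambda a: a * (math.cosh(L / (2 * a)) - 1) - s
df = lambda a: (math.cosh(L / (2 * a)) - 1) - (L / (2 * a)) * math.sinh(L / (2 * a))
a = newton(f, df, x0=300.0)
print(round(a, 2))
```

The parabolic approximation a ≈ L²/(8s) = 312.5 m gives a good starting guess; the iteration refines it to machine precision in a few steps.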
Comparison of methods for estimating density of forest songbirds from point counts
Jennifer L. Reidy; Frank R. Thompson; J. Wesley. Bailey
2011-01-01
New analytical methods have been promoted for estimating the probability of detection and density of birds from count data but few studies have compared these methods using real data. We compared estimates of detection probability and density from distance and time-removal models and survey protocols based on 5- or 10-min counts and outer radii of 50 or 100 m. We...
Roy, Anuradha; Fuller, Clifton D; Rosenthal, David I; Thomas, Charles R
2015-08-28
Comparison of imaging measurement devices in the absence of a gold-standard comparator remains a vexing problem, especially in scenarios where multiple, non-paired, replicated measurements occur, as in image-guided radiotherapy (IGRT). As the growing number of commercially available IGRT systems makes it challenging to determine whether different IGRT methods may be used interchangeably, there is an unmet need for a conceptually parsimonious and statistically robust method to evaluate the agreement between two methods with replicated observations. Consequently, we sought to determine, using a previously reported head and neck positional verification dataset, the feasibility and utility of a Comparison of Measurement Methods with the Mixed Effects Procedure Accounting for Replicated Evaluations (COM3PARE), a unified conceptual schema and analytic algorithm based upon Roy's linear mixed effects (LME) model with Kronecker product covariance structure in a doubly multivariate set-up, for IGRT method comparison. An anonymized dataset consisting of 100 paired coordinate (X/Y/Z) measurements from a sequential series of head and neck cancer patients imaged near-simultaneously with cone beam CT (CBCT) and kilovoltage X-ray (KVX) imaging was used for model implementation. Software-suggested CBCT and KVX shifts for the lateral (X), vertical (Y) and longitudinal (Z) dimensions were evaluated for bias, inter-method (between-subject variation), intra-method (within-subject variation), and overall agreement using a script implementing COM3PARE with the MIXED procedure of the statistical software package SAS (SAS Institute, Cary, NC, USA). COM3PARE revealed statistically significant differences in bias and inter-method agreement between CBCT and KVX in the Z-axis (both p<0.01). Intra-method and overall agreement differences were statistically significant for both the X- and Z-axes (all p<0.01).
Using pre-specified criteria, based on intra-method agreement, CBCT was deemed preferable for X-axis positional verification, with KVX preferred for superoinferior alignment. The COM3PARE methodology was validated as feasible and useful in this pilot head and neck cancer positional verification dataset. COM3PARE represents a flexible and robust standardized analytic methodology for IGRT comparison. The implemented SAS script is included to encourage other groups to implement COM3PARE in other anatomic sites or IGRT platforms.
ERIC Educational Resources Information Center
Garvey, Sarah L.; Shahmohammadi, Golbon; McLain, Derek R.; Dietz, Mark L.
2015-01-01
A laboratory experiment is described in which students compare two methods for the determination of the calcium content of commercial dietary supplement tablets. In a two-week sequence, the sample tablets are first analyzed via complexometric titration with ethylenediaminetetraacetic acid and then, following ion exchange of the calcium ion present…
Comparing Methods for Assessing Forest Soil Net Nitrogen Mineralization and Net Nitrification
S. S. Jefts; I. J. Fernandez; L.E. Rustad; D. B. Dail
2004-01-01
A variety of analytical techniques are used to evaluate rates of nitrogen (N) mineralization and nitrification in soils. The diversity of methods takes on added significance in forest ecosystem research where high soil heterogeneity and multiple soil horizons can make comparisons over time and space even more complex than in agricultural Ap horizons. This study...
Quantifying construction and demolition waste: an analytical review.
Wu, Zezhou; Yu, Ann T W; Shen, Liyin; Liu, Guiwen
2014-09-01
Quantifying construction and demolition (C&D) waste generation is regarded as a prerequisite for the implementation of successful waste management. In literature, various methods have been employed to quantify the C&D waste generation at both regional and project levels. However, an integrated review that systemically describes and analyses all the existing methods has yet to be conducted. To bridge this research gap, an analytical review is conducted. Fifty-seven papers are retrieved based on a set of rigorous procedures. The characteristics of the selected papers are classified according to the following criteria - waste generation activity, estimation level and quantification methodology. Six categories of existing C&D waste quantification methodologies are identified, including site visit method, waste generation rate method, lifetime analysis method, classification system accumulation method, variables modelling method and other particular methods. A critical comparison of the identified methods is given according to their characteristics and implementation constraints. Moreover, a decision tree is proposed for aiding the selection of the most appropriate quantification method in different scenarios. Based on the analytical review, limitations of previous studies and recommendations of potential future research directions are further suggested. Copyright © 2014 Elsevier Ltd. All rights reserved.
Lísa, Miroslav; Cífková, Eva; Khalikova, Maria; Ovčačíková, Magdaléna; Holčapek, Michal
2017-11-24
Lipidomic analysis of biological samples in clinical research represents a challenging task for analytical methods, given the large number of samples and their extreme complexity. In this work, we compare direct infusion (DI) and chromatography-mass spectrometry (MS) lipidomic approaches represented by three analytical methods in terms of comprehensiveness, sample throughput, and validation results for the lipidomic analysis of biological samples represented by tumor tissue, surrounding normal tissue, plasma, and erythrocytes of kidney cancer patients. The methods are compared in one laboratory using an identical analytical protocol to ensure comparable conditions. An ultrahigh-performance liquid chromatography/MS (UHPLC/MS) method in hydrophilic interaction liquid chromatography mode and a DI-MS method are used for this comparison as the most widely used methods for lipidomic analysis, together with an ultrahigh-performance supercritical fluid chromatography/MS (UHPSFC/MS) method showing promising results in metabolomics analyses. The nontargeted analysis of pooled samples is performed using all tested methods, and 610 lipid species within 23 lipid classes are identified. The DI method provides the most comprehensive results owing to the identification of some polar lipid classes that are not identified by the UHPLC and UHPSFC methods. On the other hand, the UHPSFC method provides an excellent sensitivity for less polar lipid classes and the highest sample throughput within a 10 min method time. The sample consumption of the DI method is 125 times higher than for the other methods, while only 40 μL of organic solvent is used for one sample analysis compared to 3.5 mL and 4.9 mL in the case of the UHPLC and UHPSFC methods, respectively. The methods are validated for the quantitative lipidomic analysis of plasma samples with one internal standard for each lipid class. The results show the applicability of all tested methods for the lipidomic analysis of biological samples, depending on the analysis requirements.
Copyright © 2017 Elsevier B.V. All rights reserved.
Kim, Sang-Bog; Roche, Jennifer
2013-08-01
Organically bound tritium (OBT) is an important tritium species that can be measured in most environmental samples, but has only recently been recognized as a species of tritium in these samples. Currently, OBT is not routinely measured by environmental monitoring laboratories around the world. There are no certified reference materials (CRMs) for environmental samples. Thus, quality assurance (QA), or verification of the accuracy of the OBT measurement, is not possible. Alternatively, quality control (QC), or verification of the precision of the OBT measurement, can be achieved. In the past, there have been differences in OBT analysis results between environmental laboratories. A possible reason for the discrepancies may be differences in analytical methods. Therefore, inter-laboratory OBT comparisons among the environmental laboratories are important and would provide a good opportunity for adopting a reference OBT analytical procedure. Due to the analytical issues, only limited information is available on OBT measurement. Previously conducted OBT inter-laboratory practices are reviewed and the findings are described. Based on our experiences, a few considerations were suggested for the international OBT inter-laboratory comparison exercise to be completed in the near future. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Mccain, W. E.
1984-01-01
Predictions from an unsteady aerodynamic lifting-surface theory, the Doublet Lattice method, were compared with experimental steady and unsteady pressure measurements of a high-aspect-ratio supercritical wing model at a Mach number of 0.78. The steady pressure data comparisons were made for incremental changes in angle of attack and control surface deflection. The unsteady pressure data comparisons were made at set angle-of-attack positions with oscillating control surface deflections. Significant viscous and transonic effects are shown in the experimental aerodynamics which cannot be predicted by the Doublet Lattice method. This study should assist development of empirical correction methods that may be applied to improve Doublet Lattice calculations of lifting surface aerodynamics.
Results of the first provisional technical secretariat interlaboratory comparison test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stuff, J.R.; Hoffland, L.
1995-06-01
The principal task of this laboratory in the first Provisional Technical Secretariat (PTS) Interlaboratory Comparison Test was to verify and test the extraction and preparation procedures outlined in the Recommended Operating Procedures for Sampling and Analysis in the Verification of Chemical Disarmament in addition to our laboratory extraction methods and our laboratory analysis methods. Sample preparation began on 16 May 1994 and analysis was completed on 12 June 1994. The analytical methods used included NMR ({sup 1}H and {sup 31}P) GC/AED, GC/MS (EI and methane CI), GC/IRD, HPLC/IC, HPLC/TSP/MS, MS/MS(Electrospray), and CZE.
From Ambiguities to Insights: Query-based Comparisons of High-Dimensional Data
NASA Astrophysics Data System (ADS)
Kowalski, Jeanne; Talbot, Conover; Tsai, Hua L.; Prasad, Nijaguna; Umbricht, Christopher; Zeiger, Martha A.
2007-11-01
Genomic technologies will revolutionize drug discovery and development; that much is universally agreed upon. The high dimension of data from such technologies has challenged available data analytic methods; that much is apparent. To date, large-scale data repositories have not been utilized in ways that permit their wealth of information to be efficiently processed for knowledge, presumably due in large part to inadequate analytical tools to address numerous comparisons of high-dimensional data. In candidate gene discovery, expression comparisons are often made between two features (e.g., cancerous versus normal), such that the enumeration of outcomes is manageable. With multiple features, the setting becomes more complex, in terms of comparing expression levels of tens of thousands of transcripts across hundreds of features. In this case, the number of outcomes, while enumerable, becomes rapidly large and unmanageable, and scientific inquiries become more abstract, such as "which one of these (compounds, stimuli, etc.) is not like the others?" We develop analytical tools that promote more extensive, efficient, and rigorous utilization of the public data resources generated by the massive support of genomic studies. Our work innovates by enabling access to such metadata with logically formulated scientific inquiries that define, compare and integrate query-comparison pair relations for analysis. We demonstrate our computational tool's potential to address an outstanding biomedical informatics issue of identifying reliable molecular markers in thyroid cancer. Our proposed query-based comparison (QBC) facilitates access to and efficient utilization of metadata through logically formed inquiries expressed as query-based comparisons by organizing and comparing results from biotechnologies to address applications in biomedicine.
NASA Technical Reports Server (NTRS)
Pines, S.
1981-01-01
The methods used to compute the mass, structural stiffness, and aerodynamic forces in the form of influence coefficient matrices, as applied to a flutter analysis of the Drones for Aerodynamic and Structural Testing (DAST) Aeroelastic Research Wing, are described. The DAST wing was chosen because wind tunnel flutter test data and zero-speed vibration data of the modes and frequencies exist and are available for comparison. A derivation of the equations of motion that can be used to apply the modal method for flutter suppression is included. A comparison of the open-loop flutter predictions with both wind tunnel data and other analytical methods is presented.
Steroid hormones in environmental matrices: extraction method comparison.
Andaluri, Gangadhar; Suri, Rominder P S; Graham, Kendon
2017-11-09
The U.S. Environmental Protection Agency (EPA) has developed methods for the analysis of steroid hormones in water, soil, sediment, and municipal biosolids by HRGC/HRMS (EPA Method 1698). Following the guidelines provided in US-EPA Method 1698, the extraction methods were validated with reagent water and applied to municipal wastewater, surface water, and municipal biosolids using GC/MS/MS for the analysis of the nine most commonly detected steroid hormones. This is the first reported comparison of the separatory funnel extraction (SFE), continuous liquid-liquid extraction (CLLE), and Soxhlet extraction methods developed by the U.S. EPA. Furthermore, a solid phase extraction (SPE) method was also developed in-house for the extraction of steroid hormones from aquatic environmental samples. This study provides valuable information regarding the robustness of the different extraction methods. Statistical analysis of the data showed that SPE-based methods provided better recovery efficiencies and lower variability of the steroid hormones, followed by SFE. The analytical methods developed in-house for extraction of biosolids showed a wide recovery range; however, the variability was low (≤ 7% RSD). Soxhlet extraction and CLLE are lengthy procedures and have been shown to provide highly variable recovery efficiencies. The results of this study provide guidance for better sample preparation strategies in analytical methods for steroid hormone analysis, and SPE adds to the choice of methods for environmental sample analysis.
Thermal conductivity of Rene 41 honeycomb panels
NASA Astrophysics Data System (ADS)
Deriugin, V.
1980-12-01
Effective thermal conductivities of Rene 41 panels suitable for advanced space transportation vehicle structures were determined analytically and experimentally for temperature ranges between 20.4 K (-423 F) and 1186 K (1675 F). The cryogenic data were obtained using a cryostat, whereas the high temperature data were measured using a heat flow meter and a comparative thermal conductivity instrument, respectively. Comparisons were made between analysis and experimental data. Analytical methods appear to provide a reasonable definition of the honeycomb panel effective thermal conductivities.
Thermal conductivity of Rene 41 honeycomb panels. [space transportation vehicles
NASA Technical Reports Server (NTRS)
Deriugin, V.
1980-01-01
Effective thermal conductivities of Rene 41 panels suitable for advanced space transportation vehicle structures were determined analytically and experimentally for temperature ranges between 20.4 K (-423 F) and 1186 K (1675 F). The cryogenic data were obtained using a cryostat, whereas the high temperature data were measured using a heat flow meter and a comparative thermal conductivity instrument, respectively. Comparisons were made between analysis and experimental data. Analytical methods appear to provide a reasonable definition of the honeycomb panel effective thermal conductivities.
2014-01-01
In current practice, to determine the safety factor of a slope with a two-dimensional circular potential failure surface, one of the searching methods for the critical slip surface is the Genetic Algorithm (GA), while the method to calculate the slope safety factor is Fellenius' slices method. However, GA needs to be validated with more numeric tests, and Fellenius' slices method is just an approximate method, like the finite element method. This paper proposes a new method to determine the minimum slope safety factor: the safety factor is determined with an analytical solution, and the critical slip surface is searched with a Genetic-Traversal Random Method. The analytical solution is more accurate than Fellenius' slices method. The Genetic-Traversal Random Method uses random picks to utilize mutation. A computer automatic search program is developed for the Genetic-Traversal Random Method. After comparison with other methods such as the slope/w software, results indicate that the Genetic-Traversal Random Search Method can give a very low safety factor, about half that of the other methods. However, the obtained minimum safety factor with the Genetic-Traversal Random Search Method is very close to the lower-bound solutions of the slope safety factor given by the Ansys software. PMID:24782679
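The Genetic-Traversal Random Method is not specified in detail here; as a loose sketch of the random-search idea it builds on, the following samples candidate slip-circle parameters at random and keeps the minimum safety factor. The safety-factor function is a stand-in with a known minimum, not the paper's analytical solution.

```python
import random

def min_safety_factor(fs, bounds, n_samples=20000, seed=1):
    """Random search over candidate slip-circle parameters (xc, yc, r):
    evaluate the safety factor fs for each candidate and keep the minimum,
    in the spirit of a genetic/traversal random search."""
    random.seed(seed)
    best = None
    for _ in range(n_samples):
        c = tuple(random.uniform(lo, hi) for lo, hi in bounds)
        v = fs(*c)
        if best is None or v < best[0]:
            best = (v, c)
    return best

# Stand-in safety-factor surface with a known minimum of 1.2 at (10, 25, 15);
# a real implementation would evaluate the analytical slope solution here.
fs = lambda xc, yc, r: 1.2 + 0.01 * ((xc - 10)**2 + (yc - 25)**2 + (r - 15)**2)
value, params = min_safety_factor(fs, bounds=[(0, 20), (15, 35), (5, 25)])
print(round(value, 3))
```

A genetic algorithm would add selection and crossover on top of this random sampling; the minimum-keeping loop above is the common core.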
Development and analytical performance evaluation of FREND-SAA and FREND-Hp
NASA Astrophysics Data System (ADS)
Choi, Eunha; Seong, Jihyun; Lee, Seiyoung; Han, Sunmi
2017-07-01
The FREND System is a portable cartridge reader, quantifying analytes by measuring laser-induced fluorescence in a single-use reagent cartridge. The objective of this study was to evaluate the FREND-SAA and FREND-Hp assays. The FREND-SAA and Hp assays were standardized to the WHO and IFCC reference materials. Analytical performance studies of precision, linearity, limits of detection, interferences, and method comparisons for both assays were performed according to the CLSI guidelines. Both assays demonstrated acceptable imprecision (%CV) at three different levels of samples. The linearity of the assays was found to be acceptable (SAA 5-150 mg/L, Hp 30-400 mg/dL). The detection limits were 3.8 mg/L (SAA) and 10.2 mg/dL (Hp). No significant interference and no significant deviation from linearity were found in the comparison studies. In conclusion, NanoEnTek's FREND-SAA and Hp assays represent rapid, accurate and convenient means to quantify SAA and Hp in human serum on the FREND system.
NASA Technical Reports Server (NTRS)
Naumann, E. C.; Catherines, D. S.; Walton, W. C., Jr.
1971-01-01
Experimental and analytical investigations of the vibratory behavior of ring-stiffened truncated-cone shells are described. Vibration tests were conducted on 60 deg conical shells having up to four ring stiffeners and for free-free and clamped-free edge constraints and 9 deg conical shells, for two thicknesses, each with two angle rings and for free-free, free-clamped, and clamped-clamped edge constraints. The analytical method is based on linear thin shell theory, employing the Rayleigh-Ritz method. Discrete rings are represented as composed of one or more segments, each of which is a short truncated-cone shell of uniform thickness. Equations of constraint are used to join a ring and shell along a circumferential line connection. Excellent agreement was obtained for comparisons of experimental and calculated frequencies.
Airside HVAC BESTEST: HVAC Air-Distribution System Model Test Cases for ASHRAE Standard 140
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judkoff, Ronald; Neymark, Joel; Kennedy, Mike D.
This paper summarizes recent work to develop new airside HVAC equipment model analytical verification test cases for ANSI/ASHRAE Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs. The analytical verification test method allows comparison of simulation results from a wide variety of building energy simulation programs with quasi-analytical solutions, further described below. Standard 140 is widely cited for evaluating software for use with performance-path energy efficiency analysis, in conjunction with well-known energy-efficiency standards including ASHRAE Standard 90.1, the International Energy Conservation Code, and other international standards. Airside HVAC equipment is a common area of modelling not previously explicitly tested by Standard 140. Integration of the completed test suite into Standard 140 is in progress.
Andersen, Wendy C; Casey, Christine R; Schneider, Marilyn J; Turnipseed, Sherri B
2015-01-01
Prior to conducting a collaborative study of AOAC First Action 2012.25 LC-MS/MS analytical method for the determination of residues of three triphenylmethane dyes (malachite green, crystal violet, and brilliant green) and their metabolites (leucomalachite green and leucocrystal violet) in seafood, a single-laboratory validation of method 2012.25 was performed to expand the scope of the method to other seafood matrixes including salmon, catfish, tilapia, and shrimp. The validation included the analysis of fortified and incurred residues over multiple weeks to assess analyte stability in matrix at -80°C, a comparison of calibration methods over the range 0.25 to 4 μg/kg, study of matrix effects for analyte quantification, and qualitative identification of targeted analytes. Method accuracy ranged from 88 to 112% with 13% RSD or less for samples fortified at 0.5, 1.0, and 2.0 μg/kg. Analyte identification and determination limits were determined by procedures recommended both by the U. S. Food and Drug Administration and the European Commission. Method detection limits and decision limits ranged from 0.05 to 0.24 μg/kg and 0.08 to 0.54 μg/kg, respectively. AOAC First Action Method 2012.25 with an extracted matrix calibration curve and internal standard correction is suitable for the determination of triphenylmethane dyes and leuco metabolites in salmon, catfish, tilapia, and shrimp by LC-MS/MS at a residue determination level of 0.5 μg/kg or below.
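The accuracy (88 to 112%) and precision (13% RSD or less) figures above follow directly from replicate recoveries of fortified samples. A minimal sketch of that calculation, with hypothetical replicate values rather than the study's data:

```python
import statistics as st

def recovery_stats(measured, fortified_level):
    """Mean recovery (%) and relative standard deviation (%RSD)
    for replicate analyses of a sample fortified at a known level."""
    recoveries = [100.0 * m / fortified_level for m in measured]
    mean_rec = st.mean(recoveries)
    rsd = 100.0 * st.stdev(recoveries) / mean_rec
    return mean_rec, rsd

# Hypothetical replicate results (μg/kg) for a 1.0 μg/kg fortification
measured = [0.95, 1.02, 0.98, 1.05, 0.91, 1.00]
rec, rsd = recovery_stats(measured, 1.0)
print(round(rec, 1), round(rsd, 1))  # → 98.5 5.1
```

These hypothetical replicates would pass the method's stated acceptance window (88-112% recovery, ≤13% RSD).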
Razban, Behrooz; Nelson, Kristina Y; McMartin, Dena W; Cullimore, D Roy; Wall, Michelle; Wang, Dunling
2012-01-01
An analytical method to produce profiles of bacterial biomass fatty acid methyl esters (FAME) was developed employing rapid agitation followed by static incubation (RASI) using selective media of wastewater microbial communities. The results were compiled to produce a unique library for comparison and performance analysis at a wastewater treatment plant (WWTP). A total of 146 samples from the aerated WWTP, comprising 73 samples each of secondary and tertiary effluent, were analyzed. For comparison purposes, all samples were evaluated via a similarity index (SI), with secondary effluents producing an SI of 0.88 with 2.7% variation and tertiary samples producing an SI of 0.86 with 5.0% variation. The results also highlighted significant differences between the fatty acid profiles of the tertiary and secondary effluents, indicating considerable shifts in the bacterial community profile between these treatment phases. The WWTP performance results using this method were highly replicable and reproducible, indicating that the protocol has potential as a performance-monitoring tool for aerated WWTPs. The results quickly and accurately reflect shifts in dominant bacterial communities that result when process operations and performance change.
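The similarity index used for the FAME profiles above is a library-comparison metric whose exact form is not given here; as an assumption-laden stand-in, a cosine similarity between two relative-abundance profiles captures the same idea of profile agreement:

```python
import math

def similarity_index(p, q):
    """Cosine similarity between two fatty-acid profiles (relative
    abundances); a stand-in for the commercial SI metric, not its
    actual definition."""
    dot = sum(a * b for a, b in zip(p, q))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return dot / norm

# Hypothetical FAME relative abundances for two effluent samples
secondary = [0.30, 0.25, 0.20, 0.15, 0.10]
tertiary  = [0.20, 0.30, 0.15, 0.20, 0.15]
print(round(similarity_index(secondary, tertiary), 2))  # → 0.95
```

Identical profiles score 1.0, and values near the study's 0.86-0.88 range would indicate broadly similar but measurably shifted communities.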
Oyaert, Matthijs; Van Maerken, Tom; Bridts, Silke; Van Loon, Silvi; Laverge, Heleen; Stove, Veronique
2018-03-01
Point-of-care blood gas test results may benefit therapeutic decision making by their immediate impact on patient care. We evaluated the (pre-)analytical performance of a novel cartridge-type blood gas analyzer, the GEM Premier 5000 (Werfen), for the determination of pH, partial carbon dioxide pressure (pCO2), partial oxygen pressure (pO2), sodium (Na+), potassium (K+), chloride (Cl-), ionized calcium (iCa2+), glucose, lactate, and total hemoglobin (tHb). Total imprecision was estimated according to the CLSI EP5-A2 protocol. The estimated total error was calculated based on the mean of the range claimed by the manufacturer. Based on the CLSI EP9-A2 evaluation protocol, a method comparison with the Siemens RapidPoint 500 and Abbott i-STAT CG8+ was performed. The obtained data were compared against preset quality specifications. Interference of potential pre-analytical confounders on co-oximetry and electrolyte concentrations was studied. The analytical performance was acceptable for all parameters tested. Method comparison demonstrated good agreement to the RapidPoint 500 and i-STAT CG8+, except for some parameters (RapidPoint 500: pCO2, K+, lactate and tHb; i-STAT CG8+: pO2, Na+, iCa2+ and tHb) for which significant differences between analyzers were recorded. No interference of lipemia or methylene blue on co-oximetry results was found. On the contrary, significant interference of benzalkonium and hemolysis on electrolyte measurements was found, for which the user is notified by an interferent-specific flag. Identification of sample errors from pre-analytical sources, such as interferences, and automatic corrective actions, along with the analytical performance, ease of use and low maintenance time of the instrument, make the evaluated instrument a suitable blood gas analyzer for both POCT and laboratory use. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Sawyer, W. C.; Allen, J. M.; Hernandez, G.; Dillenius, M. F. E.; Hemsch, M. J.
1982-01-01
This paper presents a survey of engineering computational methods and experimental programs used for estimating the aerodynamic characteristics of missile configurations. Emphasis is placed on those methods which are suitable for preliminary design of conventional and advanced concepts. An analysis of the technical approaches of the various methods is made in order to assess their suitability to estimate longitudinal and/or lateral-directional characteristics for different classes of missile configurations. Some comparisons between the predicted characteristics and experimental data are presented. These comparisons are made for a large variation in flow conditions and model attitude parameters. The paper also presents known experimental research programs developed for the specific purpose of validating analytical methods and extending the capability of data-base programs.
Evaluation of analytical performance of a new high-sensitivity immunoassay for cardiac troponin I.
Masotti, Silvia; Prontera, Concetta; Musetti, Veronica; Storti, Simona; Ndreu, Rudina; Zucchelli, Gian Carlo; Passino, Claudio; Clerico, Aldo
2018-02-23
The study aim was to evaluate and compare the analytical performance of the new chemiluminescent immunoassay for cardiac troponin I (cTnI), called Access hs-TnI, using the DxI platform with those of the Access AccuTnI+3 method and the high-sensitivity (hs) cTnI method for the ARCHITECT platform. The limits of blank (LoB), detection (LoD), and quantitation (LoQ) at 10% and 20% CV were evaluated according to international standardized protocols. For the evaluation of analytical performance and comparison of cTnI results, both heparinized plasma samples, collected from healthy subjects and patients with cardiac diseases, and quality control samples distributed in external quality assessment programs were used. The LoB, LoD, and LoQ at 20% and 10% CV for the Access hs-cTnI method were 0.6, 1.3, 2.1, and 5.3 ng/L, respectively. The Access hs-cTnI method showed analytical performance significantly better than that of the Access AccuTnI+3 method and similar to that of the hs ARCHITECT cTnI method. Moreover, the cTnI concentrations measured with the Access hs-cTnI method showed close linear regressions with both the Access AccuTnI+3 and ARCHITECT hs-cTnI methods, although there were systematic differences between these methods. There was no difference between cTnI values measured by Access hs-cTnI in heparinized plasma and serum samples, whereas there was a significant difference between cTnI values measured in EDTA and heparin plasma samples. Access hs-cTnI has analytical sensitivity significantly improved over the Access AccuTnI+3 method and similar to that of the high-sensitivity method using the ARCHITECT platform.
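For readers unfamiliar with these figures of merit, the parametric LoB/LoD estimators from CLSI EP17-style protocols can be sketched as follows. This is a generic illustration with hypothetical readings, not the exact protocol variant or data used in the study.

```python
import statistics

def limit_of_blank(blank_readings):
    # Parametric LoB: mean of blanks + 1.645 * SD of blanks
    # (95th percentile under a normal assumption)
    return statistics.mean(blank_readings) + 1.645 * statistics.stdev(blank_readings)

def limit_of_detection(lob, low_sample_readings):
    # Parametric LoD: LoB + 1.645 * SD of a low-concentration sample
    return lob + 1.645 * statistics.stdev(low_sample_readings)

blanks = [0.2, 0.4, 0.3, 0.5, 0.3, 0.4]  # hypothetical blank readings, ng/L
lows = [1.1, 1.4, 1.2, 1.5, 1.3, 1.4]    # hypothetical low-level readings, ng/L
lob = limit_of_blank(blanks)
lod = limit_of_detection(lob, lows)
```

The LoQ is then determined empirically as the lowest concentration at which the assay's CV stays within the chosen goal (20% or 10% in this study).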
Triangular dislocation: an analytical, artefact-free solution
NASA Astrophysics Data System (ADS)
Nikkhoo, Mehdi; Walter, Thomas R.
2015-05-01
Displacements and stress-field changes associated with earthquakes, volcanoes, landslides and human activity are often simulated using numerical models in an attempt to understand the underlying processes and their governing physics. The application of elastic dislocation theory to these problems, however, may be biased because of numerical instabilities in the calculations. Here, we present a new method that is free of artefact singularities and numerical instabilities in analytical solutions for triangular dislocations (TDs) in both full-space and half-space. We apply the method to both the displacement and the stress fields. The entire 3-D Euclidean space R^3 is divided into two complementary subspaces, in the sense that in each one, a particular analytical formulation fulfils the requirements for the ideal, artefact-free solution for a TD. The primary advantage of the presented method is that the development of our solutions involves neither numerical approximations nor series expansion methods. As a result, the final outputs are independent of the scale of the input parameters, including the size and position of the dislocation as well as its corresponding slip vector components. Our solutions are therefore well suited for application at various scales in geoscience, physics and engineering. We validate the solutions through comparison to other well-known analytical methods and provide the MATLAB codes.
Comparison of critical methods developed for fatty acid analysis: A review.
Wu, Zhuona; Zhang, Qi; Li, Ning; Pu, Yiqiong; Wang, Bing; Zhang, Tong
2017-01-01
Fatty acids are important nutritional substances and metabolites in living organisms. These acids are abundant in Chinese herbs, such as Brucea javanica, Notopterygium forbesii, Isatis tinctoria, Astragalus membranaceus, and Aconitum szechenyianum. This review illustrates the types of fatty acids and their significant roles in the human body. Many analytical methods are used for the qualitative and quantitative evaluation of fatty acids. Some of the methods used to analyze fatty acids in more than 30 kinds of plants, drugs, and other samples are presented in this paper. These analytical methods include gas chromatography, liquid chromatography, near-infrared spectroscopy, and NMR spectroscopy. The advantages and disadvantages of these techniques are described and compared. This review provides a valuable reference for establishing methods for fatty acid determination. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Polynomial modal analysis of lamellar diffraction gratings in conical mounting.
Randriamihaja, Manjakavola Honore; Granet, Gérard; Edee, Kofi; Raniriharinosy, Karyl
2016-09-01
An efficient numerical modal method for modeling a lamellar grating in conical mounting is presented. Within each region of the grating, the electromagnetic field is expanded onto Legendre polynomials, which allows us to enforce in an exact manner the boundary conditions that determine the eigensolutions. Our code is successfully validated by comparison with results obtained with the analytical modal method.
Taubitz, Jörg; Lüning, Ulrich; Grotemeyer, Jürgen
2004-11-07
Resonance enhanced multi-photon ionization-reflectron time of flight mass spectrometry is the analytical method of choice to observe hydrogen bonded supramolecules in the gas phase when protonation of basic centers competes with cluster formation.
Meyer, M.T.; Lee, E.A.; Ferrell, G.M.; Bumgarner, J.E.; Varns, Jerry
2007-01-01
This report describes the performance of an offline tandem solid-phase extraction (SPE) method and an online SPE method that use liquid chromatography/mass spectrometry for the analysis of 23 and 35 antibiotics, respectively, as used in several water-quality surveys conducted since 1999. In the offline tandem SPE method, normalized concentrations for the quinolone, macrolide, and sulfonamide antibiotics in spiked environmental samples averaged from 81 to 139 percent of the expected spiked concentrations. A modified standard-addition technique was developed to improve the quantitation of the tetracycline antibiotics, which had 'apparent' concentrations that ranged from 185 to 1,200 percent of their expected spiked concentrations in matrix-spiked samples. In the online SPE method, normalized concentrations for the quinolone, macrolide, sulfonamide, and tetracycline antibiotics in matrix-spiked samples averaged from 51 to 142 percent of their expected spiked concentrations, and the beta-lactam antibiotics in matrix-spiked samples averaged from 22 to 76 percent of their expected spiked concentration. Comparison of 44 samples analyzed by both the offline tandem SPE and online SPE methods showed 50 to 100 percent agreement in sample detection for overlapping analytes and 68 to 100 percent agreement in a presence-absence comparison for all analytes. The offline tandem and online SPE methods were compared to an independent method that contains two overlapping antibiotic compounds, sulfamethoxazole and trimethoprim, for 96 and 44 environmental samples, respectively. The offline tandem SPE showed 86 and 92 percent agreement in sample detection and 96 and 98 percent agreement in a presence-absence comparison for sulfamethoxazole and trimethoprim, respectively. The online SPE method showed 57 and 56 percent agreement in sample detection and 72 and 91 percent agreement in presence-absence comparison for sulfamethoxazole and trimethoprim, respectively. 
A linear regression with an R2 of 0.91 was obtained for trimethoprim concentrations, and an R2 of 0.35 was obtained for sulfamethoxazole concentrations determined from samples analyzed by the offline tandem SPE and online SPE methods. Linear regressions of trimethoprim and sulfamethoxazole concentrations determined from samples analyzed by the offline tandem SPE method and the independent M3 pharmaceutical method yielded R2 values of 0.95 and 0.87, respectively. Regression comparison of the offline tandem SPE method to the online SPE and M3 methods showed that the online SPE method gave higher concentrations for sulfamethoxazole and trimethoprim than were obtained from the offline tandem SPE or M3 methods.
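The report's "modified standard-addition technique" for the tetracyclines is not detailed here, but the classic standard-addition calculation it builds on can be sketched as follows, with hypothetical spike levels and responses.

```python
import numpy as np

# Classic standard addition: spike known amounts of analyte into aliquots of
# the sample, fit response vs. added concentration, and extrapolate to zero
# response: C_sample = intercept / slope. Values below are hypothetical.
added = np.array([0.0, 10.0, 20.0, 30.0])        # concentration added, ug/L
signal = np.array([120.0, 220.0, 320.0, 420.0])  # instrument response
slope, intercept = np.polyfit(added, signal, 1)  # degree-1 least-squares fit
c_sample = intercept / slope                     # native sample concentration
```

Because the calibration is built in the sample matrix itself, this approach compensates for the matrix-dependent signal enhancement that produced the "apparent" recoveries of 185 to 1,200 percent.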
Haller, Toomas; Leitsalu, Liis; Fischer, Krista; Nuotio, Marja-Liisa; Esko, Tõnu; Boomsma, Dorothea Irene; Kyvik, Kirsten Ohm; Spector, Tim D; Perola, Markus; Metspalu, Andres
2017-01-01
Ancestry information at the individual level can be a valuable resource for personalized medicine; for medical, demographic, and historical research; and for tracing back personal history. We report a new method for quantitatively determining personal genetic ancestry based on genome-wide data. Numerical ancestry component scores are assigned to individuals based on comparisons with reference populations. These comparisons are conducted with an existing analytical pipeline making use of genotype phasing, similarity matrix computation, and our addition: multidimensional best fitting by MixFit. The method is demonstrated by studying the Estonian and Finnish populations in geographical context. We show the main differences in the genetic composition of these otherwise close European populations and how they have influenced each other. The components of our analytical pipeline are freely available computer programs and scripts, one of which was developed in-house (available at: www.geenivaramu.ee/en/tools/mixfit).
Evaluation of Lightning Induced Effects in a Graphite Composite Fairing Structure. Parts 1 and 2
NASA Technical Reports Server (NTRS)
Trout, Dawn H.; Stanley, James E.; Wahid, Parveen F.
2011-01-01
Defining the electromagnetic environment inside a graphite composite fairing due to lightning is of interest to spacecraft developers. This paper is the first in a two-part series and studies the shielding effectiveness of a graphite composite model fairing using derived equivalent properties. A frequency domain Method of Moments (MoM) model is developed and comparisons are made with shielding test results obtained using a vehicle-like composite fairing. The comparison results show that the analytical models can adequately predict the test results. Both measured and model data indicate that graphite composite fairings provide significant attenuation to magnetic fields as frequency increases. Diffusion effects are also discussed. Part 2 examines time-domain effects through the development of loop-based induced-field testing; a Transmission-Line-Matrix (TLM) model is developed in the time domain to study how the composite fairing affects lightning-induced magnetic fields. Comparisons are made with shielding test results obtained using a vehicle-like composite fairing in the time domain. The comparison results show that the analytical models can adequately predict the test and industry results.
Wu, Xiaobin; Chao, Yan; Wan, Zemin; Wang, Yunxiu; Ma, Yan; Ke, Peifeng; Wu, Xinzhong; Xu, Jianhua; Zhuang, Junhua; Huang, Xianzhang
2016-10-15
Haemoglobin A1c (HbA1c) is widely used in the management of diabetes. Therefore, the reliability and comparability of different analytical methods for its detection have become very important. A comparative evaluation of analytical performance (precision, linearity, accuracy, method comparison, and interferences including bilirubin, triglyceride, cholesterol, labile HbA1c (LA1c), vitamin C, aspirin, fetal haemoglobin (HbF), and haemoglobin E (HbE)) was performed on the Capillarys 2 Flex Piercing (Capillarys 2FP) (Sebia, France), Tosoh HLC-723 G8 (Tosoh G8) (Tosoh, Japan), Premier Hb9210 (Trinity Biotech, Ireland), and Roche Cobas c501 (Roche c501) (Roche Diagnostics, Germany). Good precision was shown at both low and high HbA1c levels on all four systems, with all individual CVs below 2% (IFCC units) or 1.5% (NGSP units). Linearity analysis for each analyzer achieved a good correlation coefficient (R2 > 0.99) over the entire range tested. The analytical bias of the four systems against the IFCC targets was less than ±6% (NGSP units), indicating good accuracy. Method comparison showed good correlation and agreement between methods. Very high levels of triglycerides and cholesterol (≥15.28 and ≥8.72 mmol/L, respectively) led to falsely low HbA1c concentrations on the Roche c501. Elevated HbF induced false HbA1c detection on the Capillarys 2FP (>10%), Tosoh G8 (>30%), Premier Hb9210 (>15%), and Roche c501 (>5%). On the Tosoh G8, HbE induced an extra peak on the chromatogram, and significantly lower results were reported. The four HbA1c methods commonly used with commercial analyzers showed good reliability and comparability, although some interferents may falsely alter the results.
NASA Astrophysics Data System (ADS)
Kopacz, Monika; Jacob, Daniel J.; Henze, Daven K.; Heald, Colette L.; Streets, David G.; Zhang, Qiang
2009-02-01
We apply the adjoint of an atmospheric chemical transport model (GEOS-Chem CTM) to constrain Asian sources of carbon monoxide (CO) with 2° × 2.5° spatial resolution using Measurement of Pollution in the Troposphere (MOPITT) satellite observations of CO columns in February-April 2001. Results are compared to the more common analytical method for solving the same Bayesian inverse problem and applied to the same data set. The analytical method is more exact but because of computational limitations it can only constrain emissions over coarse regions. We find that the correction factors to the a priori CO emission inventory from the adjoint inversion are generally consistent with those of the analytical inversion when averaged over the large regions of the latter. The adjoint solution reveals fine-scale variability (cities, political boundaries) that the analytical inversion cannot resolve, for example, in the Indian subcontinent or between Korea and Japan, and some of that variability is of opposite sign which points to large aggregation errors in the analytical solution. Upward correction factors to Chinese emissions from the prior inventory are largest in central and eastern China, consistent with a recent bottom-up revision of that inventory, although the revised inventory also sees the need for upward corrections in southern China where the adjoint and analytical inversions call for downward correction. Correction factors for biomass burning emissions derived from the adjoint and analytical inversions are consistent with a recent bottom-up inventory on the basis of MODIS satellite fire data.
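The "analytical method" for this kind of linear-Gaussian Bayesian inverse problem is the standard closed-form maximum a posteriori solution. A toy sketch with made-up matrices (nothing like the GEOS-Chem/MOPITT dimensions or data) is:

```python
import numpy as np

# Toy linear forward model y = K x + error: 4 observations, 3 source regions.
K = np.array([[1.0, 0.2, 0.0],
              [0.3, 1.0, 0.1],
              [0.0, 0.4, 1.0],
              [0.5, 0.5, 0.5]])
x_a = np.array([1.0, 1.0, 1.0])     # a priori emission scaling factors
S_a = 0.5 * np.eye(3)               # prior error covariance
S_o = 0.1 * np.eye(4)               # observation error covariance
y = np.array([1.5, 1.2, 1.8, 1.9])  # hypothetical observed columns

# Closed-form (analytical) MAP solution:
#   x_hat = x_a + S_a K^T (K S_a K^T + S_o)^(-1) (y - K x_a)
x_hat = x_a + S_a @ K.T @ np.linalg.solve(K @ S_a @ K.T + S_o, y - K @ x_a)

# Sanity check: x_hat is where the gradient of the Bayesian cost function
# J(x) = (Kx - y)^T S_o^-1 (Kx - y) + (x - x_a)^T S_a^-1 (x - x_a) vanishes.
grad = K.T @ np.linalg.solve(S_o, K @ x_hat - y) + np.linalg.solve(S_a, x_hat - x_a)
```

The adjoint method avoids forming and inverting these dense matrices, which is what lets it resolve many more state-vector elements, at the cost of iterating toward the same cost-function minimum rather than reaching it in one step.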
NASA Technical Reports Server (NTRS)
Townsend, J. C.
1980-01-01
In order to provide experimental data for comparison with newly developed finite difference methods for computing supersonic flows over aircraft configurations, wind tunnel tests were conducted on four arrow wing models. The models were machined under numeric control to precisely duplicate analytically defined shapes. They were heavily instrumented with pressure orifices at several cross sections ahead of and in the region where there is a gap between the body and the wing trailing edge. The test Mach numbers were 2.36, 2.96, and 4.63. Tabulated pressure data for the complete test series are presented along with selected oil flow photographs. Comparisons of some preliminary numerical results at zero angle of attack show good to excellent agreement with the experimental pressure distributions.
"Dip-and-read" paper-based analytical devices using distance-based detection with color screening.
Yamada, Kentaro; Citterio, Daniel; Henry, Charles S
2018-05-15
An improved paper-based analytical device (PAD) using color screening to enhance device performance is described. Current detection methods for PADs relying on the distance-based signalling motif can be slow due to the assay time being limited by capillary flow rates that wick fluid through the detection zone. For traditional distance-based detection motifs, analysis can take up to 45 min for a channel length of 5 cm. By using a color screening method, quantification with a distance-based PAD can be achieved in minutes through a "dip-and-read" approach. A colorimetric indicator line deposited onto a paper substrate using inkjet-printing undergoes a concentration-dependent colorimetric response for a given analyte. This color intensity-based response has been converted to a distance-based signal by overlaying a color filter with a continuous color intensity gradient matching the color of the developed indicator line. As a proof-of-concept, Ni quantification in welding fume was performed as a model assay. The results of multiple independent user testing gave mean absolute percentage error and average relative standard deviations of 10.5% and 11.2% respectively, which were an improvement over analysis based on simple visual color comparison with a read guide (12.2%, 14.9%). In addition to the analytical performance comparison, an interference study and a shelf life investigation were performed to further demonstrate practical utility. The developed system demonstrates an alternative detection approach for distance-based PADs enabling fast (∼10 min), quantitative, and straightforward assays.
Analytical investigation of aerodynamic characteristics of highly swept wings with separated flow
NASA Technical Reports Server (NTRS)
Reddy, C. S.
1980-01-01
Many modern aircraft designed for supersonic speeds employ highly swept-back and low-aspect-ratio wings with sharp or thin edges. Flow separation occurs near the leading and tip edges of such wings at moderate to high angles of attack. Attempts have been made over the years to develop analytical methods for predicting the aerodynamic characteristics of such aircraft. Before any method can really be useful, it must be tested against a standard set of data to determine its capabilities and limitations. The present work undertakes such an investigation. Three methods are considered: the free-vortex-sheet method (Weber et al., 1975), the vortex-lattice method with suction analogy (Lamar and Gloss, 1975), and the quasi-vortex lattice method of Mehrotra (1977). Both flat and cambered wings of different configurations, for which experimental data are available, are studied and comparisons made.
Darwish, Hany W; Bakheit, Ahmed H; Abdelhameed, Ali S
2016-03-01
Simultaneous spectrophotometric analysis of a multi-component dosage form of olmesartan, amlodipine and hydrochlorothiazide used for the treatment of hypertension has been carried out using various chemometric methods. Multivariate calibration methods include classical least squares (CLS) executed by net analyte processing (NAP-CLS), orthogonal signal correction (OSC-CLS) and direct orthogonal signal correction (DOSC-CLS) in addition to multivariate curve resolution-alternating least squares (MCR-ALS). Results demonstrated the efficiency of the proposed methods as quantitative tools of analysis as well as their qualitative capability. The three analytes were determined precisely using the aforementioned methods in an external data set and in a dosage form after optimization of experimental conditions. Finally, the efficiency of the models was validated via comparison with the partial least squares (PLS) method in terms of accuracy and precision.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dasgupta, Aritra; Burrows, Susannah M.; Han, Kyungsik
2017-05-08
Scientists often use specific data analysis and presentation methods familiar within their domain. But does high familiarity drive better analytical judgment? This question is especially relevant when familiar methods themselves can have shortcomings: many visualizations used conventionally for scientific data analysis and presentation do not follow established best practices. This necessitates new methods that might be unfamiliar yet prove to be more effective. But there is little empirical understanding of the relationships between scientists' subjective impressions about familiar and unfamiliar visualizations and objective measures of their visual analytic judgments. To address this gap and to study these factors, we focus on visualizations used for comparison of climate model performance. We report on a comprehensive survey-based user study with 47 climate scientists and present an analysis of: i) relationships among scientists' familiarity, their perceived levels of comfort, confidence, accuracy, and objective measures of accuracy, and ii) relationships among domain experience, visualization familiarity, and post-study preference.
Cellular automatons applied to gas dynamic problems
NASA Technical Reports Server (NTRS)
Long, Lyle N.; Coopersmith, Robert M.; Mclachlan, B. G.
1987-01-01
This paper compares the results of a relatively new computational fluid dynamics method, cellular automatons, with experimental data and analytical results. This technique has been shown to qualitatively predict fluidlike behavior; however, there have been few published comparisons with experiment or other theories. Comparisons are made for a one-dimensional supersonic piston problem, Stokes' first problem, and the flow past a normal flat plate. These comparisons are used to assess the ability of the method to accurately model fluid dynamic behavior and to point out its limitations. Reasonable results were obtained for all three test cases, but the fundamental limitations of cellular automatons are numerous. It may be misleading, at this time, to say that cellular automatons are a computationally efficient technique. Other methods, based on continuum or kinetic theory, would also be very efficient if as little of the physics were included.
A Comparison of Lifting-Line and CFD Methods with Flight Test Data from a Research Puma Helicopter
NASA Technical Reports Server (NTRS)
Bousman, William G.; Young, Colin; Toulmay, Francois; Gilbert, Neil E.; Strawn, Roger C.; Miller, Judith V.; Maier, Thomas H.; Costes, Michel; Beaumier, Philippe
1996-01-01
Four lifting-line methods were compared with flight test data from a research Puma helicopter and the accuracy assessed over a wide range of flight speeds. Hybrid Computational Fluid Dynamics (CFD) methods were also examined for two high-speed conditions. A parallel analytical effort was performed with the lifting-line methods to assess the effects of modeling assumptions and this provided insight into the adequacy of these methods for load predictions.
Nagata, Takeshi; Fedorov, Dmitri G; Li, Hui; Kitaura, Kazuo
2012-05-28
A new energy expression is proposed for the fragment molecular orbital method interfaced with the polarizable continuum model (FMO/PCM). The solvation free energy is shown to be more accurate on a set of representative polypeptides with neutral and charged residues, in comparison to the original formulation at the same level of the many-body expansion of the electrostatic potential determining the apparent surface charges. The analytic first derivative of the energy with respect to nuclear coordinates is formulated at the second-order Møller-Plesset (MP2) perturbation theory level combined with PCM, for which we derived coupled perturbed Hartree-Fock equations. The accuracy of the analytic gradient is demonstrated on test calculations in comparison to numeric gradient. Geometry optimization of the small Trp-cage protein (PDB: 1L2Y) is performed with FMO/PCM/6-31(+)G(d) at the MP2 and restricted Hartree-Fock with empirical dispersion (RHF/D). The root mean square deviations between the FMO optimized and NMR experimental structure are found to be 0.414 and 0.426 Å for RHF/D and MP2, respectively. The details of the hydrogen bond network in the Trp-cage protein are revealed.
Three numerical algorithms were compared to provide a solution of a radiative transfer equation (RTE) for plane albedo (hemispherical reflectance) in a semi-infinite one-dimensional plane-parallel layer. The algorithms were based on the invariant imbedding method and two different var...
MODULAR ANALYTICS: A New Approach to Automation in the Clinical Laboratory.
Horowitz, Gary L; Zaman, Zahur; Blanckaert, Norbert J C; Chan, Daniel W; Dubois, Jeffrey A; Golaz, Olivier; Mensi, Noury; Keller, Franz; Stolz, Herbert; Klingler, Karl; Marocchi, Alessandro; Prencipe, Lorenzo; McLawhon, Ronald W; Nilsen, Olaug L; Oellerich, Michael; Luthe, Hilmar; Orsonneau, Jean-Luc; Richeux, Gérard; Recio, Fernando; Roldan, Esther; Rymo, Lars; Wicktorsson, Anne-Charlotte; Welch, Shirley L; Wieland, Heinrich; Grawitz, Andrea Busse; Mitsumaki, Hiroshi; McGovern, Margaret; Ng, Katherine; Stockmann, Wolfgang
2005-01-01
MODULAR ANALYTICS (Roche Diagnostics) (MODULAR ANALYTICS, Elecsys and Cobas Integra are trademarks of a member of the Roche Group) represents a new approach to automation for the clinical chemistry laboratory. It consists of a control unit, a core unit with a bidirectional multitrack rack transportation system, and three distinct kinds of analytical modules: an ISE module, a P800 module (44 photometric tests, throughput of up to 800 tests/h), and a D2400 module (16 photometric tests, throughput up to 2400 tests/h). MODULAR ANALYTICS allows customised configurations for various laboratory workloads. The performance and practicability of MODULAR ANALYTICS were evaluated in an international multicentre study at 16 sites. Studies included precision, accuracy, analytical range, carry-over, and workflow assessment. More than 700 000 results were obtained during the course of the study. Median between-day CVs were typically less than 3% for clinical chemistries and less than 6% for homogeneous immunoassays. Median recoveries for nearly all standardised reference materials were within 5% of assigned values. Method comparisons versus current existing routine instrumentation were clinically acceptable in all cases. During the workflow studies, the work from three to four single workstations was transferred to MODULAR ANALYTICS, which offered over 100 possible methods, with reduction in sample splitting, handling errors, and turnaround time. Typical sample processing time on MODULAR ANALYTICS was less than 30 minutes, an improvement from the current laboratory systems. By combining multiple analytic units in flexible ways, MODULAR ANALYTICS met diverse laboratory needs and offered improvement in workflow over current laboratory situations. It increased overall efficiency while maintaining (or improving) quality.
A Dynamic Calibration Method for Experimental and Analytical Hub Load Comparison
2017-03-01
minimal differences were noted. As discussed above, a "dummy" four-bladed hub was fabricated to permit application of shaker loads to the ARES testbed... experimental data used for comparison was from wind-tunnel testing of a set of Active-Twist Rotor (ATR) blades, which had undergone extensive bench... experimental measurements, one low-speed and the other high-speed. Although these blades are capable of actively twisting during flight, in both of these
Comparison between cachaça and rum using pattern recognition methods.
Cardoso, Daniel R; Andrade-Sobrinho, Luiz G; Leite-Neto, Alexandre F; Reche, Roni V; Isique, William D; Ferreira, Marcia M C; Lima-Neto, Benedito S; Franco, Douglas W
2004-06-02
The differentiation between cachaça and rum using analytical data for alcohols (methanol, propanol, isobutanol, and isopentanol), acetaldehyde, ethyl acetate, organic acids (octanoic acid, decanoic acid, and dodecanoic acid), metals (Al, Ca, Co, Cu, Cr, Fe, Mg, Mn, Ni, Na, and Zn), and polyphenols (protocatechuic acid, sinapaldehyde, syringaldehyde, ellagic acid, syringic acid, gallic acid, (-)-epicatechin, vanillic acid, vanillin, p-coumaric acid, coniferaldehyde, coniferyl alcohol, kaempferol, and quercetin) is described. The organic and metal analyte contents were determined in 18 cachaça and 21 rum samples using chromatographic methods (GC-MS, GC-FID, and HPLC-UV-vis) and inductively coupled plasma atomic emission spectrometry, respectively. The analytical data for the above compounds, when treated by principal component analysis, hierarchical cluster analysis, discriminant analysis, and K-nearest neighbor analysis, provide very good discrimination between the two classes of beverages.
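As an illustration of how principal component analysis can separate two beverage classes from multi-analyte profiles, here is a self-contained sketch on synthetic data; the analyte means and spreads are invented, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "cachaça" (18 samples) and "rum" (21 samples) profiles over
# 5 hypothetical analyte concentrations; class means are deliberately distinct.
cachaca = rng.normal([5.0, 1.0, 3.0, 2.0, 4.0], 0.3, size=(18, 5))
rum = rng.normal([2.0, 4.0, 3.0, 5.0, 1.0], 0.3, size=(21, 5))

X = np.vstack([cachaca, rum])
Xc = X - X.mean(axis=0)                      # mean-centre the data matrix
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[0]                          # scores on the first principal component
```

With a between-class difference this large relative to the within-class spread, the two classes fall on opposite sides of the first principal component, which is the kind of discrimination reported in the abstract.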
NASA Astrophysics Data System (ADS)
Fan, Fan; Ma, Yong; Dai, Xiaobing; Mei, Xiaoguang
2018-04-01
Infrared image enhancement is an important and necessary task in infrared imaging systems. In this paper, by defining the contrast in terms of the area between adjacent non-zero histogram bins, a novel analytical model is proposed to enlarge these areas so that the contrast is increased. In addition, the analytical model is regularized by a penalty term based on the saliency value, so that salient regions are enhanced as well. Thus, both whole images and salient regions can be enhanced, and rank consistency is preserved. Comparisons on 8-bit images show that the proposed method can enhance infrared images with more detail.
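The paper's area-based analytical model is not reproduced here, but the classic global histogram-equalization baseline that such contrast-enhancement methods are typically compared against can be sketched as:

```python
import numpy as np

def equalize(img):
    """Classic global histogram equalization for an 8-bit image:
    map grey levels through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # cumulative count at the first occupied bin
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# A low-contrast 8-bit test image using only grey levels 100..119
img = np.arange(100, 120, dtype=np.uint8).repeat(5).reshape(10, 10)
out = equalize(img)
```

Note that this baseline maximizes global contrast without any notion of saliency or rank consistency, which is exactly the gap the paper's regularized model is designed to address.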
Analytical and multibody modeling for the power analysis of standing jumps.
Palmieri, G; Callegari, M; Fioretti, S
2015-01-01
Two methods for the power analysis of standing jumps are proposed and compared in this article. The first method is based on a simple analytical formulation which requires as input the coordinates of the center of gravity at three specified instants of the jump. The second method is based on a multibody model that simulates the jumps by processing the data obtained from a three-dimensional (3D) motion capture system and the dynamometric measurements obtained from the force platforms. The multibody model is developed with OpenSim, an open-source software package which provides tools for the kinematic and dynamic analyses of 3D human body models. The study is focused on two of the typical tests used to evaluate the muscular activity of the lower limbs: the counter movement jump and the standing long jump. The comparison between the results obtained by the two methods confirms that the proposed analytical formulation is correct and represents a simple tool suitable for a preliminary analysis of the total mechanical work and mean power exerted in standing jumps.
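The simple analytical route described here can be sketched from basic mechanics, assuming the three instants are the deepest crouch, takeoff, and flight apex (the mass, heights, and push-off time below are illustrative, not the paper's data):

```python
# Hedged sketch of an analytical work/power estimate for a standing jump
# from the center-of-gravity (CoG) height at three instants.
G = 9.81  # m/s^2

def jump_power(mass, h_crouch, h_takeoff, h_apex, push_time):
    # Takeoff velocity from the ballistic rise after takeoff.
    v_takeoff = (2 * G * (h_apex - h_takeoff)) ** 0.5
    # Push-off work: raise the CoG plus supply the takeoff kinetic energy.
    work = mass * G * (h_takeoff - h_crouch) + 0.5 * mass * v_takeoff ** 2
    return work, work / push_time

w, p = jump_power(mass=70.0, h_crouch=0.65, h_takeoff=1.05,
                  h_apex=1.40, push_time=0.30)
print(round(w, 1), round(p, 1))  # total work (J), mean power (W)
```

Because the takeoff velocity follows from the ballistic rise, only CoG positions and the push-off duration are needed, which is the appeal of the analytical method over full multibody simulation.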
Method of sections in analytical calculations of pneumatic tires
NASA Astrophysics Data System (ADS)
Tarasov, V. N.; Boyarkina, I. V.
2018-01-01
Analytical calculations in pneumatic tire theory are preferable to experimental methods. The method of sections applied to a pneumatic tire shell yields equations for the intensities of internal forces in carcass elements and bead rings. Analytical dependencies of the intensity of distributed forces have been obtained at tire equator points, on side walls (poles), and at pneumatic tire bead rings. For the first time, cylindrical surfaces are used as secant surfaces alongside secant planes. The tire capacity equation has been obtained using the method of sections, by means of which a contact body is cut off from the tire carcass along the contact perimeter by a surface normal to the bearing surface. It has been established that the Laplace equation for this class of pneumatic tire problems contains two unknowns, which requires the generation of additional equations. The developed computational schemes of pneumatic tire sections and the new equations help accelerate the improvement of pneumatic tire structures during engineering.
Study on the radial vibration and acoustic field of an isotropic circular ring radiator.
Lin, Shuyu; Xu, Long
2012-01-01
Based on the exact analytical theory, the radial vibration of an isotropic circular ring is studied and its electro-mechanical equivalent circuit is obtained. By means of the equivalent circuit model, the resonance frequency equation is derived, and the relationship between the radial resonance frequency, the radial displacement amplitude magnification, the geometrical dimensions, and the material properties is analyzed. For comparison, a numerical method is used to simulate the radial vibration of isotropic circular rings. The resonance frequency and the radial vibrational displacement distribution are obtained, and the radial radiation acoustic field of the circular ring in radial vibration is simulated. It is shown that the radial resonance frequencies from the analytical method and the numerical method are in good agreement when the height is much less than the radius. When the height becomes large relative to the radius, the frequency deviation between the two methods becomes large. The reason is that the exact analytical theory is limited to thin circular rings whose height is much less than their radius. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Dinç, Erdal; Kanbur, Murat; Baleanu, Dumitru
2007-10-01
Comparative simultaneous determination of chlortetracycline and benzocaine in a commercial veterinary powder product was carried out by continuous wavelet transform (CWT) and classical derivative spectrophotometry. In this quantitative spectral analysis, the two proposed analytical methods do not require any chemical separation process. In the first step, several wavelet families were tested to find an optimal CWT for the overlapping signal processing of the analyzed compounds. Subsequently, we observed that the coiflets (COIF-CWT) method with dilation parameter a = 400 gives suitable results for this analytical application. For comparison, the classical derivative spectrophotometry (CDS) approach was also applied to the simultaneous quantitative resolution of the same analytical problem. Calibration functions were obtained by measuring the transform amplitudes corresponding to zero-crossing points for both the CWT and CDS methods. The utility of these two analytical approaches was verified by analyzing various synthetic mixtures of chlortetracycline and benzocaine, and they were applied to real samples of a veterinary powder formulation. The experimental results obtained from the COIF-CWT approach were statistically compared with those obtained by classical derivative spectrophotometry, and successful results were reported.
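The zero-crossing calibration used by both the CWT and CDS methods can be sketched with synthetic Gaussian bands (band positions and widths below are invented; the real signals are CWT or derivative transforms of measured spectra):

```python
import math

# Sketch of the zero-crossing idea: at a wavelength where the derivative
# of component B's band is zero, the derivative of the mixture spectrum
# depends only on component A, so A can be calibrated there.
def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def dgauss(x, mu, sigma):  # first derivative of the Gaussian band
    return -(x - mu) / sigma ** 2 * gauss(x, mu, sigma)

MU_A, MU_B, SIG = 350.0, 360.0, 8.0   # invented band centers/width (nm)
x0 = MU_B  # the first derivative of B's band crosses zero at its maximum

def mixture_derivative(cA, cB, x):
    return cA * dgauss(x, MU_A, SIG) + cB * dgauss(x, MU_B, SIG)

# Amplitude at x0 scales with cA and is insensitive to cB.
a1 = mixture_derivative(1.0, 5.0, x0)
a2 = mixture_derivative(2.0, 0.5, x0)
print(a2 / a1)  # ratio tracks cA only
```

The same logic carries over to the CWT amplitudes: the wavelet transform of the interferent's band also has zero-crossing points at which the analyte can be calibrated without separation.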
Lozano, Valeria A; Ibañez, Gabriela A; Olivieri, Alejandro C
2009-10-05
In the presence of analyte-background interactions and a significant background signal, both second-order multivariate calibration and standard addition are required for successful analyte quantitation achieving the second-order advantage. This report discusses a modified second-order standard addition method, in which the test data matrix is subtracted from the standard addition matrices, and quantitation proceeds via the classical external calibration procedure. It is shown that this novel data processing method allows one to apply not only parallel factor analysis (PARAFAC) and multivariate curve resolution-alternating least-squares (MCR-ALS), but also the recently introduced and more flexible partial least-squares (PLS) models coupled to residual bilinearization (RBL). In particular, the multidimensional variant N-PLS/RBL is shown to produce the best analytical results. The comparison is carried out with the aid of a set of simulated data, as well as two experimental data sets: one aimed at the determination of salicylate in human serum in the presence of naproxen as an additional interferent, and the second one devoted to the analysis of danofloxacin in human serum in the presence of salicylate.
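A scalar caricature of the modified standard-addition step may help (the real method subtracts whole second-order data matrices and relies on PARAFAC, MCR-ALS, or N-PLS/RBL to model the background; here the background is simply assumed known, and all numbers are invented):

```python
# Sketch: subtracting the test-sample signal from each standard-addition
# signal leaves only the added standard's contribution, so an ordinary
# external calibration line through the origin recovers the in-matrix
# sensitivity; the analyte concentration then follows from the test signal.
def fit_slope_through_origin(x, y):
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

k_matrix, background = 2.5, 4.0   # matrix-affected sensitivity, blank signal
c_true = 3.0                      # unknown concentration in the test sample

def signal(c):
    return k_matrix * c + background

added = [0.0, 1.0, 2.0, 4.0]                      # standard additions
std_add_signals = [signal(c_true + a) for a in added]
diffs = [s - signal(c_true) for s in std_add_signals]  # subtract test data

k_est = fit_slope_through_origin(added, diffs)    # external calibration
c_est = (signal(c_true) - background) / k_est
print(round(k_est, 3), round(c_est, 3))
```

The point of the subtraction is that the calibration slope is estimated under the sample's own matrix, preserving the second-order advantage when the background term is handled by the multiway models instead of being assumed known.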
Comparison of fuzzy AHP and fuzzy TODIM methods for landfill location selection.
Hanine, Mohamed; Boutkhoum, Omar; Tikniouine, Abdessadek; Agouti, Tarik
2016-01-01
Landfill location selection is a multi-criteria decision problem and has a strategic importance for many regions. The conventional methods for landfill location selection are insufficient in dealing with the vague or imprecise nature of linguistic assessment. To resolve this problem, fuzzy multi-criteria decision-making methods are proposed. The aim of this paper is to use fuzzy TODIM (the acronym for Interactive and Multi-criteria Decision Making in Portuguese) and the fuzzy analytic hierarchy process (AHP) methods for the selection of landfill location. The proposed methods have been applied to a landfill location selection problem in the region of Casablanca, Morocco. After determining the criteria affecting the landfill location decisions, fuzzy TODIM and fuzzy AHP methods are applied to the problem and results are presented. The comparisons of these two methods are also discussed.
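For readers unfamiliar with AHP, the crisp core of the method is a priority vector derived from a pairwise comparison matrix; a common route is the geometric-mean method, sketched below with invented judgments for three landfill criteria (fuzzy AHP replaces these crisp judgments with triangular fuzzy numbers):

```python
import math

# Crisp AHP sketch: priorities from a pairwise comparison matrix by the
# geometric-mean method. Criteria and judgments are illustrative only.
A = [  # e.g. cost, distance to residents, hydrogeology
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
]

gm = [math.prod(row) ** (1 / len(row)) for row in A]  # row geometric means
weights = [g / sum(gm) for g in gm]                   # normalized priorities
print([round(w, 3) for w in weights])
```

Each candidate site is then scored against the criteria and ranked by the weighted sum; TODIM instead ranks alternatives by pairwise dominance with loss aversion, which is why the paper compares the two.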
Transport methods and interactions for space radiations
NASA Technical Reports Server (NTRS)
Wilson, John W.; Townsend, Lawrence W.; Schimmerling, Walter S.; Khandelwal, Govind S.; Khan, Ferdous S.; Nealy, John E.; Cucinotta, Francis A.; Simonsen, Lisa C.; Shinn, Judy L.; Norbury, John W.
1991-01-01
A review of the program in space radiation protection at the Langley Research Center is given. The relevant Boltzmann equations are given with a discussion of approximation procedures for space applications. The interaction coefficients are related to solution of the many-body Schroedinger equation with nuclear and electromagnetic forces. Various solution techniques are discussed to obtain relevant interaction cross sections with extensive comparison with experiments. Solution techniques for the Boltzmann equations are discussed in detail. Transport computer code validation is discussed through analytical benchmarking, comparison with other codes, comparison with laboratory experiments and measurements in space. Applications to lunar and Mars missions are discussed.
NASA Astrophysics Data System (ADS)
Kotseva, V. I.
A survey, analysis, and comparison of 15 types of intermediate orbits used in satellite motion theories for the purposes of both geodesy and geodynamics have been made. The paper continues earlier investigations directed at the practical realization of both analytical and semi-analytical methods for satellite orbit determination. It is shown that the intermediate orbit proposed and elaborated by Aksenov, Grebenikov, and Demin has advantages over all the other intermediate orbits.
Bonomo, Anthony L; Isakson, Marcia J; Chotiros, Nicholas P
2015-04-01
The finite element method is used to model acoustic scattering from rough poroelastic surfaces. Both monostatic and bistatic scattering strengths are calculated and compared with three analytic models: perturbation theory, the Kirchhoff approximation, and the small-slope approximation. It is found that the small-slope approximation is in very close agreement with the finite element results for all cases studied and that perturbation theory and the Kirchhoff approximation can be considered valid in those instances where their predictions match those given by the small-slope approximation.
Lee, Sang Hun; Yoo, Myung Hoon; Park, Jun Woo; Kang, Byung Chul; Yang, Chan Joo; Kang, Woo Suk; Ahn, Joong Ho; Chung, Jong Woo; Park, Hong Ju
2018-06-01
To evaluate whether video head impulse test (vHIT) gains are dependent on the measuring device and method of analysis. Prospective study. vHIT was performed in 25 healthy subjects using two devices simultaneously. vHIT gains were compared between these instruments and using five different methods of comparing position and velocity gains during head movement intervals. The two devices produced different vHIT gain results with the same method of analysis. There were also significant differences in the vHIT gains measured using different analytical methods. The gain analytic method that compares the areas under the velocity curve (AUC) of the head and eye movements during head movements showed lower vHIT gains than a method that compared the peak velocities of the head and eye movements. The former method produced the vHIT gain with the smallest standard deviation among the five procedures tested in this study. vHIT gains differ in normal subjects depending on the device and method of analysis used, suggesting that it is advisable for each device to have its own normal values. Gain calculations that compare the AUC of the head and eye movements during the head movements show the smallest variance.
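The two gain definitions at issue can be sketched for one synthetic impulse (trapezoidal AUC vs peak-velocity ratio; the velocity traces are invented, and with a perfectly scaled eye response the two definitions coincide):

```python
# Sketch of two vHIT gain calculations for one head-impulse interval:
# ratio of areas under the velocity curves (AUC) vs ratio of peaks.
def auc(ts, vs):  # trapezoidal area under a sampled velocity trace
    return sum((vs[i] + vs[i + 1]) / 2 * (ts[i + 1] - ts[i])
               for i in range(len(ts) - 1))

def gains(ts, head, eye):
    return auc(ts, eye) / auc(ts, head), max(eye) / max(head)

ts = [i * 0.01 for i in range(11)]  # 100 ms impulse, 10 ms samples
head = [0, 40, 90, 150, 200, 230, 200, 150, 90, 40, 0]  # deg/s, invented
eye = [v * 0.9 for v in head]       # a perfectly scaled eye response
g_auc, g_peak = gains(ts, head, eye)
print(round(g_auc, 3), round(g_peak, 3))
```

On real traces the eye response is not an exact scaling of the head trace, which is consistent with the study's finding that the AUC-based gain runs lower and with a smaller standard deviation than the peak-based gain.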
Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinlvas; Bakhtiari-Nejad, Maryam
2009-01-01
This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
Ledermüller, Katrin; Schütz, Martin
2014-04-28
A multistate local CC2 response method for the calculation of analytic energy gradients with respect to nuclear displacements is presented for ground and electronically excited states. The gradient enables the search for equilibrium geometries of extended molecular systems. Laplace transform is used to partition the eigenvalue problem in order to obtain an effective singles eigenvalue problem and adaptive, state-specific local approximations. This leads to an approximation in the energy Lagrangian, which however is shown (by comparison with the corresponding gradient method without Laplace transform) to be of no concern for geometry optimizations. The accuracy of the local approximation is tested and the efficiency of the new code is demonstrated by application calculations devoted to a photocatalytic decarboxylation process of present interest.
Conditions for synchronization in Josephson-junction arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chernikov, A.A.; Schmidt, G.
An effective perturbation-theoretical method has been developed to study the dynamics of Josephson junction series arrays. It is shown that the inclusion of junction capacitances, often ignored, has a significant impact on synchronization. Comparison of analytic with computational results over a wide range of parameters shows excellent agreement.
Boiano, J M; Wallace, M E; Sieber, W K; Groff, J H; Wang, J; Ashley, K
2000-08-01
A field study was conducted with the goal of comparing the performance of three recently developed or modified sampling and analytical methods for the determination of airborne hexavalent chromium (Cr(VI)). The study was carried out in a hard chrome electroplating facility and in a jet engine manufacturing facility where airborne Cr(VI) was expected to be present. The analytical methods evaluated included two laboratory-based procedures (OSHA Method ID-215 and NIOSH Method 7605) and a field-portable method (NIOSH Method 7703). These three methods employ an identical sampling methodology: collection of Cr(VI)-containing aerosol on a polyvinyl chloride (PVC) filter housed in a sampling cassette, which is connected to a personal sampling pump calibrated at an appropriate flow rate. The analysis in all three methods involves extraction of the PVC filter in alkaline buffer solution, chemical isolation of the Cr(VI) ion, complexation of the Cr(VI) ion with 1,5-diphenylcarbazide, and spectrometric measurement of the violet chromium diphenylcarbazone complex at 540 nm. However, there are notable specific differences within the sample preparation procedures used in the three methods. To assess the comparability of the three measurement protocols, a total of 20 side-by-side air samples were collected, equally divided between a chromic acid electroplating operation and a spray paint operation where water soluble forms of Cr(VI) were used. A range of Cr(VI) concentrations from 0.6 to 960 microg m(-3), with Cr(VI) mass loadings ranging from 0.4 to 32 microg, was measured at the two operations. The equivalence of the means of the log-transformed Cr(VI) concentrations obtained from the different analytical methods was compared. Based on analysis of variance (ANOVA) results, no statistically significant differences were observed between mean values measured using each of the three methods.
Small but statistically significant differences were observed between results obtained from performance evaluation samples for the NIOSH field method and the OSHA laboratory method.
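The statistical comparison reported (one-way ANOVA on log-transformed concentrations) can be sketched as follows; the concentrations below are invented placeholders spanning the study's 0.6-960 microg m(-3) range, not the measured data:

```python
import math

# Sketch: one-way ANOVA F statistic for log-transformed Cr(VI)
# concentrations from three methods. A small F (relative to the critical
# value) means no significant difference between method means.
def one_way_anova(groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

osha = [0.62, 5.1, 48.0, 310.0, 920.0]         # microg m^-3, invented
niosh_lab = [0.60, 5.3, 45.0, 330.0, 950.0]
niosh_field = [0.66, 4.9, 50.0, 300.0, 900.0]
logged = [[math.log(x) for x in g] for g in (osha, niosh_lab, niosh_field)]
F = one_way_anova(logged)
print(round(F, 4))
```

The log transform is what makes ANOVA reasonable here: side-by-side samples span three orders of magnitude, so method differences are multiplicative rather than additive.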
Ghanbari, Behzad
2014-01-01
We study the convergence of the homotopy analysis method (HAM) for solving special nonlinear Volterra-Fredholm integrodifferential equations. A sufficient condition for the convergence of the method is briefly addressed. Some illustrative examples are also presented to demonstrate the validity and applicability of the technique. Comparison of the results obtained by HAM with the exact solution shows that the method is reliable and capable of providing an analytic treatment for solving such equations.
A laser interferometer for measuring skin friction in three-dimensional flows
NASA Technical Reports Server (NTRS)
Monson, D. J.
1983-01-01
A new, nonintrusive method is described for measuring skin friction in three-dimensional flows with unknown direction. The method uses a laser interferometer to measure the changing slope of a thin oil film applied to a surface experiencing shear stress. The details of the method are described, and skin friction measurements taken in a swirling three-dimensional boundary-layer flow are presented. Comparisons between analytical results and experimental values from the laser interferometer method and from a bidirectional surface-fence gauge are made.
NASA Astrophysics Data System (ADS)
Woldegiorgis, Befekadu Taddesse; van Griensven, Ann; Pereira, Fernando; Bauwens, Willy
2017-06-01
Most common numerical solutions used in CSTR-based in-stream water quality simulators are susceptible to instabilities and/or solution inconsistencies. Usually, they cope with instability problems by adopting computationally expensive small time steps. However, some simulators use fixed computation time steps and hence do not have the flexibility to do so. This paper presents a novel quasi-analytical solution for CSTR-based water quality simulators of an unsteady system. The robustness of the new method is compared with the commonly used fourth-order Runge-Kutta methods, the Euler method and three versions of the SWAT model (SWAT2012, SWAT-TCEQ, and ESWAT). The performance of each method is tested for different hypothetical experiments. Besides the hypothetical data, a real case study is used for comparison. The growth factors we derived as stability measures for the different methods and the R-factor—considered as a consistency measure—turned out to be very useful for determining the most robust method. The new method outperformed all the numerical methods used in the hypothetical comparisons. The application for the Zenne River (Belgium) shows that the new method provides stable and consistent BOD simulations whereas the SWAT2012 model is shown to be unstable for the standard daily computation time step. The new method unconditionally simulates robust solutions. Therefore, it is a reliable scheme for CSTR-based water quality simulators that use first-order reaction formulations.
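The robustness argument can be illustrated on the simplest CSTR balance. For dC/dt = (Q/V)(Cin - C) - kC with inputs frozen over a step, an exact update is available in closed form, while explicit Euler diverges once a·dt exceeds 2 (parameter values are illustrative; the paper's simulators solve coupled water-quality kinetics, not this toy):

```python
import math

# First-order CSTR balance: dC/dt = (Q/V)*(Cin - C) - k*C.
# Exact update over a step dt: C -> Cinf + (C - Cinf)*exp(-a*dt),
# with a = Q/V + k and Cinf = (Q/V)*Cin / a. Unconditionally stable.
QV, k, Cin = 0.2, 5.0, 10.0   # 1/day, 1/day, mg/L (illustrative)
a = QV + k
Cinf = QV * Cin / a

def exact_step(C, dt):
    return Cinf + (C - Cinf) * math.exp(-a * dt)

def euler_step(C, dt):        # explicit Euler, same step
    return C + dt * (QV * (Cin - C) - k * C)

C_exact = C_euler = 10.0
dt = 1.0                      # daily step: a*dt = 5.2 > 2, Euler unstable
for _ in range(20):
    C_exact = exact_step(C_exact, dt)
    C_euler = euler_step(C_euler, dt)
print(round(C_exact, 4), C_euler)
```

The exact update converges to the steady state for any dt, which is the sense in which an analytical (or quasi-analytical) scheme suits simulators locked to a fixed daily time step.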
Biyeyeme Bi Mve, Marie-Jeanne; Cloutier, Yves; Lacombe, Nancy; Lavoie, Jacques; Debia, Maximilien; Marchand, Geneviève
2016-12-01
Heating, ventilation, and air-conditioning (HVAC) systems contain dust that can be contaminated with fungal spores (molds), which may have harmful effects on the respiratory health of the occupants of a building. HVAC cleaning is often based on visual inspection of the quantity of dust, without taking the mold content into account. The purpose of this study is to propose a method to estimate fungal contamination of dust in HVAC systems. Comparisons of different analytical methods were carried out on dust deposited in a controlled-atmosphere exposure chamber. Sixty samples were analyzed using four methods: culture, direct microscopic spore count (DMSC), β-N-acetylhexosaminidase (NAHA) dosing and qPCR. For each method, the limit of detection, replicability, and repeatability were assessed. The Pearson correlation coefficients between the methods were also evaluated. Depending on the analytical method, mean spore concentrations per 100 cm2 of dust ranged from 10,000 to 682,000. Limits of detection varied from 120 to 217,000 spores/100 cm2. Replicability and repeatability were between 1 and 15%. Pearson correlation coefficients varied from -0.217 to 0.83. The 18S qPCR showed the best sensitivity and precision, as well as the best correlation with the culture method. PCR targets only molds, and a total count of fungal DNA is obtained. Among the methods, mold DNA amplification by qPCR is the method suggested for estimating the fungal content found in dust of HVAC systems.
Role and Evaluation of Interlaboratory Comparison Results in Laboratory Accreditation
NASA Astrophysics Data System (ADS)
Bode, P.
2008-08-01
Participation in interlaboratory comparisons provides laboratories an opportunity for independent assessment of their analytical performance, both in an absolute way and in comparison with other techniques. However, such comparisons are hindered by differences in the way laboratories participate, e.g. at best measurement capability or under routine conditions. Neutron activation analysis laboratories, which determine total mass fractions, often see themselves classified as `outliers' since the majority of other participants employ techniques with incomplete digestion methods. These considerations are discussed in relation to the way results from interlaboratory comparisons are evaluated by accreditation bodies following the requirements of Clause 5.9.1 of ISO/IEC 17025:2005. The discussion and conclusions draw largely on experience in the author's own laboratory.
Hughes, Sarah A; Huang, Rongfu; Mahaffey, Ashley; Chelme-Ayala, Pamela; Klamerth, Nikolaus; Meshref, Mohamed N A; Ibrahim, Mohamed D; Brown, Christine; Peru, Kerry M; Headley, John V; Gamal El-Din, Mohamed
2017-11-01
There are several established methods for the determination of naphthenic acids (NAs) in waters associated with oil sands mining operations. Due to their highly complex nature, measured concentration and composition of NAs vary depending on the method used. This study compared different common sample preparation techniques, analytical instrument methods, and analytical standards to measure NAs in groundwater and process water samples collected from an active oil sands operation. In general, the high- and ultrahigh-resolution methods, namely ultra-performance liquid chromatography time-of-flight mass spectrometry (UPLC-TOF-MS) and Orbitrap mass spectrometry (Orbitrap-MS), were within an order of magnitude of the Fourier transform infrared spectroscopy (FTIR) methods. The gas chromatography mass spectrometry (GC-MS) methods consistently had the highest NA concentrations and greatest standard error. Total NAs concentration was not statistically different between sample preparation of solid phase extraction and liquid-liquid extraction. Calibration standards influenced quantitation results. This work provided a comprehensive understanding of the inherent differences in the various techniques available to measure NAs and hence the potential differences in measured amounts of NAs in samples. Results from this study will contribute to the analytical method standardization for NA analysis in oil sands related water samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
Dönmez, Ozlem Aksu; Aşçi, Bürge; Bozdoğan, Abdürrezzak; Sungur, Sidika
2011-02-15
A simple and rapid analytical procedure was proposed for the determination of chromatographic peaks by means of partial least squares multivariate calibration (PLS) of high-performance liquid chromatography with diode array detection (HPLC-DAD). The method is exemplified with the analysis of quaternary mixtures of potassium guaiacolsulfonate (PG), guaifenesin (GU), diphenhydramine HCl (DP) and carbetapentane citrate (CP) in syrup preparations. In this method, the area does not need to be directly measured and predictions are more accurate. Though the chromatographic and spectral peaks of the analytes were heavily overlapped and interferents coeluted with the compounds studied, good recoveries of analytes could be obtained with HPLC-DAD coupled with PLS calibration. This method was tested by analyzing synthetic mixtures of PG, GU, DP and CP. As a comparison method, a classical HPLC method was used. The proposed methods were applied to syrup samples containing the four drugs and the results obtained were statistically compared with each other. Finally, the main advantages of the HPLC-PLS method over the classical HPLC method are emphasized: a simpler mobile phase, shorter analysis time, and no need for an internal standard or gradient elution. Copyright © 2010 Elsevier B.V. All rights reserved.
Olivieri, Alejandro C
2005-08-01
Sensitivity and selectivity are important figures of merit in multiway analysis, regularly employed for comparison of the analytical performance of methods and for experimental design and planning. They are especially interesting in the second-order advantage scenario, where the latter property allows for the analysis of samples with a complex background, permitting analyte determination even in the presence of unsuspected interferences. Since no general theory exists for estimating the multiway sensitivity, Monte Carlo numerical calculations have been developed for estimating variance inflation factors, as a convenient way of assessing both sensitivity and selectivity parameters for the popular parallel factor (PARAFAC) analysis and also for related multiway techniques. When the second-order advantage is achieved, the existing expressions derived from net analyte signal theory are only able to adequately cover cases where a single analyte is calibrated using second-order instrumental data. However, they fail for certain multianalyte cases, or when third-order data are employed, calling for an extension of net analyte theory. The results have strong implications in the planning of multiway analytical experiments.
Truzzi, Cristina; Annibaldi, Anna; Illuminati, Silvia; Finale, Carolina; Scarponi, Giuseppe
2014-05-01
The study compares official spectrophotometric methods for the determination of proline content in honey - those of the International Honey Commission (IHC) and the Association of Official Analytical Chemists (AOAC) - with the original Ough method. Results show that the extra time-consuming treatment stages added by the IHC method with respect to the Ough method are pointless. We demonstrate that the AOAC method proves to be the best in terms of accuracy and time saving. The optimized waiting time for the absorbance recording is set at 35 min from the removal of the reaction tubes from the boiling bath used in the sample treatment. The optimized method was validated in the matrix: linearity up to 1800 mg L(-1), limit of detection 20 mg L(-1), limit of quantification 61 mg L(-1). The method was applied to 43 unifloral honey samples from the Marche region, Italy. Copyright © 2013 Elsevier Ltd. All rights reserved.
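Validation figures like those quoted (linearity, limit of detection, limit of quantification) typically come from a calibration line via the common rules LoD = 3.3 s/slope and LoQ = 10 s/slope; a sketch with invented absorbance data over the stated 0-1800 mg L(-1) range (not the paper's measurements):

```python
# Sketch: LoD/LoQ from the residual standard deviation of a calibration
# line, using LoD = 3.3*s/slope and LoQ = 10*s/slope.
def linreg(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    intercept = my - slope * mx
    resid_sd = (sum((b - (intercept + slope * a)) ** 2
                    for a, b in zip(x, y)) / (n - 2)) ** 0.5
    return slope, intercept, resid_sd

conc = [0, 300, 600, 900, 1200, 1500, 1800]   # mg/L proline standards
absb = [0.002, 0.151, 0.305, 0.448, 0.602, 0.753, 0.898]  # invented
slope, intercept, s = linreg(conc, absb)
lod = 3.3 * s / slope
loq = 10 * s / slope
print(round(lod, 1), round(loq, 1))  # mg/L
```

With these invented residuals the sketch lands in the same few-tens-of-mg/L region as the abstract's figures, but the numbers are illustrative only.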
Goicoechea, H C; Olivieri, A C
2001-07-01
A newly developed multivariate method involving net analyte preprocessing (NAP) was tested using central composite calibration designs of progressively decreasing size regarding the multivariate simultaneous spectrophotometric determination of three active components (phenylephrine, diphenhydramine and naphazoline) and one excipient (methylparaben) in nasal solutions. Its performance was evaluated and compared with that of partial least-squares (PLS-1). Minimisation of the calibration predicted error sum of squares (PRESS) as a function of a moving spectral window helped to select appropriate working spectral ranges for both methods. The comparison of NAP and PLS results was carried out using two tests: (1) the elliptical joint confidence region for the slope and intercept of a predicted versus actual concentrations plot for a large validation set of samples and (2) the D-optimality criterion concerning the information content of the calibration data matrix. Extensive simulations and experimental validation showed that, unlike PLS, the NAP method is able to furnish highly satisfactory results when the calibration set is reduced from a full four-component central composite to a fractional central composite, as expected from the modelling requirements of net analyte based methods.
Rudzki, Piotr J; Gniazdowska, Elżbieta; Buś-Kwaśnik, Katarzyna
2018-06-05
Liquid chromatography coupled to mass spectrometry (LC-MS) is a powerful tool for studying pharmacokinetics and toxicokinetics. Reliable bioanalysis requires the characterization of the matrix effect, i.e. influence of the endogenous or exogenous compounds on the analyte signal intensity. We have compared two methods for the quantitation of matrix effect. The CVs(%) of internal standard normalized matrix factors recommended by the European Medicines Agency were evaluated against internal standard normalized relative matrix effects derived from Matuszewski et al. (2003). Both methods use post-extraction spiked samples, but matrix factors require also neat solutions. We have tested both approaches using analytes of diverse chemical structures. The study did not reveal relevant differences in the results obtained with both calculation methods. After normalization with the internal standard, the CV(%) of the matrix factor was on average 0.5% higher than the corresponding relative matrix effect. The method adopted by the European Medicines Agency seems to be slightly more conservative in the analyzed datasets. Nine analytes of different structures enabled a general overview of the problem, still, further studies are encouraged to confirm our observations. Copyright © 2018 Elsevier B.V. All rights reserved.
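The two calculations can be sketched side by side. With a single neat-solution reference, the EMA-style internal-standard-normalized matrix factor is a constant multiple of the IS-normalized response ratio, so the two CVs coincide exactly in this toy; in practice the Matuszewski relative matrix effect is computed from calibration-line slopes across lots, which is where small differences such as those reported can arise. All peak areas below are invented:

```python
# Sketch: CV(%) of IS-normalized matrix factors (EMA approach, needs neat
# solutions) vs CV(%) of IS-normalized responses across matrix lots.
neat_analyte, neat_is = 1000.0, 800.0   # peak areas in neat solution
# (analyte area, IS area) in post-extraction spiked matrix, six lots:
lots = [(930, 760), (905, 745), (960, 770),
        (890, 735), (940, 755), (915, 750)]

def cv(vals):
    m = sum(vals) / len(vals)
    sd = (sum((v - m) ** 2 for v in vals) / (len(vals) - 1)) ** 0.5
    return 100 * sd / m

# EMA: IS-normalized matrix factor per lot = MF_analyte / MF_IS.
nmf = [(a / neat_analyte) / (i / neat_is) for a, i in lots]
# Matuszewski-style: spread of the IS-normalized response across lots.
ratio = [a / i for a, i in lots]
print(round(cv(nmf), 2), round(cv(ratio), 2))
```

The identity in this single-point toy is consistent with the abstract's finding that the two approaches gave nearly the same answer, the EMA variant being only marginally more conservative on real data.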
An analytical study of the endoreversible Curzon-Ahlborn cycle for a non-linear heat transfer law
NASA Astrophysics Data System (ADS)
Páez-Hernández, Ricardo T.; Portillo-Díaz, Pedro; Ladino-Luna, Delfino; Ramírez-Rojas, Alejandro; Pacheco-Paez, Juan C.
2016-01-01
In the present article, an endoreversible Curzon-Ahlborn engine is studied by considering a non-linear heat transfer law, particularly the Dulong-Petit heat transfer law, using the `componendo and dividendo' rule as well as a simple differentiation to obtain the Curzon-Ahlborn efficiency, as proposed by Agrawal in 2009. This rule is actually a change of variable that reduces a two-variable problem to a one-variable problem. From elementary calculus, we obtain analytical expressions for the efficiency and the power output. The efficiency is given only in terms of the temperatures of the reservoirs, as in both the Carnot and Curzon-Ahlborn cycles. We compare efficiencies measured in real power plants with the theoretical values from the analytical expressions obtained in this article and by several other authors in the literature. This comparison shows that the theoretical values of efficiency are close to the real ones, and in some cases they are exactly the same. We can therefore say that the Agrawal method gives a good approximation of thermal engine efficiencies.
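A minimal numeric check of the Newtonian-law formulas used as the baseline for such comparisons (the paper's Dulong-Petit analysis yields a different closed form, not reproduced here). The West Thurrock coal plant figures (Th ≈ 838 K, Tc ≈ 298 K, observed efficiency ≈ 0.36) are the example commonly quoted alongside the Curzon-Ahlborn result:

```python
# Carnot efficiency vs the Curzon-Ahlborn maximum-power efficiency
# (Newtonian heat transfer law), evaluated for reservoir temperatures.
def eta_carnot(tc, th):
    return 1 - tc / th

def eta_curzon_ahlborn(tc, th):
    return 1 - (tc / th) ** 0.5

tc, th = 298.0, 838.0   # often-quoted West Thurrock plant temperatures
print(round(eta_carnot(tc, th), 3), round(eta_curzon_ahlborn(tc, th), 3))
```

The Curzon-Ahlborn value (about 0.40) sits far closer to the observed 0.36 than the Carnot bound (about 0.64), which is the kind of agreement with real plants the abstract refers to.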
Helbling, Ignacio M; Ibarra, Juan C D; Luna, Julio A
2012-02-28
A mathematical model of the controlled release of a drug from one-layer torus-shaped devices is presented. Analytical solutions based on the Refined Integral Method (RIM) are derived. The validity and utility of the model are ascertained by comparing the simulation results with experimental release data for matrix-type vaginal rings reported in the literature. For the comparisons, the pair-wise procedure is used to measure quantitatively the fit of the theoretical predictions to the experimental data. Good agreement between the model predictions and the experimental data is observed. A comparison with a previously reported model is also presented. More accurate results are achieved for small A/C(s) ratios. Copyright © 2011 Elsevier B.V. All rights reserved.
Portal scatter to primary dose ratio of 4 to 18 MV photon spectra incident on heterogeneous phantoms
NASA Astrophysics Data System (ADS)
Ozard, Siobhan R.
Electronic portal imagers designed and used to verify the positioning of a cancer patient undergoing radiation treatment can also be employed to measure the in vivo dose received by the patient. This thesis investigates the ratio of the dose from patient-scattered particles to the dose from primary (unscattered) photons at the imaging plane, called the scatter to primary dose ratio (SPR). The composition of the SPR according to the origin of scatter is analyzed more thoroughly than in previous studies. A new analytical method for calculating the SPR is developed and experimentally verified for heterogeneous phantoms. A novel technique that applies the analytical SPR method for in vivo dosimetry with a portal imager is evaluated. Monte Carlo simulation was used to determine the imager dose from patient-generated electrons and photons that scatter one or more times within the object. The database of SPRs reported from this investigation is new, since the contribution from patient-generated electrons was neglected by previous Monte Carlo studies. The SPR from patient-generated electrons was found here to be as large as 0.03. The analytical SPR method relies on the established result that the scatter dose is uniform for an air gap between the patient and the imager that is greater than 50 cm. It also applies the hypothesis that first-order Compton scatter alone is sufficient for scatter estimation. A comparison of analytical and measured SPRs for neck, thorax, and pelvis phantoms showed that the maximum difference was within +/-0.03, and the mean difference was less than +/-0.01 for most cases. This accuracy is comparable to that of similar analytical approaches that are limited to homogeneous phantoms. The analytical SPR method could replace lookup tables of measured scatter doses, which can require significant time to measure. In vivo doses were calculated by combining our analytical SPR method and the convolution/superposition algorithm.
Our calculated in vivo doses agreed within +/-3% with the doses measured in the phantom. The present in vivo method was faster compared to other techniques that use convolution/superposition. Our method is a feasible and satisfactory approach that contributes to on-line patient dose monitoring.
Mottier, Nicolas; Tharin, Manuel; Cluse, Camille; Crudo, Jean-René; Lueso, María Gómez; Goujon-Ginglinger, Catherine G; Jaquier, Anne; Mitova, Maya I; Rouget, Emmanuel G R; Schaller, Mathieu; Solioz, Jennifer
2016-09-01
Studies in environmentally controlled rooms have been used over the years to assess the impact of environmental tobacco smoke on indoor air quality. As new tobacco products are developed, it is important to determine their impact on air quality when used indoors. Before such an assessment can take place it is essential that the analytical methods used to assess indoor air quality are validated and shown to be fit for their intended purpose. Consequently, for this assessment, an environmentally controlled room was built and seven analytical methods, representing eighteen analytes, were validated. The validations were carried out with smoking machines using a matrix-based approach applying the accuracy profile procedure. The performances of the methods were compared for all three matrices under investigation: background air samples, the environmental aerosol of Tobacco Heating System THS 2.2, a heat-not-burn tobacco product developed by Philip Morris International, and the environmental tobacco smoke of a cigarette. The environmental aerosol generated by the THS 2.2 device did not have any appreciable impact on the performances of the methods. The comparison between the background and THS 2.2 environmental aerosol samples generated by smoking machines showed that only five compounds were higher when THS 2.2 was used in the environmentally controlled room. Regarding environmental tobacco smoke from cigarettes, the yields of all analytes were clearly above those obtained with the other two air sample types. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Double-multiple streamtube model for studying vertical-axis wind turbines
NASA Astrophysics Data System (ADS)
Paraschivoiu, Ion
1988-08-01
This work describes the present state of the art in the double-multiple streamtube method for modeling the Darrieus-type vertical-axis wind turbine (VAWT). Comparisons of the analytical results with other predictions and available experimental data show good agreement. This method, which incorporates dynamic-stall and secondary effects, can be used to generate a suitable aerodynamic-load model for structural design analysis of the Darrieus rotor.
NASA Technical Reports Server (NTRS)
Leong, Harrison Monfook
1988-01-01
General formulae for mapping optimization problems into systems of ordinary differential equations associated with artificial neural networks are presented. A comparison is made to optimization using gradient-search methods. The performance measure is the settling time from an initial state to a target state. A simple analytical example illustrates a situation where dynamical systems representing artificial neural network methods would settle faster than those representing gradient search. Settling time was investigated for a more complicated optimization problem using computer simulations. The problem was a simplified version of a problem in medical imaging: determining loci of cerebral activity from electromagnetic measurements at the scalp. The simulations showed that gradient-based systems typically settled 50 to 100 times faster than systems based on current neural network optimization methods.
NASA Astrophysics Data System (ADS)
Şenol, Mehmet; Alquran, Marwan; Kasmaei, Hamed Daei
2018-06-01
In this paper, we present an analytic-approximate solution of the time-fractional Zakharov-Kuznetsov equation. This model describes the behavior of weakly nonlinear ion acoustic waves in a plasma bearing cold ions and hot isothermal electrons in the presence of a uniform magnetic field. Basic definitions of the fractional derivatives are given in the Caputo sense. The perturbation-iteration algorithm (PIA) and the residual power series method (RPSM) are successfully applied to solve this equation. A convergence analysis is also presented for both methods. Numerical results are given and compared with the exact solutions. The comparison reveals that both methods are competitive, powerful, reliable, simple to use, and ready to apply to a wide range of fractional partial differential equations.
Macro elemental analysis of food samples by nuclear analytical technique
NASA Astrophysics Data System (ADS)
Syahfitri, W. Y. N.; Kurniawati, S.; Adventini, N.; Damastuti, E.; Lestiani, D. D.
2017-06-01
Energy-dispersive X-ray fluorescence (EDXRF) spectrometry is a non-destructive, rapid, multi-elemental, accurate, and environmentally friendly analysis technique compared with other detection methods, and is therefore well suited to food inspection. The macro elements calcium and potassium are important nutrients required by the human body for optimal physiological function, so the Ca and K content of various foods needs to be determined. The aim of this work is to demonstrate the applicability of EDXRF to food analysis. The analytical performance of non-destructive EDXRF was compared with that of two other analytical techniques: neutron activation analysis (NAA) and atomic absorption spectrometry (AAS). The methods were compared to cross-check the analysis results and to offset the limitations of each technique. Ca concentrations determined by EDXRF and AAS were not significantly different (p = 0.9687), nor were K concentrations determined by EDXRF and NAA (p = 0.6575). The correlation between the results was also examined: the Pearson correlations for Ca and K were 0.9871 and 0.9558, respectively. Method validation using SRM NIST 1548a Typical Diet was also applied. The results showed good agreement between methods; therefore, the EDXRF method can be used as an alternative for the determination of Ca and K in food samples.
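The Pearson correlation used above to compare the methods can be computed with a short routine; the paired values below are hypothetical illustrations, not the paper's data.

```python
from statistics import mean, stdev

def pearson(x, y):
    """Sample Pearson correlation coefficient between two paired data sets."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical paired Ca results (mg/100 g) from two methods (illustrative only)
method_a = [120.1, 85.3, 143.7, 99.2, 110.5]
method_b = [118.9, 86.0, 145.1, 98.7, 111.2]
print(round(pearson(method_a, method_b), 4))
```

A correlation near 1 indicates that the two methods rank and scale the samples consistently, which is the sense in which the abstract's values of 0.9871 and 0.9558 support method agreement.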
NASA Technical Reports Server (NTRS)
Sadunas, J. A.; French, E. P.; Sexton, H.
1973-01-01
A 1/25-scale model test of the S-2 stage base region thermal environment is presented. Analytical results are included which reflect the effects of engine operating conditions, model scale, and turbo-pump exhaust gas injection on the base region thermal environment. Comparisons are made between full-scale flight data, model test data, and analytical results. The report is prepared in two volumes: the description of the analytical predictions and the comparisons with flight data are presented here, and a tabulation of the test data is provided.
Buckling Testing and Analysis of Space Shuttle Solid Rocket Motor Cylinders
NASA Technical Reports Server (NTRS)
Weidner, Thomas J.; Larsen, David V.; McCool, Alex (Technical Monitor)
2002-01-01
A series of full-scale buckling tests were performed on the space shuttle Reusable Solid Rocket Motor (RSRM) cylinders. The tests were performed to determine the buckling capability of the cylinders and to provide data for analytical comparison. A nonlinear ANSYS Finite Element Analysis (FEA) model was used to represent and evaluate the testing. Analytical results demonstrated excellent correlation to test results, predicting the failure load within 5%. The analytical value was on the conservative side, predicting a lower failure load than was applied to the test. The resulting study and analysis indicated the important parameters for FEA to accurately predict buckling failure. The resulting method was subsequently used to establish the pre-launch buckling capability of the space shuttle system.
Analytical and numerical solution for wave reflection from a porous wave absorber
NASA Astrophysics Data System (ADS)
Magdalena, Ikha; Roque, Marian P.
2018-03-01
In this paper, wave reflection from a porous wave absorber is investigated theoretically and numerically. The governing equations are based on a shallow-water-type model; motion inside the absorber is modified by including a linearized friction term in the momentum equation and introducing a filtered velocity. An analytical solution for the wave reflection coefficient of a porous wave absorber over a flat bottom is derived. Numerically, we solve the equations using the finite volume method on a staggered grid. To validate the numerical model, the numerical reflection coefficient is compared against the analytical solution. Further, we implement the numerical scheme to study the evolution of surface waves passing through a porous absorber over varied bottom topography.
Flows of Newtonian and Power-Law Fluids in Symmetrically Corrugated Capillary Fissures and Tubes
NASA Astrophysics Data System (ADS)
Walicka, A.
2018-02-01
In this paper, an analytical method is presented for deriving the relationship between the pressure drop and the volumetric flow rate in laminar flow of Newtonian and power-law fluids through symmetrically corrugated capillary fissures and tubes. The method, which is general with regard to fluid and capillary shape, can serve as a foundation for other fluids, fissures, and tubes, and as a good basis for numerical integration when analytical expressions are hard to obtain due to mathematical complexity. Five converging-diverging or diverging-converging geometries, viz. wedge and cone, parabolic, hyperbolic, hyperbolic cosine, and cosine curve, are used as examples to illustrate the application of the method. For the wedge and cone geometry, the present results for the power-law fluid were compared with results obtained by another method; this comparison indicates good agreement between the two sets of results.
A modeling approach to compare ΣPCB concentrations between congener-specific analyses
Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.
2017-01-01
Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time.
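A minimal sketch of the kind of conversion model described above, here as an ordinary least-squares fit on synthetic, noise-free data (the 0.93 capture fraction echoes the figure quoted in the abstract, but the numbers themselves are illustrative):

```python
def fit_linear(x, y):
    """Ordinary least squares fit y ≈ a + b*x; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (c - my) for a, c in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return my - slope * mx, slope

# Synthetic example: a 119-congener sum capturing ~93% of the full 209-congener sum
sum119 = [10.0, 25.0, 40.0, 80.0, 150.0]
sum209 = [s / 0.93 for s in sum119]       # idealized, noise-free relationship
intercept, slope = fit_linear(sum119, sum209)
print(round(slope, 3))   # 1.075
```

Once fitted on paired samples, such a model converts ΣPCB totals from one congener set into estimates on the scale of the other, which is what enables comparison across monitoring eras.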
Nagata, Koichi; Pethel, Timothy D
2017-07-01
Although the anisotropic analytical algorithm (AAA) and Acuros XB (AXB) are both radiation dose calculation algorithms that account for heterogeneity within the radiation field, Acuros XB is inherently more accurate. The purpose of this retrospective method comparison study was to compare them and evaluate the dose discrepancy within the planning target volume (PTV). Radiation therapy (RT) plans of 11 dogs with intranasal tumors treated by radiation therapy at the University of Georgia were evaluated. All dogs were planned for intensity-modulated radiation therapy using nine equally spaced coplanar X-ray beams, with doses calculated by the anisotropic analytical algorithm. The same plan with the same monitor units was then recalculated using Acuros XB for comparison. Each dog's planning target volume was separated into air, bone, and tissue and evaluated. The mean dose to the planning target volume estimated by Acuros XB was 1.3% lower overall: 1.4% higher for air, 3.7% lower for bone, and 0.9% lower for tissue. The volume of the planning target volume covered by the prescribed dose decreased by 21% when Acuros XB was used, owing to increased dose heterogeneity within the planning target volume. The anisotropic analytical algorithm thus relatively underestimates the dose heterogeneity and relatively overestimates the dose to bone and tissue within the planning target volume for radiation therapy planning of canine intranasal tumors. This can be clinically significant, especially if tumor cells are present within the bone, because it may result in relative underdosing of the tumor. © 2017 American College of Veterinary Radiology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoenig, M.; Elsen, Y.V.; Cauter, R.V.
The progressive degradation of the pyrolytic graphite surface of atomizers provides variable and misleading results of molybdenum peak-height measurements. The changes in the peak shapes produce no analytical problems during the lifetime of the atomizer (approx. 300 firings) when integrated absorbance (A·s signals) is considered and the possible base-line drifts are controlled. This was demonstrated on plant samples mineralized by simple digestion with a mixture of HNO3 and H2O2. The value of this method was assessed by comparison with a standard dry oxidation method and by molybdenum determination in National Bureau of Standards reference plant samples. The relative standard deviations (n = 5) of the full analytical procedure do not exceed 7%. 13 references, 3 figures, 3 tables.
Approximate analytical solutions in the analysis of thin elastic plates
NASA Astrophysics Data System (ADS)
Goloskokov, Dmitriy P.; Matrosov, Alexander V.
2018-05-01
Two approaches to the construction of approximate analytical solutions for bending of a rectangular thin plate are presented: the superposition method based on the method of initial functions (MIF) and the one built using the Green's function in the form of orthogonal series. Comparison of two approaches is carried out by analyzing a square plate clamped along its contour. Behavior of the moment and the shear force in the neighborhood of the corner points is discussed. It is shown that both solutions give identical results at all points of the plate except for the neighborhoods of the corner points. There are differences in the values of bending moments and generalized shearing forces in the neighborhoods of the corner points.
Analytical methods capable of trace measurement of semi-volatile organic compounds (SOCs) are necessary to assess the exposure of tadpoles to contaminants as a result of long-range and regional atmospheric transport and deposition. The following study compares the results of two ...
Assessment of crack opening area for leak rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharples, J.K.; Bouchard, P.J.
1997-04-01
This paper outlines the background to recommended crack opening area solutions given in a proposed revision to leak-before-break guidance for the R6 procedure. Comparisons with experimental and analytical results are given for selected cases of circumferential cracks in cylinders. It is shown that elastic models can provide satisfactory estimates of crack opening displacement (and area), but they become increasingly conservative for values of L_r greater than approximately 0.4. The Dugdale small-scale yielding model gives conservative estimates of crack opening displacement, with increasing enhancement for L_r values greater than 0.4. Further validation of the elastic-plastic reference stress method for L_r values up to about 1.0 is presented through experimental and analytical comparisons. Although a more detailed method, its application gives a best estimate of crack opening displacement which may be substantially greater than that of small-scale plasticity models. It is also shown that the local boundary conditions in pipework need to be carefully considered when evaluating crack opening area for through-wall bending stresses resulting from welding residual stresses or geometry discontinuities.
Numerical simulation of KdV equation by finite difference method
NASA Astrophysics Data System (ADS)
Yokus, A.; Bulut, H.
2018-05-01
In this study, numerical solutions to the KdV equation with dual power nonlinearity are obtained using the finite difference method. The discretized equation is presented in terms of finite difference operators. The numerical solutions are validated against the analytical solution to the KdV equation with dual power nonlinearity available in the literature. Fourier-von Neumann stability analysis shows that the finite difference scheme is linearly stable. Accuracy of the method is analyzed via the L2 and L_{∞} norm errors. The numerical approximations, exact values, and absolute errors are presented in tables. We compare the numerical solutions with the exact solutions, and this comparison is supported with graphic plots. For suitable choices of parameter values, 2D and 3D surfaces of the analytical solution used are plotted.
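Discrete error norms of the kind used above to assess accuracy can be sketched in a few lines; the grid values below are illustrative, not from the study.

```python
def l2_error(u_num, u_exact, dx):
    """Discrete L2 norm of the error on a uniform grid with spacing dx."""
    return (dx * sum((a - b) ** 2 for a, b in zip(u_num, u_exact))) ** 0.5

def linf_error(u_num, u_exact):
    """Maximum (L-infinity) norm of the pointwise error."""
    return max(abs(a - b) for a, b in zip(u_num, u_exact))

# Illustrative numerical and exact solution values on a grid with dx = 0.1
u_num = [0.0, 0.1, 0.21, 0.29]
u_exact = [0.0, 0.1, 0.2, 0.3]
print(round(linf_error(u_num, u_exact), 6))   # 0.01
```

The L2 norm measures the overall error weighted by the grid spacing, while the L-infinity norm reports the single worst pointwise deviation; reporting both, as the study does, guards against localized error spikes hiding in an averaged metric.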
Graphite nanocomposites sensor for multiplex detection of antioxidants in food.
Ng, Khan Loon; Tan, Guan Huat; Khor, Sook Mei
2017-12-15
Butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT), and tert-butylhydroquinone (TBHQ) are synthetic antioxidants used in the food industry. Herein, we describe the development of a novel graphite nanocomposite-based electrochemical sensor for the multiplex detection and measurement of BHA, BHT, and TBHQ levels in complex food samples using a linear sweep voltammetry (LSV) technique. The newly established analytical method exhibited good sensitivity, limit of detection, limit of quantitation, and selectivity. The accuracy and reliability of the analytical results were verified by method validation and by comparison with results from a liquid chromatography method, where a linear correlation of more than 0.99 was achieved. The addition of sodium dodecyl sulfate as a supporting additive further enhanced the LSV response (anodic peak current, I_pa) of BHA and BHT by 2- and 20-fold, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
Dziadosz, Marek
2018-01-01
The aim of this work was to develop a fast, cost-effective, and time-saving liquid chromatography-tandem mass spectrometry (LC-MS/MS) method for the analysis of ethylene glycol (EG) in human serum. For this purpose, the formation/fragmentation of an EG adduct ion with sodium and sodium acetate was applied in positive electrospray mode for signal detection. Adduct identification was performed with appropriate infusion experiments based on analyte solutions prepared at different concentrations. Corresponding analyte adduct ions and adduct ion fragments could be identified both for EG and for the deuterated internal standard (EG-D4). Protein precipitation was used for sample preparation. The supernatant was analyzed on a Luna 5 μm C18 (2) 100A, 150 mm × 2 mm analytical column with a mobile phase consisting of 95% A (H2O/methanol = 95/5, v/v) and 5% B (H2O/methanol = 3/97, v/v), both with 10 mmol L-1 ammonium acetate and 0.1% acetic acid. Method linearity was examined in the range of 100-4000 μg/mL, and the calculated limit of detection/quantification was 35/98 μg/mL. However, on the basis of the signal-to-noise ratio, quantification was recommended at a limit of 300 μg/mL. The examined precision, accuracy, stability, selectivity, and matrix effect demonstrated that the method is a practicable alternative for EG quantification in human serum. In comparison to other liquid chromatography-based methods, the presented strategy enabled, for the first time, EG analysis without analyte derivatisation. Copyright © 2017 Elsevier B.V. All rights reserved.
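Calibration-based detection and quantification limits like those quoted above are commonly estimated from a signal standard deviation and the calibration slope; a generic ICH-style sketch (the 3.3 and 10 factors are a common convention assumed here, not necessarily the paper's exact procedure):

```python
def lod_loq(sd_response, slope):
    """ICH-style estimates: LoD = 3.3*sd/S, LoQ = 10*sd/S.

    sd_response : standard deviation of the blank or calibration residuals (signal units)
    slope       : slope S of the calibration line (signal units per concentration unit)
    """
    return 3.3 * sd_response / slope, 10.0 * sd_response / slope

# Illustrative numbers: sd of 1.0 signal units, slope of 0.1 signal units per μg/mL
lod, loq = lod_loq(sd_response=1.0, slope=0.1)
print(round(lod, 3), round(loq, 3))  # 33.0 100.0
```

Note that, as in the abstract, a practical quantification limit may be set higher than the calculated LoQ when the observed signal-to-noise ratio warrants it.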
NASA Technical Reports Server (NTRS)
Nayfeh, A. H.; Kaiser, J. E.; Marshall, R. L.; Hurst, L. J.
1978-01-01
The performance of sound suppression techniques in ducts that produce refraction effects due to axial velocity gradients was evaluated. A computer code based on the method of multiple scales was used to calculate the influence of axial variations due to slow changes in the cross-sectional area, as well as transverse gradients due to the wall boundary layers. An attempt was made to verify the analytical model through direct comparison of experimental and computational results and through analytical determination of the influence of axial gradients on optimum liner properties. However, the analytical studies were unable to examine the influence of non-parallel ducts on the optimum liner conditions. For liner properties not close to optimum, the analytical predictions and the experimental measurements were compared. The circumferential variations of pressure amplitudes and phases at several axial positions were examined in straight and variable-area ducts, in hard-wall and lined sections, with and without a mean flow. Reasonable agreement between the theoretical and experimental results was obtained.
2013-04-02
This study compared a rapid, direct-reading, non-specific photometric particle counting instrument (DustTrak, TSI) to the established OSHA-modified NIOSH P&CAM 304 method, to determine the correlation between the two methods.
NASA Technical Reports Server (NTRS)
Aniversario, R. B.; Harvey, S. T.; Mccarty, J. E.; Parsons, J. T.; Peterson, D. C.; Pritchett, L. D.; Wilson, D. R.; Wogulis, E. R.
1983-01-01
The horizontal stabilizer of the 737 transport was redesigned. Five shipsets were fabricated using composite materials. Weight reduction greater than the 20% goal was achieved. Parts and assemblies were readily produced on production-type tooling. Quality assurance methods were demonstrated. Repair methods were developed and demonstrated. Strength and stiffness analytical methods were substantiated by comparison with test results. Cost data was accumulated in a semiproduction environment. FAA certification was obtained.
Lewis, Nathan S
2004-09-01
Arrays of broadly cross-reactive vapor sensors provide a man-made implementation of an olfactory system, in which an analyte elicits a response from many receptors and each receptor responds to a variety of analytes. Pattern recognition methods are then used to detect analytes based on the collective response of the sensor array. With the use of this architecture, arrays of chemically sensitive resistors made from composites of conductors and insulating organic polymers have been shown to robustly classify, identify, and quantify a diverse collection of organic vapors, even though no individual sensor responds selectively to a particular analyte. The properties and functioning of these arrays are inspired by advances in the understanding of biological olfaction, and in turn, evaluation of the performance of the man-made array provides suggestions regarding some of the fundamental odor detection principles of the mammalian olfactory system.
Song, Hongjun; Wang, Yi; Pant, Kapil
2011-01-01
This article presents a three-dimensional analytical model to investigate cross-stream diffusion transport in rectangular microchannels with arbitrary aspect ratios under pressure-driven flow. The Fourier series solution to the three-dimensional convection–diffusion equation is obtained using a double integral transformation method and associated eigensystem calculation. A phase diagram derived from the dimensional analysis is presented to thoroughly interrogate the characteristics in various transport regimes and examine the validity of the model. The analytical model is verified against both experimental and numerical models in terms of the concentration profile, diffusion scaling law, and mixing efficiency with excellent agreement (with <0.5% relative error). Quantitative comparison against other prior analytical models in extensive parameter space is also performed, which demonstrates that the present model accommodates much broader transport regimes with significantly enhanced applicability. PMID:22247719
Fast analytical scatter estimation using graphics processing units.
Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris
2015-01-01
To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first order scatter in cone-beam image reconstruction improves the contrast to noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and with further acceleration and a method to account for multiple scatter may be useful for practical scatter correction schemes.
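A scaled root-mean-square difference metric of the kind mentioned above can be sketched as follows; scaling by the mean of the reference values is one common choice and is assumed here, as are the example numbers.

```python
def scaled_rmsd(estimate, reference):
    """Root-mean-square difference between two estimates, scaled by the reference mean."""
    n = len(reference)
    rms = (sum((e - r) ** 2 for e, r in zip(estimate, reference)) / n) ** 0.5
    return rms / (sum(reference) / n)

# Illustrative scatter estimates: analytical vs. a Monte Carlo reference
analytical = [1.1, 0.9, 1.0, 1.0]
monte_carlo = [1.0, 1.0, 1.0, 1.0]
print(round(scaled_rmsd(analytical, monte_carlo), 4))
```

Because the RMS difference is normalized, the metric is dimensionless and comparable across phantoms and dose levels, which makes it suitable for the qualitative-plus-quantitative validation described in the abstract.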
Analytic Method for Computing Instrument Pointing Jitter
NASA Technical Reports Server (NTRS)
Bayard, David
2003-01-01
A new method of calculating the root-mean-square (rms) pointing jitter of a scientific instrument (e.g., a camera, radar antenna, or telescope) is introduced based on a state-space concept. In comparison with the prior method of calculating the rms pointing jitter, the present method involves significantly less computation. The rms pointing jitter of an instrument (the square root of the jitter variance shown in the figure) is an important physical quantity which impacts the design of the instrument, its actuators, controls, sensory components, and sensor- output-sampling circuitry. Using the Sirlin, San Martin, and Lucke definition of pointing jitter, the prior method of computing the rms pointing jitter involves a frequency-domain integral of a rational polynomial multiplied by a transcendental weighting function, necessitating the use of numerical-integration techniques. In practice, numerical integration complicates the problem of calculating the rms pointing error. In contrast, the state-space method provides exact analytic expressions that can be evaluated without numerical integration.
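The state-space idea can be illustrated with a steady-state covariance computation: for a stable linear model driven by white noise, the covariance solves a continuous Lyapunov equation, and the rms jitter is the square root of the relevant diagonal entry. The two-state model and the numbers below are illustrative assumptions, not the article's system.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative 2-state pointing model: x'' + 2*zeta*w*x' + w^2*x = noise,
# with w = 2 rad/s, zeta = 0.5, and unit white-noise intensity.
A = np.array([[0.0, 1.0],
              [-4.0, -2.0]])
Q = np.array([[0.0, 0.0],
              [0.0, 1.0]])                    # B W B^T for B = [0, 1]^T, W = 1

# Steady-state covariance P solves the Lyapunov equation A P + P A^T + Q = 0
P = solve_continuous_lyapunov(A, -Q)
rms_jitter = np.sqrt(P[0, 0])                 # rms of the pointing coordinate
print(round(float(rms_jitter), 4))            # 0.25
```

Solving one algebraic Lyapunov equation replaces the frequency-domain integral entirely, which is the sense in which a state-space formulation avoids numerical integration.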
NASA Technical Reports Server (NTRS)
Neustein, Joseph; Schafer, Louis J., Jr.
1946-01-01
Several methods of predicting the compressible-flow pressure loss across a baffled aircraft-engine cylinder were analytically related and were experimentally investigated on a typical air-cooled aircraft-engine cylinder. Tests with and without heat transfer covered a wide range of cooling-air flows and simulated altitudes from sea level to 40,000 feet. Both the analysis and the test results showed that the method based on the density determined by the static pressure and the stagnation temperature at the baffle exit gave results comparable with those obtained from methods derived by one-dimensional-flow theory. The method based on a characteristic Mach number, although related analytically to one-dimensional-flow theory, was found impractical in the present tests because of the difficulty encountered in defining the proper characteristic state of the cooling air. Accurate predictions of altitude pressure loss can apparently be made by these methods, provided that they are based on the results of sea-level tests with heat transfer.
Development of a point-kinetic verification scheme for nuclear reactor applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demazière, C., E-mail: demaz@chalmers.se; Dykin, V.; Jareteg, K.
In this paper, a new method that can be used for checking the proper implementation of time- or frequency-dependent neutron transport models and for verifying their ability to recover some basic reactor physics properties is proposed. This method makes use of the application of a stationary perturbation to the system at a given frequency and extraction of the point-kinetic component of the system response. Even for strongly heterogeneous systems for which an analytical solution does not exist, the point-kinetic component follows, as a function of frequency, a simple analytical form. The comparison between the extracted point-kinetic component and its expected analytical form provides an opportunity to verify and validate neutron transport solvers. The proposed method is tested on two diffusion-based codes, one working in the time domain and the other working in the frequency domain. As long as the applied perturbation has a non-zero reactivity effect, it is demonstrated that the method can be successfully applied to verify and validate time- or frequency-dependent neutron transport solvers. Although the method is demonstrated in the present paper in a diffusion theory framework, higher order neutron transport methods could be verified based on the same principles.
Modeling of dispersed-drug delivery from planar polymeric systems: optimizing analytical solutions.
Helbling, Ignacio M; Ibarra, Juan C D; Luna, Julio A; Cabrera, María I; Grau, Ricardo J A
2010-11-15
Analytical solutions for the case of controlled dispersed-drug release from planar non-erodible polymeric matrices, based on the Refined Integral Method, are presented. A new adjusting equation is used for the dissolved-drug concentration profile in the depletion zone. The set of equations matches the available exact solution. In order to illustrate the usefulness of this model, comparisons with experimental profiles reported in the literature are presented. The results show that the model can be employed over a broad range of conditions. Copyright © 2010 Elsevier B.V. All rights reserved.
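For orientation, the classical Higuchi square-root-of-time law for dispersed-drug release from a planar matrix, a simpler relative of the Refined Integral Method solutions described above, can be sketched as follows. The diffusivity, loading and solubility values are assumed, not taken from the paper:

```python
import numpy as np

def higuchi_release(t, D, A, Cs):
    """Classical Higuchi approximation for cumulative drug release per
    unit area from a planar matrix with dispersed drug:
    Q(t) = sqrt(D * t * Cs * (2A - Cs)), valid when loading A >> Cs.
    This is NOT the paper's Refined Integral Method, only the textbook
    limiting model it refines."""
    return np.sqrt(D * t * Cs * (2.0 * A - Cs))

D = 1e-7    # diffusion coefficient, cm^2/s (assumed)
A = 50.0    # total drug loading, mg/cm^3 (assumed)
Cs = 1.0    # drug solubility in the matrix, mg/cm^3 (assumed)

t = np.linspace(0.0, 24 * 3600, 25)       # a 24 h release window
Q = higuchi_release(t, D, A, Cs)          # mg/cm^2 released
# Release scales with sqrt(t): quadrupling t doubles Q.
print(Q[-1])
```

The square-root-of-time behaviour is the baseline against which refined solutions (such as the paper's) are usually compared.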
Spline methods for approximating quantile functions and generating random samples
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Matthews, C. G.
1985-01-01
Two cubic spline formulations are presented for representing the quantile function (inverse cumulative distribution function) of a random sample of data. Both B-spline and rational spline approximations are compared with analytic representations of the quantile function. It is also shown how these representations can be used to generate random samples for use in simulation studies. Comparisons are made on samples generated from known distributions and on a sample of experimental data. The spline representations are more accurate for multimodal and skewed samples and require much less time to generate samples than the analytic representation.
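A minimal sketch of the idea, using a monotone (PCHIP) spline as a stand-in for the B-spline and rational-spline fits of the report, with an assumed skewed test sample:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=1.5, size=500)   # skewed sample (illustrative)

# Empirical quantile function: order statistics against plotting positions
# p_i = (i - 0.5)/n; a monotone spline then represents Q(p) smoothly.
xs = np.sort(data)
p = (np.arange(1, xs.size + 1) - 0.5) / xs.size
Q = PchipInterpolator(p, xs)

# Random-sample generation by the inverse-CDF method: push uniform
# deviates through the fitted quantile function.
u = rng.uniform(p[0], p[-1], size=10000)
samples = Q(u)
print(samples.mean(), data.mean())
```

Sampling through the fitted quantile function only requires uniform deviates and a spline evaluation, which is why the paper finds it much faster than evaluating an analytic inverse CDF.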
The mean and variance of phylogenetic diversity under rarefaction
Nipperess, David A.; Matsen, Frederick A.
2013-01-01
Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time, but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing the exact mean and variance to those calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating the mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparison of samples of different depth is required. PMID:23833701
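The species-richness case mentioned above, for which exact rarefaction formulae have long existed, can be sketched and checked against Monte Carlo subsampling. The abundance vector and rarefaction depth are toy values; the paper's PD formulae generalize this calculation over a phylogeny:

```python
import numpy as np
from math import comb

counts = np.array([50, 20, 10, 5, 3, 1, 1])   # per-species abundances (toy data)
N, n = int(counts.sum()), 30                  # total individuals, rarefied depth

# Exact (Hurlbert) expected species richness at depth n:
#   E[S_n] = sum_i (1 - C(N - N_i, n) / C(N, n)).
exact = sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in counts)

# Monte Carlo check: repeatedly draw n individuals without replacement
# and count how many species are represented.
rng = np.random.default_rng(42)
pool = np.repeat(np.arange(counts.size), counts)
mc = np.mean([np.unique(rng.choice(pool, size=n, replace=False)).size
              for _ in range(5000)])
print(exact, mc)
```

As the abstract notes for PD, the subsampling estimate converges to the exact value only after many random draws, which is why closed-form expressions are preferable.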
Experimental and Analytical Determinations of Spiral Bevel Gear-Tooth Bending Stress Compared
NASA Technical Reports Server (NTRS)
Handschuh, Robert F.
2000-01-01
Spiral bevel gears are currently used in all main-rotor drive systems for rotorcraft produced in the United States. Applications such as these need spiral bevel gears to turn the corner from the horizontal gas turbine engine to the vertical rotor shaft. These gears must typically operate at extremely high rotational speeds and carry high power levels. With these difficult operating conditions, an improved analytical capability is paramount to increasing aircraft safety and reliability. Also, literature on the analysis and testing of spiral bevel gears has been very sparse in comparison to that for parallel axis gears. This is due to the complex geometry of this type of gear and to the specialized test equipment necessary to test these components. To develop an analytical model of spiral bevel gears, researchers use differential geometry methods to model the manufacturing kinematics. A three-dimensional spiral bevel gear modeling method was developed that uses finite elements for the structural analysis. This method was used to analyze the three-dimensional contact pattern between the test pinion and gear used in the Spiral Bevel Gear Test Facility at the NASA Glenn Research Center at Lewis Field. Results of this analysis are illustrated in the preceding figure. The development of the analytical method was a joint endeavor between NASA Glenn, the U.S. Army Research Laboratory, and the University of North Dakota.
NASA Astrophysics Data System (ADS)
Belov, S. Yu.; Belova, I. N.
2017-11-01
Monitoring of the Earth's surface by remote sensing in the short-wave band can provide quick identification of some characteristics of natural systems. This band allows one to diagnose subsurface features of the Earth, since the scattering parameter is affected by irregularities in the dielectric permittivity of subsurface structures. A monitoring probe based on this method may detect changes in these environments, for example to assess seismic hazard, hazardous natural phenomena such as earthquakes, and some man-made hazards. The problem of measuring and accounting for the scattering power of the Earth's surface in the short-wave range of radio waves is important for a number of purposes, such as diagnosing properties of the medium, which is of interest for geological and environmental studies. In this paper, we propose a new method for estimating the incoherent signal-to-noise ratio. The measurement methods are compared from the point of view of their admissible relative analytical errors. Analysis of the analytical error of this parameter estimate allows us to recommend the new method in place of the standard one: a comparative analysis shows that the analytical (relative) accuracy of the new method exceeds that of the widely used standard method by an order of magnitude.
Küme, Tuncay; Sağlam, Barıs; Ergon, Cem; Sisman, Ali Rıza
2018-01-01
The aim of this study was to evaluate and compare the analytical performance characteristics of two creatinine methods, one based on the Jaffe reaction and one enzymatic. Both original creatinine methods were evaluated on an Architect c16000 automated analyzer in terms of limit of detection (LOD), limit of quantitation (LOQ), linearity, intra-assay and inter-assay precision, and comparability in serum and urine samples. The method comparison and bias estimation using patient samples according to the CLSI guideline were performed on 230 serum and 141 urine samples analyzed on the same auto-analyzer. The LODs were determined as 0.1 mg/dL for both serum methods, and as 0.25 and 0.07 mg/dL for the Jaffe and enzymatic urine methods, respectively. The LOQs were similar, at 0.05 mg/dL, for both serum methods, and the enzymatic urine method had a lower LOQ than the Jaffe urine method (0.5 vs. 2 mg/dL). Both methods were linear up to 65 mg/dL for serum and 260 mg/dL for urine. The intra-assay and inter-assay precision data were within desirable levels for both methods. High correlations were observed between the two methods in serum and urine (r=.9994 and r=.9998, respectively). On the other hand, the Jaffe method gave higher creatinine results than the enzymatic method, especially at low concentrations in both serum and urine. Both the Jaffe and enzymatic methods were found to meet the analytical performance requirements for routine use. However, the enzymatic method showed better performance at low creatinine levels. © 2017 Wiley Periodicals, Inc.
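The comparison-and-bias step can be sketched on synthetic paired data. The simulated bias, noise level and concentration range below are assumptions, and simple ordinary least squares plus a Bland-Altman-style mean bias stand in for the full CLSI-style analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
true = rng.uniform(0.3, 15.0, size=230)               # synthetic creatinine, mg/dL
jaffe = true + 0.15 + rng.normal(0, 0.05, true.size)  # assumed constant positive bias
enzym = true + rng.normal(0, 0.05, true.size)         # assumed unbiased comparator

# Bland-Altman-style mean bias and limits of agreement between the methods.
diff = jaffe - enzym
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

# Simple OLS comparison regression (jaffe on enzymatic) and correlation.
slope, intercept = np.polyfit(enzym, jaffe, 1)
r = np.corrcoef(enzym, jaffe)[0, 1]
print(f"bias={bias:.3f} mg/dL, LoA={loa}, slope={slope:.3f}, r={r:.4f}")
```

With a constant simulated bias the regression slope stays near 1 while the intercept and mean difference absorb the offset, mirroring the pattern of a high correlation alongside systematically higher Jaffe results.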
Multi-analytical Approaches Informing the Risk of Sepsis
NASA Astrophysics Data System (ADS)
Gwadry-Sridhar, Femida; Lewden, Benoit; Mequanint, Selam; Bauer, Michael
Sepsis is a significant cause of mortality and morbidity and is often associated with increased hospital resource utilization and prolonged intensive care unit (ICU) and hospital stays. The economic burden associated with sepsis is huge. With advances in medicine, there are now aggressive goal-oriented treatments that can be used to help these patients. If we were able to predict which patients may be at risk for sepsis, we could start treatment early and potentially reduce the risk of mortality and morbidity. Analytic methods currently used in clinical research to determine the risk of a patient developing sepsis may be further enhanced by using multi-modal analytic methods that together could provide greater precision. Researchers commonly use univariate and multivariate regressions to develop predictive models. We hypothesized that such models could be enhanced by using multiple analytic methods that together could provide greater insight. In this paper, we analyze data about patients with and without sepsis using a decision tree approach and a cluster analysis approach. A comparison with a regression approach shows strong similarity among the variables identified, though not an exact match. We compare the variables identified by the different approaches and draw conclusions about their respective predictive capabilities, while considering their clinical significance.
Recursive linearization of multibody dynamics equations of motion
NASA Technical Reports Server (NTRS)
Lin, Tsung-Chieh; Yae, K. Harold
1989-01-01
The equations of motion of a multibody system are nonlinear in nature and thus pose a difficult problem in linear control design. One approach is to obtain a first-order approximation through numerical perturbations at a given configuration and to design a control law based on the linearized model. Here, a linearized model is generated analytically by following the footsteps of the recursive derivation of the equations of motion. The equations of motion are first written in a Newton-Euler form, which is systematic and easy to construct; then they are transformed into a relative coordinate representation, which is more efficient in computation. A new computational method for linearization is obtained by applying a series of first-order analytical approximations to the recursive kinematic relationships. The method has proved to be computationally more efficient because of its recursive nature. It has also turned out to be more accurate because analytical perturbation circumvents numerical differentiation and other associated numerical operations that may accumulate computational error, requiring only analytical operations on matrices and vectors. The power of the proposed linearization algorithm is demonstrated, in comparison with a numerical perturbation method, on a two-link manipulator and a seven-degree-of-freedom robotic manipulator. Its application to control design is also demonstrated.
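The contrast between analytical and numerical perturbation can be illustrated on a single-link (pendulum) toy system; the dynamics and parameters are assumed for illustration and are far simpler than the paper's manipulators:

```python
import numpy as np

g, l, c = 9.81, 1.0, 0.2     # gravity, link length, damping (illustrative)

def f(x):
    """Nonlinear state derivative for a damped pendulum, x = [theta, omega]."""
    th, om = x
    return np.array([om, -(g / l) * np.sin(th) - c * om])

def jac_analytic(x):
    """Closed-form Jacobian df/dx: exact, no numerical differentiation."""
    th, _ = x
    return np.array([[0.0, 1.0],
                     [-(g / l) * np.cos(th), -c]])

def jac_numeric(x, h=1e-6):
    """Central finite-difference Jacobian: the 'numerical perturbation'
    approach, subject to truncation and round-off error."""
    J = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2)
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

x0 = np.array([0.5, 0.0])     # linearize about this configuration
print(np.max(np.abs(jac_analytic(x0) - jac_numeric(x0))))
```

For this smooth toy system the two Jacobians agree closely, but the analytic route carries no step-size tuning or differencing error, which is the accuracy advantage the abstract claims for recursive analytical linearization.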
Oftedal, O T; Eisert, R; Barrell, G K
2014-01-01
Mammalian milks may differ greatly in composition from cow milk, and these differences may affect the performance of analytical methods. High-fat, high-protein milks with a preponderance of oligosaccharides, such as those produced by many marine mammals, present a particular challenge. We compared the performance of several methods against reference procedures using Weddell seal (Leptonychotes weddellii) milk of highly varied composition (by reference methods: 27-63% water, 24-62% fat, 8-12% crude protein, 0.5-1.8% sugar). A microdrying step preparatory to carbon-hydrogen-nitrogen (CHN) gas analysis slightly underestimated water content and had a higher repeatability relative standard deviation (RSDr) than did reference oven drying at 100°C. Compared with a reference macro-Kjeldahl protein procedure, the CHN (or Dumas) combustion method had a somewhat higher RSDr (1.56 vs. 0.60%) but correlation between methods was high (0.992), means were not different (CHN: 17.2±0.46% dry matter basis; Kjeldahl 17.3±0.49% dry matter basis), there were no significant proportional or constant errors, and predictive performance was high. A carbon stoichiometric procedure based on CHN analysis failed to adequately predict fat (reference: Röse-Gottlieb method) or total sugar (reference: phenol-sulfuric acid method). Gross energy content, calculated from energetic factors and results from reference methods for fat, protein, and total sugar, accurately predicted gross energy as measured by bomb calorimetry. We conclude that the CHN (Dumas) combustion method and calculation of gross energy are acceptable analytical approaches for marine mammal milk, but fat and sugar require separate analysis by appropriate analytic methods and cannot be adequately estimated by carbon stoichiometry. 
Some other alternative methods (low-temperature drying for water determination; Bradford, Lowry, and biuret methods for protein; the Folch and the Bligh and Dyer methods for fat; and enzymatic and reducing-sugar methods for total sugar) appear likely to produce substantial error in marine mammal milks. It is important that alternative analytical methods be properly validated against a reference method before being used, especially for mammalian milks that differ greatly from cow milk in analyte characteristics and concentrations. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
EDXRF as an alternative method for multielement analysis of tropical soils and sediments.
Fernández, Zahily Herrero; Dos Santos Júnior, José Araújo; Dos Santos Amaral, Romilton; Alvarez, Juan Reinaldo Estevez; da Silva, Edvane Borges; De França, Elvis Joacir; Menezes, Rômulo Simões Cezar; de Farias, Emerson Emiliano Gualberto; do Nascimento Santos, Josineide Marques
2017-08-10
The quality assessment of tropical soils and sediments is still under discussion, with efforts being made on the part of governmental agencies to establish reference values. Energy dispersive X-ray fluorescence (EDXRF) is a potential analytical technique for quantifying diverse chemical elements in geological material without chemical treatment, primarily when it is performed at an appropriate metrological level. In this work, analytical curves were obtained by means of the analysis of geological reference materials (RMs), which allowed the researchers to draw a comparison among the sources of analytical uncertainty. After the quality assurance of the analytical procedure had been established, the EDXRF method was applied to determine chemical elements in soils from the state of Pernambuco, Brazil. The regression coefficients of the analytical curves used to determine Al, Ca, Fe, K, Mg, Mn, Ni, Pb, Si, Sr, Ti, and Zn were higher than 0.99. The quality of the analytical procedure was demonstrated at a 95% confidence level, in which the estimated analytical uncertainties agreed with those from the RMs' certificates of analysis. The analysis of diverse geological samples from Pernambuco indicated higher concentrations of Ni and Zn in sugarcane cultivation areas (maximum values of 41 mg kg-1 and 118 mg kg-1, respectively) and agricultural areas (41 mg kg-1 and 127 mg kg-1, respectively). The trace element Sr was mainly enriched in urban soils, with values of 400 mg kg-1. According to the results, the EDXRF method was successfully implemented, providing some chemical tracers for the quality assessment of tropical soils and sediments.
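A minimal sketch of such an analytical (calibration) curve, with hypothetical certified concentrations and instrument intensities; the regression coefficient and the back-calculation of an unknown follow the usual linear-calibration recipe, not any procedure specific to this paper:

```python
import numpy as np

# Hypothetical calibration points: certified concentration (mg/kg) of an
# element in reference materials vs. measured fluorescence intensity (counts).
conc = np.array([5.0, 20.0, 50.0, 100.0, 200.0])
intensity = np.array([152.0, 598.0, 1490.0, 2980.0, 5950.0])

# Linear calibration: intensity = slope * conc + intercept.
slope, intercept = np.polyfit(conc, intensity, 1)
pred = slope * conc + intercept
ss_res = np.sum((intensity - pred) ** 2)
ss_tot = np.sum((intensity - intensity.mean()) ** 2)
r2 = 1 - ss_res / ss_tot        # regression coefficient, cf. the >0.99 criterion

# Quantify an unknown sample by inverting the calibration line.
unknown = (1200.0 - intercept) / slope
print(f"R^2 = {r2:.5f}, unknown = {unknown:.1f} mg/kg")
```

A regression coefficient above 0.99, as reported in the paper, indicates an adequately linear response over the working range before unknowns are quantified this way.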
Analytical and numerical treatment of drift-tearing modes in plasma slab
NASA Astrophysics Data System (ADS)
Mirnov, V. V.; Hegna, C. C.; Sovinec, C. R.; Howell, E. C.
2016-10-01
Two-fluid corrections to linear tearing modes include (1) diamagnetic drifts that reduce the growth rate and (2) electron and ion decoupling on short scales that can lead to fast reconnection. We have recently developed an analytical model that includes effects (1) and (2) together with an important contribution from finite electron parallel thermal conduction. Both tendencies are confirmed by an approximate analytic dispersion relation derived using a perturbative approach in small ion-sound gyroradius ρs. This approach is only valid at the beginning of the transition from the collisional to semi-collisional regimes. Further analytical and numerical work is performed to cover the full interval of ρs connecting these two limiting cases. Growth rates are computed from the analytic theory with a shooting method. They match the resistive MHD regime and the dispersion relations known at asymptotically large ion-sound gyroradius. A comparison between this analytical treatment and linear numerical simulations using the NIMROD code with cold ions and hot electrons in a plasma slab is reported. The material is based on work supported by the U.S. DOE and NSF.
Dhawan, Anuj; Norton, Stephen J; Gerhold, Michael D; Vo-Dinh, Tuan
2009-06-08
This paper describes a comparative study of finite-difference time-domain (FDTD) and analytical evaluations of electromagnetic fields in the vicinity of dimers of metallic nanospheres of plasmonics-active metals. The results of these two computational methods, to determine electromagnetic field enhancement in the region often referred to as "hot spots" between the two nanospheres forming the dimer, were compared and a strong correlation observed for gold dimers. The analytical evaluation involved the use of the spherical-harmonic addition theorem to relate the multipole expansion coefficients between the two nanospheres. In these evaluations, the spacing between two nanospheres forming the dimer was varied to obtain the effect of nanoparticle spacing on the electromagnetic fields in the regions between the nanostructures. Gold and silver were the metals investigated in our work as they exhibit substantial plasmon resonance properties in the ultraviolet, visible, and near-infrared spectral regimes. The results indicate excellent correlation between the two computational methods, especially for gold nanosphere dimers with only a 5-10% difference between the two methods. The effect of varying the diameters of the nanospheres forming the dimer, on the electromagnetic field enhancement, was also studied.
NASA Technical Reports Server (NTRS)
Baumeister, Kenneth J.; Baumeister, Joseph F.
1994-01-01
An analytical procedure is presented, called the modal element method, that combines numerical grid-based algorithms with eigenfunction expansions developed by separation of variables. A modal element method is presented for solving potential flow in a channel with two-dimensional cylindrical-like obstacles. The infinite computational region is divided into three subdomains: the bounded finite element domain, which is characterized by the cylindrical obstacle, and the surrounding unbounded uniform channel entrance and exit domains. The velocity potential is represented approximately in the grid-based domain by a finite element solution and is represented analytically by an eigenfunction expansion in the uniform semi-infinite entrance and exit domains. The calculated flow fields are in excellent agreement with exact analytical solutions. By eliminating the grid surrounding the obstacle, the modal element method reduces the numerical grid size and employs a more precise far-field boundary condition, while also giving theoretical insight into the interaction of the obstacle with the mean flow. Although the analysis focuses on a specific geometry, the formulation is general and can be applied to a variety of problems, as seen by a comparison to companion theories in aeroacoustics and electromagnetics.
Faller, Maximilian; Wilhelmsson, Peter; Kjelland, Vivian; Andreassen, Åshild; Dargis, Rimtas; Quarsten, Hanne; Dessau, Ram; Fingerle, Volker; Margos, Gabriele; Noraas, Sølvi; Ornstein, Katharina; Petersson, Ann-Cathrine; Matussek, Andreas; Lindgren, Per-Eric; Henningsson, Anna J.
2017-01-01
Introduction: Lyme borreliosis (LB) is the most common tick-transmitted disease in Europe. The diagnosis of LB today is based on the patient's medical history, clinical presentation and laboratory findings. The laboratory diagnostics are mainly based on antibody detection, but in certain conditions molecular detection by polymerase chain reaction (PCR) may serve as a complement. Aim: The purpose of this study was to evaluate the analytical sensitivity, analytical specificity and concordance of eight different real-time PCR methods at five laboratories in Sweden, Norway and Denmark. Method: Each participating laboratory was asked to analyse three different sets of blinded samples (reference panels): i) cDNA extracted and transcribed from water spiked with cultured Borrelia strains, ii) cerebrospinal fluid spiked with cultured Borrelia strains, and iii) DNA dilution series extracted from cultured Borrelia and relapsing fever strains. The results and the method descriptions of each laboratory were systematically evaluated. Results and conclusions: The analytical sensitivities and the concordance between the eight protocols were in general high. The concordance was especially high between the protocols using 16S rRNA as the target gene; however, this concordance was mainly related to cDNA as the type of template. When comparing cDNA and DNA as the type of template, the analytical sensitivity was in general higher for the protocols using DNA as template, regardless of the target gene. The analytical specificity of all eight protocols was high. However, some protocols were not able to detect Borrelia spielmanii, Borrelia lusitaniae or Borrelia japonica. PMID:28937997
Preliminary topical report on comparison reactor disassembly calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
McLaughlin, T.P.
1975-11-01
Preliminary results of comparison disassembly calculations for a representative LMFBR model (2100-l voided core) and arbitrary accident conditions are described. The analytical methods employed were the computer programs FX2-POOL, PAD, and VENUS-II. The calculated fission energy depositions are in good agreement, as are measures of the destructive potential of the excursions, kinetic energy, and work. However, in some cases the resulting fuel temperatures are substantially divergent. Differences in the fission energy deposition appear to be attributable to residual inconsistencies in specifying the comparison cases. In contrast, temperature discrepancies probably stem from basic differences in the energy partition models inherent in the codes. Although explanations of the discrepancies are being pursued, the preliminary results indicate that all three computational methods provide a consistent, global characterization of the contrived disassembly accident.
NASA Technical Reports Server (NTRS)
Hylton, L. D.; Mihelc, M. S.; Turner, E. R.; Nealy, D. A.; York, R. E.
1983-01-01
Three airfoil data sets were selected for use in evaluating currently available analytical models for predicting airfoil surface heat transfer distributions in a 2-D flow field. Two additional airfoils, representative of highly loaded, low solidity airfoils currently being designed, were selected for cascade testing at simulated engine conditions. Some 2-D analytical methods were examined and a version of the STAN5 boundary layer code was chosen for modification. The final form of the method utilized a time dependent, transonic inviscid cascade code coupled to a modified version of the STAN5 boundary layer code featuring zero order turbulence modeling. The boundary layer code is structured to accommodate a full spectrum of empirical correlations addressing the coupled influences of pressure gradient, airfoil curvature, and free-stream turbulence on airfoil surface heat transfer distribution and boundary layer transitional behavior. Comparison of predictions made with the model to the data base indicates a significant improvement in predictive capability.
Yan, Liang; Peng, Juanjuan; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming
2014-10-01
This paper proposes a novel permanent magnet linear motor possessing two movers and one stator. The two movers are isolated and can interact with the stator poles to generate independent forces and motions. Compared with a conventional multiple-motor driving system, this arrangement increases system compactness and thus improves power density and working efficiency. The magnetic field distribution is obtained by using the equivalent magnetic circuit method. Following that, the formulation of the force output considering armature reaction is carried out. The inductances are then analyzed with the finite element method to investigate the coupling between the two movers. It is found that the mutual inductances are nearly zero, and thus the interaction between the two movers is negligible. A research prototype of the linear motor and a measurement apparatus for thrust force have been developed. Both numerical computation and experimental measurement are conducted to validate the analytical model of thrust force. Comparison shows that the analytical model matches the numerical and experimental results well.
NASA Astrophysics Data System (ADS)
Li, Qiang; Argatov, Ivan; Popov, Valentin L.
2018-04-01
A recent paper by Popov, Pohrt and Li (PPL) in Friction investigated adhesive contacts of flat indenters of unusual shape using numerical, analytical and experimental methods. Based on that paper, we analyze some special cases for which analytical solutions are known. As in the PPL paper, we consider adhesive contact in the Johnson-Kendall-Roberts approximation. Depending on the energy balance, different upper and lower estimates are obtained in terms of certain integral characteristics of the contact area. The special cases of an elliptical punch as well as a system of two circular punches are considered. Theoretical estimates for the first critical force (the force at which the detachment process begins) are confirmed by numerical simulations using the adhesive boundary element method. It is shown that simpler approximations for the pull-off force, based both on the Holm radius of contact and on the contact area, substantially overestimate the maximum adhesive force.
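For the system of two circular punches, the classical Kendall result for a flat-ended cylindrical punch gives a quick estimate. All material values below are assumed, and this sketch uses only the single-punch formula, not the paper's integral-characteristic bounds:

```python
import numpy as np

def pulloff_flat_punch(a, E_star, w):
    """Kendall's JKR-type detachment force for a flat-ended cylindrical
    punch of radius a: F_c = sqrt(8 * pi * w * E* * a^3), with effective
    modulus E* and work of adhesion w."""
    return np.sqrt(8.0 * np.pi * w * E_star * a**3)

E_star = 1e6   # effective elastic modulus, Pa (assumed)
w = 0.05       # work of adhesion, J/m^2 (assumed)
a = 1e-3       # punch radius, m (assumed)

F1 = pulloff_flat_punch(a, E_star, w)
# Two identical, well-separated punches: in the simplest estimate they
# detach at twice the single-punch force.
F2_system = 2.0 * F1
# A single punch with the SAME total area (radius a*sqrt(2)) detaches at
# only 2**0.75 ~ 1.68 times F1, because F_c scales as a**1.5, not as the
# area a**2 -- illustrating why purely area-based approximations misjudge
# the pull-off force.
F_equal_area = pulloff_flat_punch(a * np.sqrt(2.0), E_star, w)
print(F1, F2_system, F_equal_area)
```

The non-area scaling of the detachment force is the qualitative point behind the paper's finding that Holm-radius and contact-area approximations overestimate the adhesive force for general shapes.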
Evaluation on Bending Properties of Biomaterial GUM Metal Meshed Plates for Bone Graft Applications
NASA Astrophysics Data System (ADS)
Suzuki, Hiromichi; He, Jianmei
2017-11-01
There are three bone graft methods for bone defects caused by diseases such as cancer and by accidental injuries: autogenous bone grafts, allografts and artificial bone grafts. In this study, meshed GUM Metal plates with lower elasticity, high strength and high biocompatibility are introduced to address the over-stiffness and weight problems of currently used metal implants. Basic mesh shapes are designed and applied to GUM Metal plates using 3D CAD modeling tools. Bending properties of the prototype meshed GUM Metal plates are evaluated experimentally and analytically. Meshed plate specimens with 180°, 120° and 60° axis-symmetrical patterns were fabricated for 3-point bending tests. The pseudo bending elastic moduli of the meshed plate specimens obtained from the 3-point bending tests ranged from 4.22 GPa to 16.07 GPa, within the elasticity range of natural cortical bone (2.0 GPa to 30.0 GPa). The analytical approach is validated by comparison between experimental and analytical results for the bending properties of the meshed plates.
Reverse phase protein microarrays: fluorometric and colorimetric detection.
Gallagher, Rosa I; Silvestri, Alessandra; Petricoin, Emanuel F; Liotta, Lance A; Espina, Virginia
2011-01-01
The Reverse Phase Protein Microarray (RPMA) is an array platform used to quantitate proteins and their posttranslationally modified forms. RPMAs are applicable for profiling key cellular signaling pathways and protein networks, allowing direct comparison of the activation state of proteins from multiple samples within the same array. The RPMA format consists of proteins immobilized directly on a nitrocellulose substratum. The analyte is subsequently probed with a primary antibody and a series of reagents for signal amplification and detection. Due to the diversity, low concentration, and large dynamic range of protein analytes, RPMAs require stringent signal amplification methods, high quality image acquisition, and software capable of precisely analyzing spot intensities on an array. Microarray detection strategies can be either fluorescent or colorimetric. The choice of a detection system depends on (a) the expected analyte concentration, (b) type of microarray imaging system, and (c) type of sample. The focus of this chapter is to describe RPMA detection and imaging using fluorescent and colorimetric (diaminobenzidine (DAB)) methods.
Analytic second derivatives of the energy in the fragment molecular orbital method
NASA Astrophysics Data System (ADS)
Nakata, Hiroya; Nagata, Takeshi; Fedorov, Dmitri G.; Yokojima, Satoshi; Kitaura, Kazuo; Nakamura, Shinichiro
2013-04-01
We developed the analytic second derivatives of the energy for the fragment molecular orbital (FMO) method. First we derived the analytic expressions and then introduced some approximations related to the first- and second-order coupled perturbed Hartree-Fock equations. We developed a parallel program for the FMO Hessian with approximations in GAMESS and used it to calculate infrared (IR) spectra and Gibbs free energies and to locate the transition states in SN2 reactions. The accuracy of the Hessian is demonstrated in comparison to ab initio results for polypeptides and a water cluster. Using a division of two residues per fragment, we achieved an accuracy of 3 cm-1 in the root mean square deviation of vibrational frequencies from ab initio values for all three polyalanine isomers, while the error in the zero point energy did not exceed 0.3 kcal/mol. The role of the secondary structure in IR spectra, zero point energies, and Gibbs free energies is discussed.
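The step from a Hessian to vibrational frequencies, as used above, is a diagonalization of the mass-weighted Hessian; a toy illustration for a two-atom chain in arbitrary units (invented spring constant and masses, not FMO/GAMESS output):

```python
import numpy as np

# Harmonic frequencies from a mass-weighted Hessian:
#   omega_i = sqrt(lambda_i), lambda_i = eigenvalues of
#   H_mw[a,b] = H[a,b] / sqrt(m_a * m_b).
# Toy 1D diatomic with spring constant k; units are arbitrary.
k = 4.0
m = np.array([1.0, 2.0])
H = np.array([[ k, -k],
              [-k,  k]])

H_mw = H / np.sqrt(np.outer(m, m))
lam = np.linalg.eigvalsh(H_mw)
freqs = np.sqrt(np.clip(lam, 0, None))  # one zero mode (translation)
print(np.round(freqs, 3))               # → [0.    2.449]
```

The zero eigenvalue is the translational mode; the nonzero one reproduces the textbook diatomic result omega = sqrt(k (1/m1 + 1/m2)).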
A General Simulation Method for Multiple Bodies in Proximate Flight
NASA Technical Reports Server (NTRS)
Meakin, Robert L.
2003-01-01
Methods of unsteady aerodynamic simulation for an arbitrary number of independent bodies flying in close proximity are considered. A novel method to efficiently detect collision contact points is described. A method to compute body trajectories in response to aerodynamic loads, applied loads, and inter-body collisions is also given. The physical correctness of the methods is verified by comparison to a set of analytic solutions. The methods, combined with a Navier-Stokes solver, are used to demonstrate the possibility of predicting the unsteady aerodynamics and flight trajectories of moving bodies that involve rigid-body collisions.
Toxicologic evaluation of analytes from Tank 241-C-103
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahlum, D.D.; Young, J.Y.; Weller, R.E.
1994-11-01
Westinghouse Hanford Company requested PNL to assemble a toxicology review panel (TRP) to evaluate analytical data compiled by WHC and provide advice concerning potential health effects associated with exposure to tank-vapor constituents. The team's objectives were to (1) review procedures used for sampling vapors from tanks, (2) identify constituents in tank-vapor samples that could be related to symptoms reported by workers, (3) evaluate the toxicological implications of those constituents by comparison to established toxicological databases, (4) provide advice for additional analytical efforts, and (5) support other activities as requested by WHC. The TRP represents a wide range of expertise, including toxicology, industrial hygiene, and occupational medicine. The TRP prepared a list of target analytes that chemists at the Oregon Graduate Institute/Sandia (OGI), Oak Ridge National Laboratory (ORNL), and PNL used to establish validated methods for quantitative analysis of head-space vapors from Tank 241-C-103. This list was used by the analytical laboratories to develop appropriate analytical methods for samples from Tank 241-C-103. Target compounds on the list included acetone, acetonitrile, ammonia, benzene, 1,3-butadiene, butanal, n-butanol, hexane, 2-hexanone, methylene chloride, nitric oxide, nitrogen dioxide, nitrous oxide, dodecane, tridecane, propane nitrile, sulfur oxide, tributyl phosphate, and vinylidene chloride. The TRP considered constituent concentrations, current exposure limits, reliability of data relative to toxicity, consistency of the analytical data, and whether the material was carcinogenic or teratogenic. A final consideration in the analyte selection process was to include representative chemicals for each class of compounds found.
NASA Astrophysics Data System (ADS)
Pomata, Donatella; Di Filippo, Patrizia; Riccardi, Carmela; Buiarelli, Francesca; Gallo, Valentina
2014-02-01
The organic component of airborne particulate matter originates from both natural and anthropogenic sources, whose contributions can be identified through the analysis of chemical markers. The validation of analytical methods for compounds used as chemical markers is of great importance, especially when they must be determined in rather complex matrices. Currently, standard reference materials (SRM) with certified values for all those analytes are not available. In this paper, we report a GC/MS method for the simultaneous determination in SRM 1649a of levoglucosan and xylitol as tracers for biomass burning emissions, and arabitol, mannitol and ergosterol as biomarkers for airborne fungi. Their quantitative analysis in SRM 1649a was carried out using both internal standard calibration curves and the standard addition method. A matrix effect was observed for all analytes, minor for levoglucosan and major for the polyols and ergosterol. The result for levoglucosan, around 160 μg g-1, agreed with values reported by other authors, while no comparison was possible for xylitol (120 μg g-1), arabitol (15 μg g-1), mannitol (18 μg g-1), and ergosterol (0.5 μg g-1). The analytical method used for SRM 1649a was also applied to PM10 samples collected in Rome during four seasonal sampling campaigns. The ratios between annual analyte concentrations in the PM10 samples and in SRM 1649a were of the same order of magnitude, even though the particulate matter samples were collected at different sites and in different periods.
Derpmann, Valerie; Mueller, David; Bejan, Iustinian; Sonderfeld, Hannah; Wilberscheid, Sonja; Koppmann, Ralf; Brockmann, Klaus J; Benter, Thorsten
2014-03-01
We report on a novel method for atmospheric pressure ionization of compounds with elevated electron affinity (e.g., nitroaromatic compounds) or gas phase acidity (e.g., phenols), respectively. The method is based on the generation of thermal electrons by the photoelectric effect, followed by electron capture by oxygen (yielding O2(-)) when air is the gas matrix, or by the analyte directly when nitrogen is the matrix. Charge transfer or proton abstraction by O2(-) leads to the ionization of the analytes. The interaction of UV light with metals is a clean method for the generation of thermal electrons at atmospheric pressure. Furthermore, only negative ions are generated and neutral radical formation is minimized, in contrast to discharge- or dopant-assisted methods. Ionization takes place inside the transfer capillary of the mass spectrometer, leading to comparably short transfer times of ions to the high vacuum region of the mass spectrometer. This strongly reduces ion transformation processes, resulting in mass spectra that more closely reflect the neutral analyte distribution. Capillary atmospheric pressure electron capture ionization (cAPECI) is thus a soft and selective ionization method with detection limits in the pptV range. In comparison to standard ionization methods (e.g., PTR), cAPECI is superior with respect to both selectivity and achievable detection limits. cAPECI is therefore a promising ionization method for applications in relevant fields such as explosives detection and atmospheric chemistry.
Detection of faults and software reliability analysis
NASA Technical Reports Server (NTRS)
Knight, J. C.
1986-01-01
Multiversion or N-version programming was proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. Specific topics addressed are: failure probabilities in N-version systems, consistent comparison in N-version systems, descriptions of the faults found in the Knight and Leveson experiment, analytic models of comparison testing, characteristics of the input regions that trigger faults, fault tolerance through data diversity, and the relationship between failures caused by automatically seeded faults.
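The N-version approach summarized above can be illustrated with a toy majority voter over independently written versions of the same function; the three "versions" and the seeded bug are hypothetical stand-ins:

```python
# Toy N-version voter: run independently developed implementations of
# the same specification and take the majority answer; disagreement
# signals a fault in some version. The versions below are trivial
# stand-ins; v3 carries a deliberate bug for negative inputs.
def v1(x): return x * x
def v2(x): return x ** 2
def v3(x): return sum(x for _ in range(x))   # wrong for x < 0 (returns 0)

def vote(x):
    results = [v1(x), v2(x), v3(x)]
    for r in results:
        if results.count(r) >= 2:            # 2-out-of-3 majority
            return r
    raise RuntimeError("no majority: versions disagree")

print(vote(5))    # → 25 (all three agree)
print(vote(-3))   # → 9  (v3 is outvoted by v1 and v2)
```

Note that majority voting only masks faults that are *not* correlated across versions; the Knight and Leveson experiment referenced above found that independently developed versions can still fail on the same inputs.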
NASA Technical Reports Server (NTRS)
Prust, H. W., Jr.
1971-01-01
The results of an analytical study to determine the effect of changes in the amount, velocity, injection location, injection angle, and temperature of coolant flow on blade row performance are presented. The results show that the change in output of a cooled turbine blade row relative to the specific output of the uncooled blade row can be positive, negative, or zero. Comparisons between the analytical results and experimental results for four different cases of coolant discharge, all at a coolant temperature ratio of unity, show good agreement for three cases and rather poor agreement for the other. To further test the validity of the method, more experimental data is needed, particularly at different coolant temperature ratios.
Chlordane is a polychlorinated mixture that was used as a long-lived pesticide and now is considered a potential endocrine-disrupting compound. The Environmental Sciences Division is involved in modernizing methods for a number of analytes that are potential target substances for...
Using Digital Representations of Practical Production Work for Summative Assessment
ERIC Educational Resources Information Center
Newhouse, C. Paul
2014-01-01
This paper presents the findings of the first phase of a three-year study investigating the efficacy of the digitisation of creative practical work as digital portfolios for the purposes of high-stakes summative assessment. At the same time the paired comparisons method of scoring was tried as an alternative to analytical rubric-based marking…
USDA-ARS?s Scientific Manuscript database
Introduction – The diversity of structure and, particularly,stereochemical variation of the dehydropyrrolizidine alkaloids can present challenges for analysis and the isolation of pure compounds for the preparation of analytical standards and for toxicology studies. Objective – To investigate method...
SSD for R: A Comprehensive Statistical Package to Analyze Single-System Data
ERIC Educational Resources Information Center
Auerbach, Charles; Schudrich, Wendy Zeitlin
2013-01-01
The need for statistical analysis in single-subject designs presents a challenge, as analytical methods that are applied to group comparison studies are often not appropriate in single-subject research. "SSD for R" is a robust set of statistical functions with wide applicability to single-subject research. It is a comprehensive package…
A comparison between GO/aperture-field and physical-optics methods for offset reflectors
NASA Technical Reports Server (NTRS)
Rahmat-Samii, Y.
1984-01-01
Both geometrical optics (GO)/aperture-field and physical-optics (PO) methods are used extensively in the diffraction analysis of offset parabolic and dual reflectors. An analytical/numerical comparative study is performed to demonstrate the limitations of the GO/aperture-field method for accurately predicting the sidelobe and null positions and levels. In particular, it is shown that for offset parabolic reflectors and for feeds located at the focal point, the predicted far-field patterns (amplitude) by the GO/aperture-field method will always be symmetric even in the offset plane. This, of course, is inaccurate for the general case and it is shown that the physical-optics method can result in asymmetric patterns for cases in which the feed is located at the focal point. Representative numerical data are presented and a comparison is made with available measured data.
Jedynak, Łukasz; Jedynak, Maria; Kossykowska, Magdalena; Zagrodzka, Joanna
2017-02-20
An HPLC method with UV detection, using a C30 reversed-phase analytical column, was developed for the determination of chemical purity and assay of menaquinone-7 (MK7) in a single chromatographic run. The method is superior to the methods published in the USP Monograph in terms of selectivity, sensitivity and accuracy, as well as time, solvent and sample consumption. The developed methodology was applied to MK7 samples of active pharmaceutical ingredient (API) purity, MK7 samples of lower quality and crude MK7 samples before purification. The comparison of the results revealed that the use of the USP methodology could lead to serious overestimation (up to a few percent) of both purity and MK7 assay in menaquinone-7 samples. Copyright © 2016 Elsevier B.V. All rights reserved.
Evaluation of a new ultrasensitive assay for cardiac troponin I.
Casals, Gregori; Filella, Xavier; Bedini, Josep Lluis
2007-12-01
We evaluated the analytical and clinical performance of a new ultrasensitive cardiac troponin I (cTnI) assay on the ADVIA Centaur system (TnI-Ultra). The evaluation included determination of the detection limit, within-assay and between-assay variation, and comparison with two other non-ultrasensitive methods. Moreover, cTnI was determined in 120 patients with acute chest pain using all three methods. To evaluate the ability of the new method to detect MI earlier, it was assayed in 8 MI patients who first tested negative and then positive by the other methods. The detection limit was 0.009 microg/L and imprecision was <10% at all concentrations evaluated. In comparison with the two other methods, 10% of the diagnosed anginas were recategorized as MI. The ADVIA Centaur TnI-Ultra assay presented high reproducibility and high sensitivity. The use of the recommended lower cutpoint (0.044 microg/L) implied an increased and earlier identification of MI.
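A detection limit like the one quoted above is commonly estimated from replicate blank measurements, e.g. as mean(blank) + 3·SD(blank); a minimal sketch with invented blank readings (not the kit's actual validation protocol):

```python
import statistics

# Estimate a limit of detection (LoD) as mean(blank) + 3 * SD(blank).
# The blank readings below are fabricated for illustration only.
blanks = [0.004, 0.006, 0.005, 0.007, 0.005, 0.006]   # microg/L

mean_blank = statistics.mean(blanks)
sd_blank = statistics.stdev(blanks)    # sample standard deviation
lod = mean_blank + 3 * sd_blank
print(f"LoD ~ {lod:.4f} microg/L")
```

As the abstract on LoDs at the top of this collection argues, such a value depends entirely on blank content and instrument precision, so the input parameters must be stated for any cross-method comparison to be fair.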
Schneider, M J; Donoghue, D J
2004-05-01
Regulatory monitoring for most antibiotic residues in edible poultry tissues is often accomplished with accurate, although expensive and technically demanding, chemical analytical techniques. The purpose of this study was to determine if a simple, inexpensive bioassay could detect fluoroquinolone (FQ) residues in chicken muscle above the FDA-established tolerance (300 ppb) comparably to a liquid chromatography-fluorescence-mass spectrometry(n) method. To produce incurred enrofloxacin (ENRO) tissues (where ENRO is incorporated into complex tissue matrices) for the method comparison, 40-d-old broilers (mixed sex) were orally dosed through drinking water for 3 d at the FDA-approved dose of ENRO (50 ppm). At the end of each day of the 3-d dosing period and for 3 d postdosing, birds were sacrificed and breast and thigh muscle collected and analyzed. Both methods were able to detect ENRO at and below the tolerance level in the muscle, with limits of detection of 26 ppb (bioassay), and 0.1 ppb for ENRO and 0.5 ppb for the ENRO metabolite, ciprofloxacin (liquid chromatography-fluorescence-mass spectrometry(n)). All samples that had violative levels of antibiotic were detected by the bioassay. These results support the use of this bioassay as a screening method for examining large numbers of samples for regulatory monitoring. Positive samples should then be examined by a more extensive method, such as liquid chromatography-fluorescence-mass spectrometry(n), to provide confirmation of the analyte.
Kitchen, Elizabeth; Bell, John D.; Reeve, Suzanne; Sudweeks, Richard R.; Bradshaw, William S.
2003-01-01
A large-enrollment, undergraduate cellular biology lecture course is described whose primary goal is to help students acquire skill in the interpretation of experimental data. The premise is that this kind of analytical reasoning is not intuitive for most people and, in the absence of hands-on laboratory experience, will not readily develop unless instructional methods and examinations specifically designed to foster it are employed. Promoting scientific thinking forces changes in the roles of both teacher and student. We describe didactic strategies that include directed practice of data analysis in a workshop format, active learning through verbal and written communication, visualization of abstractions diagrammatically, and the use of ancillary small-group mentoring sessions with faculty. The implications for a teacher in reducing the breadth and depth of coverage, becoming coach instead of lecturer, and helping students to diagnose cognitive weaknesses are discussed. In order to determine the efficacy of these strategies, we have carefully monitored student performance and have demonstrated a large gain in a pre- and posttest comparison of scores on identical problems, improved test scores on several successive midterm examinations when the statistical analysis accounts for the relative difficulty of the problems, and higher scores in comparison to students in a control course whose objective was information transfer, not acquisition of reasoning skills. A novel analytical index (student mobility profile) is described that demonstrates that this improvement was not random, but a systematic outcome of the teaching/learning strategies employed. An assessment of attitudes showed that, in spite of finding it difficult, students endorse this approach to learning, but also favor curricular changes that would introduce an analytical emphasis earlier in their training. PMID:14506506
Li, Frederick; Tice, Joseph; Musselman, Brian D; Hall, Adam B
2016-09-01
Improvised explosive devices (IEDs) are often used by terrorists and criminals to create public panic and destruction, necessitating rapid investigative information. However, backlogs in many forensic laboratories resulting in part from time-consuming GC-MS and LC-MS techniques prevent prompt analytical information. Direct analysis in real time - mass spectrometry (DART-MS) is a promising analytical technique that can address this challenge in the forensic science community by permitting rapid trace analysis of energetic materials. Therefore, we have designed a qualitative analytical approach that utilizes novel sorbent-coated wire mesh and dynamic headspace concentration to permit the generation of information rich chemical attribute signatures (CAS) for trace energetic materials in smokeless powder with DART-MS. Sorbent-coated wire mesh improves the overall efficiency of capturing trace energetic materials in comparison to swabbing or vacuuming. Hodgdon Lil' Gun smokeless powder was used to optimize the dynamic headspace parameters. This method was compared to traditional GC-MS methods and validated using the NIST RM 8107 smokeless powder reference standard. Additives and energetic materials, notably nitroglycerin, were rapidly and efficiently captured by the Carbopack X wire mesh, followed by detection and identification using DART-MS. This approach has demonstrated the capability of generating comparable results with significantly reduced analysis time in comparison to GC-MS. All targeted components that can be detected by GC-MS were detected by DART-MS in less than a minute. Furthermore, DART-MS offers the advantage of detecting targeted analytes that are not amenable to GC-MS. The speed and efficiency associated with both the sample collection technique and DART-MS demonstrate an attractive and viable potential alternative to conventional techniques. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
New robust bilinear least squares method for the analysis of spectral-pH matrix data.
Goicoechea, Héctor C; Olivieri, Alejandro C
2005-07-01
A new second-order multivariate method has been developed for the analysis of spectral-pH matrix data, based on a bilinear least-squares (BLLS) model achieving the second-order advantage and handling multiple calibration standards. A simulated Monte Carlo study of synthetic absorbance-pH data allowed comparison of the newly proposed BLLS methodology with constrained parallel factor analysis (PARAFAC) and with the combination multivariate curve resolution-alternating least-squares (MCR-ALS) technique under different conditions of sample-to-sample pH mismatch and analyte-background ratio. The results indicate an improved prediction ability for the new method. Experimental data generated by measuring absorption spectra of several calibration standards of ascorbic acid and samples of orange juice were subjected to second-order calibration analysis with PARAFAC, MCR-ALS, and the new BLLS method. The results indicate that the latter method provides the best analytical results in regard to analyte recovery in samples of complex composition requiring strict adherence to the second-order advantage. Linear dependencies appear when multivariate data are produced by using the pH or a reaction time as one of the data dimensions, posing a challenge to classical multivariate calibration models. The presently discussed algorithm is useful for these latter systems.
NASA Astrophysics Data System (ADS)
Zhang, Jinfang; Zheng, Kuan; Liu, Jun; Huang, Xinting
2018-02-01
In order to support renewable energy (RE) development in North and West China and keep RE accommodation at a reasonably high level, traditional HVDC operation curves need to be adjusted to follow the output characteristics of RE, which helps reduce both the curtailed electricity and the curtailment ratio of RE. In this paper, an economic benefit analysis method based on production simulation (PS) and the analytic hierarchy process (AHP) is proposed. PS is the basic tool for analyzing the chosen power system operation situation, and the AHP method gives a suitable comparison result among the candidate schemes. Based on four different transmission curve combinations, the related economic benefit has been evaluated by PS and AHP. The results and related indices show the efficiency of the suggested method, and it is validated that operating the HVDC curve in an RE-following mode can decrease RE curtailment and improve economic operation.
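The AHP step above ranks candidate schemes from a pairwise-comparison matrix; a minimal sketch using the common geometric-mean (logarithmic least squares) approximation of the priority vector, with a made-up 3x3 matrix rather than the paper's actual scheme comparison:

```python
import numpy as np

# AHP priority weights from a reciprocal pairwise-comparison matrix
# via row geometric means. A[i, j] encodes how strongly scheme i is
# preferred to scheme j on the 1-9 Saaty scale; the matrix below is
# an invented example.
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])

gm = A.prod(axis=1) ** (1.0 / A.shape[0])   # row geometric means
w = gm / gm.sum()                           # normalized priority vector
print(np.round(w, 3))                       # weights sum to 1
```

The scheme with the largest weight is preferred; a full AHP analysis would also check the consistency ratio of A before trusting the ranking.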
A Comparison of Analytical and Data Preprocessing Methods for Spectral Fingerprinting
LUTHRIA, DEVANAND L.; MUKHOPADHYAY, SUDARSAN; LIN, LONG-ZE; HARNLY, JAMES M.
2013-01-01
Spectral fingerprinting, as a method of discriminating between plant cultivars and growing treatments for a common set of broccoli samples, was compared for six analytical instruments. Spectra were acquired for finely powdered solid samples using Fourier transform infrared (FT-IR) and Fourier transform near-infrared (NIR) spectrometry. Spectra were also acquired for unfractionated aqueous methanol extracts of the powders using molecular absorption in the ultraviolet (UV) and visible (VIS) regions and mass spectrometry with negative (MS−) and positive (MS+) ionization. The spectra were analyzed using nested one-way analysis of variance (ANOVA) and principal component analysis (PCA) to statistically evaluate the quality of discrimination. All six methods showed statistically significant differences between the cultivars and treatments. The significance of the statistical tests was improved by the judicious selection of spectral regions (IR and NIR), masses (MS+ and MS−), and derivatives (IR, NIR, UV, and VIS). PMID:21352644
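The PCA stage of the fingerprint discrimination described above can be sketched directly via an SVD of the mean-centered spectral matrix; the data here are random placeholders standing in for real spectra, not the authors' broccoli dataset:

```python
import numpy as np

# PCA of a spectral fingerprint matrix (rows = samples, columns =
# wavelengths or masses) via SVD of the mean-centered data. Samples
# from different cultivars/treatments separate in the leading PC
# scores if the spectra discriminate them.
rng = np.random.default_rng(0)
X = rng.normal(size=(12, 200))        # 12 samples, 200 spectral points

Xc = X - X.mean(axis=0)               # mean-center each spectral channel
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                        # PC scores for each sample
explained = s**2 / (s**2).sum()       # fraction of variance per PC
print(explained[:3])
```

In practice the "judicious selection of spectral regions" mentioned above corresponds to subsetting the columns of X before this decomposition.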
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bevolo, A.J.; Kjartanson, B.H.; Wonder, J.D.
1996-03-01
The goal of the Ames Expedited Site Characterization (ESC) project is to evaluate and promote both innovative technologies (IT) and state-of-the-practice technologies (SOPT) for site characterization and monitoring. In April and May 1994, the ESC project conducted site characterization, technology comparison, and stakeholder demonstration activities at a former manufactured gas plant (FMGP) owned by Iowa Electric Services (IES) Utilities, Inc., in Marshalltown, Iowa. Three areas of technology were fielded at the Marshalltown FMGP site: geophysical, analytical and data integration. The geophysical technologies are designed to assess the subsurface geological conditions so that the location, fate and transport of the target contaminants may be assessed and forecasted. The analytical technologies/methods are designed to detect and quantify the target contaminants. The data integration technology area consists of hardware and software systems designed to integrate all the site information compiled and collected into a conceptual site model on a daily basis at the site; this conceptual model then becomes the decision-support tool. Simultaneous fielding of different methods within each of the three areas of technology provided data for direct comparison of the technologies fielded, both SOPT and IT. This document reports the results of the site characterization, technology comparison, and ESC demonstration activities associated with the Marshalltown FMGP site. 124 figs., 27 tabs.
Engineering of a miniaturized, robotic clinical laboratory
Nourse, Marilyn B.; Engel, Kate; Anekal, Samartha G.; Bailey, Jocelyn A.; Bhatta, Pradeep; Bhave, Devayani P.; Chandrasekaran, Shekar; Chen, Yutao; Chow, Steven; Das, Ushati; Galil, Erez; Gong, Xinwei; Gessert, Steven F.; Ha, Kevin D.; Hu, Ran; Hyland, Laura; Jammalamadaka, Arvind; Jayasurya, Karthik; Kemp, Timothy M.; Kim, Andrew N.; Lee, Lucie S.; Liu, Yang Lily; Nguyen, Alphonso; O'Leary, Jared; Pangarkar, Chinmay H.; Patel, Paul J.; Quon, Ken; Ramachandran, Pradeep L.; Rappaport, Amy R.; Roy, Joy; Sapida, Jerald F.; Sergeev, Nikolay V.; Shee, Chandan; Shenoy, Renuka; Sivaraman, Sharada; Sosa‐Padilla, Bernardo; Tran, Lorraine; Trent, Amanda; Waggoner, Thomas C.; Wodziak, Dariusz; Yuan, Amy; Zhao, Peter; Holmes, Elizabeth A.
2018-01-01
The ability to perform laboratory testing near the patient and with smaller blood volumes would benefit patients and physicians alike. We describe our design of a miniaturized clinical laboratory system with three components: a hardware platform (i.e., the miniLab) that performs preanalytical and analytical processing steps using miniaturized sample manipulation and detection modules, an assay‐configurable cartridge that provides consumable materials and assay reagents, and a server that communicates bidirectionally with the miniLab to manage assay‐specific protocols and analyze, store, and report results (i.e., the virtual analyzer). The miniLab can detect analytes in blood using multiple methods, including molecular diagnostics, immunoassays, clinical chemistry, and hematology. Analytical performance results show that our qualitative Zika virus assay has a limit of detection of 55 genomic copies/ml. For our anti‐herpes simplex virus type 2 immunoglobulin G, lipid panel, and lymphocyte subset panel assays, the miniLab has low imprecision, and method comparison results agree well with those from the United States Food and Drug Administration‐cleared devices. With its small footprint and versatility, the miniLab has the potential to provide testing of a range of analytes in decentralized locations. PMID:29376134
Network meta-analysis, electrical networks and graph theory.
Rücker, Gerta
2012-12-01
Network meta-analysis is an active field of research in clinical biostatistics. It aims to combine information from all randomized comparisons among a set of treatments for a given medical condition. We show how graph-theoretical methods can be applied to network meta-analysis. A meta-analytic graph consists of vertices (treatments) and edges (randomized comparisons). We illustrate the correspondence between meta-analytic networks and electrical networks, where variance corresponds to resistance, treatment effects to voltage, and weighted treatment effects to current flows. Based thereon, we then show that graph-theoretical methods that have been routinely applied to electrical networks also work well in network meta-analysis. In more detail, the resulting consistent treatment effects induced in the edges can be estimated via the Moore-Penrose pseudoinverse of the Laplacian matrix. Moreover, the variances of the treatment effects are estimated in analogy to electrical effective resistances. It is shown that this method, being computationally simple, leads to the usual fixed effect model estimate when applied to pairwise meta-analysis and is consistent with published results when applied to network meta-analysis examples from the literature. Moreover, problems of heterogeneity and inconsistency, random effects modeling and including multi-armed trials are addressed. Copyright © 2012 John Wiley & Sons, Ltd.
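The Laplacian-pseudoinverse estimate described above can be sketched in a few lines; a hypothetical three-treatment network with invented effects and variances, not a worked example from the paper:

```python
import numpy as np

# Consistent treatment effects via the Moore-Penrose pseudoinverse of
# the network Laplacian (the electrical-network analogy: variance ~
# resistance, effects ~ voltage). Three treatments A, B, C; the
# observed pairwise effects and variances below are made up.
B = np.array([[ 1, -1,  0],     # edge A-B
              [ 1,  0, -1],     # edge A-C
              [ 0,  1, -1]])    # edge B-C
y = np.array([0.5, 0.8, 0.4])          # observed effects (e.g. log ORs)
w = 1.0 / np.array([0.1, 0.2, 0.1])    # inverse-variance weights

W = np.diag(w)
L = B.T @ W @ B                        # weighted graph Laplacian
Lplus = np.linalg.pinv(L)
y_hat = B @ Lplus @ B.T @ W @ y        # consistent edge estimates
# "effective resistance" = variance of the A-vs-B estimate
var_AB = Lplus[0, 0] + Lplus[1, 1] - 2 * Lplus[0, 1]
print(np.round(y_hat, 3), round(var_AB, 3))
```

The projected effects are exactly consistent: the A-B and B-C estimates sum to the A-C estimate, which is the network analogue of Kirchhoff's voltage law.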
Hewavitharana, Amitha K; Abu Kassim, Nur Sofiah; Shaw, Paul Nicholas
2018-06-08
With mass spectrometric detection in liquid chromatography, co-eluting impurities affect the analyte response through ion suppression/enhancement. The internal standard calibration method, using a co-eluting stable isotope labelled analogue of each analyte as the internal standard, is the most appropriate technique available to correct for these matrix effects. However, the technique is not without drawbacks: it is expensive, because a separate internal standard is required for each analyte, and the labelled compounds are costly or must be synthesised. Traditionally, the standard addition method has been used to overcome matrix effects in atomic spectroscopy, where it is well established. This paper proposes the same for mass spectrometric detection and demonstrates, for a vitamin D assay, that the results are comparable to those of the internal standard method using labelled analogues. As the conventional standard addition procedure does not address procedural errors, we propose the inclusion of an additional internal standard (not co-eluting). Recoveries determined on human serum samples show that the proposed method of standard addition yields more accurate results than internal standardisation using stable isotope labelled analogues. The precision of the proposed method of standard addition is superior to that of the conventional standard addition method. Copyright © 2018 Elsevier B.V. All rights reserved.
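The core of the standard addition method referenced above is a linear fit of signal versus spiked concentration, extrapolated to zero signal; a minimal sketch with synthetic, noise-free readings (true concentration 2.0 in arbitrary units), not the paper's serum data:

```python
import numpy as np

# Standard addition: spike the sample with known analyte amounts, fit
# signal vs. added concentration, and extrapolate to signal = 0. The
# unknown concentration is the magnitude of the x-intercept, so the
# calibration is performed inside the sample's own matrix.
added = np.array([0.0, 1.0, 2.0, 3.0])   # added concentration (a.u.)
signal = 1.5 * (2.0 + added)             # synthetic ideal response

slope, intercept = np.polyfit(added, signal, 1)
c_unknown = intercept / slope            # x-intercept magnitude
print(round(c_unknown, 3))               # → 2.0
```

Because every calibration point experiences the same suppression/enhancement as the sample itself, the matrix effect cancels in the slope, which is why the technique transfers naturally from atomic spectroscopy to LC-MS.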
López-Guerra, Enrique A
2017-01-01
We explore the contact problem of a flat-end indenter penetrating intermittently a generalized viscoelastic surface, containing multiple characteristic times. This problem is especially relevant for nanoprobing of viscoelastic surfaces with the highly popular tapping-mode AFM imaging technique. By focusing on the material perspective and employing a rigorous rheological approach, we deliver analytical closed-form solutions that provide physical insight into the viscoelastic sources of repulsive forces, tip–sample dissipation and virial of the interaction. We also offer a systematic comparison to the well-established standard harmonic excitation, which is the case relevant for dynamic mechanical analysis (DMA) and for AFM techniques where tip–sample sinusoidal interaction is permanent. This comparison highlights the substantial complexity added by the intermittent-contact nature of the interaction, which precludes the derivation of straightforward equations as is the case for the well-known harmonic excitations. The derivations offered have been thoroughly validated through numerical simulations. Despite the complexities inherent to the intermittent-contact nature of the technique, the analytical findings highlight the potential feasibility of extracting meaningful viscoelastic properties with this imaging method. PMID:29114450
Analytical and Experimental Vibration Analysis of a Faulty Gear System.
1994-10-01
The Wigner-Ville Distribution (WVD) was used to give a comprehensive comparison of the predicted and ... experimental results. The WVD method applied to the experimental results was also compared to other fault detection techniques to verify the WVD's ability to ... of the damaged test gear and the predicted vibration from the model with simulated gear tooth pitting damage. Results also verified that the WVD method can successfully detect and locate gear tooth wear and pitting damage.
Accelerated characterization of graphite/epoxy composites
NASA Technical Reports Server (NTRS)
Griffith, W. I.; Morris, D. H.; Brinson, H. F.
1980-01-01
A method to predict the long term compliance of unidirectional off-axis laminates from short term laboratory tests is presented. The method uses an orthotropic transformation equation and the time-stress-temperature superposition principle. Short term tests are used to construct master curves for two off-axis unidirectional laminates with fiber angles of 10 and 90 degrees. Analytical predictions of long term compliance for 30 and 60 degrees laminates are made. Comparisons with experimental data are also given.
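A minimal sketch of the superposition idea, using a synthetic power-law compliance and an assumed shift factor (not the paper's graphite/epoxy data): shifting short-term curves along log-time by log a_T collapses them onto a single master curve at the reference temperature.

```python
import numpy as np

# Synthetic power-law creep compliance: D(t; T) = D0 * (t / a_T)**n, so a
# temperature enters only through its shift factor a_T (a_T = 1 at reference).
D0, n_exp = 1.0e-9, 0.2
t = np.logspace(0, 3, 20)                     # short-term test window (s)

def compliance(t, a_T):
    return D0 * (t / a_T) ** n_exp

a_T_hot = 1.0e-2                              # assumed shift factor at elevated T
D_ref = compliance(t, 1.0)                    # reference-temperature curve
D_hot = compliance(t, a_T_hot)                # elevated-temperature curve

# Shifting the elevated-temperature data to reduced time t/a_T must make it
# coincide with the reference master curve at the same reduced times.
t_reduced = t / a_T_hot
D_master = compliance(t_reduced, 1.0)
```

In practice the shift factors are found empirically by sliding the short-term log-log curves until they overlap; the collapsed master curve then predicts compliance far beyond the laboratory time window.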
Final report on mid-polarity analytes in food matrix: mid-polarity pesticides in tea
NASA Astrophysics Data System (ADS)
Sin, Della W. M.; Li, Hongmei; Wong, S. K.; Lo, M. F.; Wong, Y. L.; Wong, Y. C.; Mok, C. S.
2015-01-01
At the Paris meeting in April 2011, the CCQM Working Group on Organic Analysis (OAWG) agreed on a suite of Track A studies meant to support the assessment of measurement capabilities needed for the delivery of measurement services within the scope of the OAWG Terms of Reference. One of the ten Track A studies discussed and agreed upon to support the 5-year plan of the CCQM Core Competence assessment was CCQM-K95 'Mid-Polarity Analytes in Food Matrix: Mid-Polarity Pesticides in Tea'. This key comparison was co-organized by the Government Laboratory of Hong Kong Special Administrative Region (GL) and the National Institute of Metrology, China (NIM). To allow wider participation, a pilot study, CCQM-P136, was run in parallel. Participants' capabilities in measuring mid-polarity analytes in a food matrix were demonstrated through this key comparison. Most of the participating NMIs/DIs successfully measured beta-endosulfan and endosulfan sulphate in the sample; however, there is room for further improvement for some participants. This key comparison involved not only extraction, clean-up, analytical separation and selective detection of the analytes in a complex food matrix, but also the pre-treatment procedures applied to the material before the extraction process. The problem of incomplete extraction of the incurred analytes from the sample matrix may not be observed simply by using spike recovery. The relative standard deviations for the data included in the KCRV calculation in this key comparison were less than 7 %, which was acceptable given the complexity of the matrix, the level of the analytes and the complexity of the analytical procedure. The main text of this report appears in Appendix B of the BIPM key comparison database, kcdb.bipm.org.
The final report has been peer-reviewed and approved for publication by CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
Quantifying construction and demolition waste: An analytical review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Zezhou; Yu, Ann T.W., E-mail: bsannyu@polyu.edu.hk; Shen, Liyin
2014-09-15
Highlights: • Prevailing C and D waste quantification methodologies are identified and compared. • One specific methodology cannot fulfill all waste quantification scenarios. • A relevance tree for appropriate quantification methodology selection is proposed. • More attention should be paid to civil and infrastructural works. • Classified information is suggested for making an effective waste management plan. - Abstract: Quantifying construction and demolition (C and D) waste generation is regarded as a prerequisite for the implementation of successful waste management. In the literature, various methods have been employed to quantify C and D waste generation at both regional and project levels. However, an integrated review that systematically describes and analyses all the existing methods has yet to be conducted. To bridge this research gap, an analytical review is conducted. Fifty-seven papers are retrieved based on a set of rigorous procedures. The characteristics of the selected papers are classified according to the following criteria: waste generation activity, estimation level and quantification methodology. Six categories of existing C and D waste quantification methodologies are identified, including the site visit method, waste generation rate method, lifetime analysis method, classification system accumulation method, variables modelling method and other particular methods. A critical comparison of the identified methods is given according to their characteristics and implementation constraints. Moreover, a decision tree is proposed for aiding the selection of the most appropriate quantification method in different scenarios. Based on the analytical review, limitations of previous studies and recommendations of potential future research directions are further suggested.
A high-performance spatial database based approach for pathology imaging algorithm evaluation
Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A.D.; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J.; Saltz, Joel H.
2013-01-01
Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. Materials and Methods: We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. 
The validated data were formatted based on the PAIS data model and loaded into a spatial database. To support efficient data loading, we have implemented a parallel data loading tool that takes advantage of multi-core CPUs to accelerate data injection. The spatial database manages both geometric shapes and image features or classifications, and enables spatial sampling, result comparison, and result aggregation through expressive structured query language (SQL) queries with spatial extensions. To provide scalable and efficient query support, we have employed a shared-nothing parallel database architecture, which distributes data homogeneously across multiple database partitions to take advantage of parallel computation power and implements spatial indexing to achieve high I/O throughput. Results: Our work proposes a high performance, parallel spatial database platform for algorithm validation and comparison. This platform was evaluated by storing, managing, and comparing analysis results from a set of brain tumor whole slide images. The tools we develop are open source and available to download. Conclusions: Pathology image algorithm validation and comparison are essential to iterative algorithm development and refinement. One critical component is the support for queries involving spatial predicates and comparisons. In our work, we develop an efficient data model and parallel database approach to model, normalize, manage and query large volumes of analytical image result data. Our experiments demonstrate that the data partitioning strategy and the grid-based indexing result in good data distribution across database nodes and reduce I/O overhead in spatial join queries through parallel retrieval of relevant data and quick subsetting of datasets. The set of tools in the framework provides a full pipeline to normalize, load, manage and query analytical results for algorithm evaluation. PMID:23599905
Aircraft Dynamic Modeling in Turbulence
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Cunningham, Kevin
2012-01-01
A method for accurately identifying aircraft dynamic models in turbulence was developed and demonstrated. The method uses orthogonal optimized multisine excitation inputs and an analytic method for enhancing signal-to-noise ratio for dynamic modeling in turbulence. A turbulence metric was developed to accurately characterize the turbulence level using flight measurements. The modeling technique was demonstrated in simulation, then applied to a subscale twin-engine jet transport aircraft in flight. Comparisons of modeling results obtained in turbulent air to results obtained in smooth air were used to demonstrate the effectiveness of the approach.
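The orthogonal multisine construction can be sketched as follows; the record length, harmonic assignments, and Schroeder-type phase schedule are illustrative assumptions, not NASA's optimized design. Assigning disjoint harmonics of a common base frequency to each control input makes the inputs mutually orthogonal over the record.

```python
import numpy as np

# Each input is a sum of cosines at a disjoint set of harmonics of the base
# frequency 1/T, so any two inputs are orthogonal over an integer number of
# periods. A Schroeder-style phase schedule keeps the peak factor modest.
T = 10.0                        # record length (s); base frequency = 1/T
fs = 100.0                      # sampling rate (Hz)
t = np.arange(0, T, 1 / fs)

def multisine(harmonics, t, T):
    u = np.zeros_like(t)
    for m, k in enumerate(harmonics, start=1):
        phase = -np.pi * m * (m - 1) / len(harmonics)   # Schroeder-type phases
        u += np.cos(2 * np.pi * k * t / T + phase)
    return u

u1 = multisine([1, 3, 5], t, T)   # input 1: one set of harmonics (example)
u2 = multisine([2, 4, 6], t, T)   # input 2: disjoint harmonics -> orthogonal

orthogonality = np.dot(u1, u2) / len(t)   # ~0 over whole periods
```

Because the inputs are orthogonal and their power sits at known frequencies, the responses to each control surface can be separated in one maneuver, and off-harmonic energy (e.g. turbulence) can be distinguished from the excitation.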
NASA Astrophysics Data System (ADS)
Bisegna, Paolo; Caselli, Federica
2008-06-01
This paper presents a simple analytical expression for the effective complex conductivity of a periodic hexagonal arrangement of conductive circular cylinders embedded in a conductive matrix, with interfaces exhibiting a capacitive impedance. This composite material may be regarded as an idealized model of a biological tissue comprising tubular cells, such as skeletal muscle. The asymptotic homogenization method is adopted, and the corresponding local problem is solved by resorting to Weierstrass elliptic functions. The effectiveness of the present analytical result is proved by convergence analysis and comparison with finite-element solutions and existing models.
NASA Technical Reports Server (NTRS)
Housner, J. M.; Anderson, M.; Belvin, W.; Horner, G.
1985-01-01
Dynamic analysis of large space antenna systems must treat the deployment as well as vibration and control of the deployed antenna. Candidate computer programs for deployment dynamics, and issues and needs for future program developments are reviewed. Some results for mast and hoop deployment are also presented. Modeling of complex antenna geometry with conventional finite element methods and with repetitive exact elements is considered. Analytical comparisons with experimental results for a 15 meter hoop/column antenna revealed the importance of accurate structural properties including nonlinear joints. Slackening of cables in this antenna is also a consideration. The technology of designing actively damped structures through analytical optimization is discussed and results are presented.
Wunderli, S; Fortunato, G; Reichmuth, A; Richard, Ph
2003-06-01
A new method to correct for the largest systematic influence in mass determination, air buoyancy, is outlined. A full description of the most relevant influence parameters is given and the combined measurement uncertainty is evaluated according to the ISO-GUM approach [1]. A new correction method for air buoyancy using an artefact is presented. This method has the advantage that only a mass artefact is needed to correct for air buoyancy. The classical approach demands the determination of the air density, and therefore suitable equipment to measure at least the air temperature, the air pressure and the relative air humidity within the demanded uncertainties (i.e. three independent measurement tasks have to be performed simultaneously). The calculated uncertainty is lower for the classical method; however, a field laboratory may not always possess fully traceable measurement systems for these room-climate parameters. A comparison of three approaches to the calculation of the combined uncertainty of mass values is presented: the classical determination of air buoyancy, the artefact method, and neglecting this systematic effect as proposed in the new EURACHEM/CITAC guide [2]. The artefact method is suitable for high-precision measurement in analytical chemistry and especially for the production of certified reference materials, reference values and analytical chemical reference materials. The method could also be used either for volume determination of solids or for air density measurement by an independent method.
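For reference, the classical route that the artefact method replaces can be sketched as follows. The simplified air-density expression and the conventional 8000 kg/m³ reference-weight density are standard metrological conventions; the sample values are invented.

```python
import math

def air_density(t_c, p_hpa, rh_percent):
    """Approximate air density (kg/m^3) from the simplified CIPM-style formula:
    temperature in deg C, pressure in hPa, relative humidity in percent."""
    return (0.34848 * p_hpa
            - 0.009 * rh_percent * math.exp(0.061 * t_c)) / (273.15 + t_c)

def buoyancy_corrected_mass(reading_g, rho_sample, t_c=20.0, p_hpa=1013.25, rh=50.0):
    """Classical correction: the balance is assumed calibrated with steel
    weights of conventional density 8000 kg/m^3."""
    rho_a = air_density(t_c, p_hpa, rh)
    return reading_g * (1 - rho_a / 8000.0) / (1 - rho_a / rho_sample)

# Invented example: a 100 g reading for a water-like sample (1000 kg/m^3)
m = buoyancy_corrected_mass(100.0, rho_sample=1000.0)
```

The correction grows as the sample density departs from that of the calibration weights, which is why low-density samples (solutions, organic reference materials) are the worst case; the artefact method avoids the three climate measurements this route requires.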
Elsohaby, Ibrahim; McClure, J Trenton; Riley, Christopher B; Bryanton, Janet; Bigsby, Kathryn; Shaw, R Anthony
2018-02-20
Attenuated total reflectance infrared (ATR-IR) spectroscopy is a simple, rapid and cost-effective method for the analysis of serum. However, the complex nature of serum remains a limiting factor for the reliability of this method. We investigated the benefits of coupling centrifugal ultrafiltration with ATR-IR spectroscopy for the quantification of human serum IgA concentration. Human serum samples (n = 196) were analyzed for IgA using an immunoturbidimetric assay. ATR-IR spectra were acquired for whole serum samples and for the retentate (residue) reconstituted with saline following 300 kDa centrifugal ultrafiltration. IR-based analytical methods were developed for each of the two spectroscopic datasets, and the accuracies of the two methods were compared. The analytical methods were based upon partial least squares regression (PLSR) calibration models, one with 5 PLS factors (for whole serum) and the second with 9 PLS factors (for the reconstituted retentate). Comparison of the two sets of IR-based analytical results to reference IgA values revealed improvements in the Pearson correlation coefficient (from 0.66 to 0.76) and in the root mean squared error of prediction of IR-based IgA concentrations (from 102 to 79 mg/dL) for the ultrafiltration retentate-based method as compared to the method built upon whole serum spectra. Depleting human serum of low molecular weight proteins using a 300 kDa centrifugal filter thus enhances the accuracy of IgA quantification by ATR-IR spectroscopy. Further evaluation and optimization of this general approach may ultimately lead to routine analysis of a range of high molecular-weight analytical targets that are otherwise unsuitable for IR-based analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
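The PLSR calibration step can be sketched with a minimal PLS1 (NIPALS) implementation on synthetic "spectra". The data, band positions, and noise level below are invented, and 5 factors are used merely to mirror the whole-serum model; this is not the paper's serum dataset.

```python
import numpy as np

# Synthetic calibration problem: 200 samples x 100 channels, with the
# concentration encoded in a few "analyte bands" plus channel noise.
rng = np.random.default_rng(0)
conc = rng.uniform(50, 300, 200)                     # reference concentrations
X = rng.normal(0.0, 1.0, (200, 100))
X[:, 10:15] += conc[:, None] / 50.0                  # analyte-related bands

def pls1(X, y, n_comp):
    """Minimal PLS1 via NIPALS; returns centering terms and coefficients."""
    xm, ym = X.mean(0), y.mean()
    Xc, yc = X - xm, y - ym
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)               # weight vector
        t = Xc @ w                           # scores
        tt = t @ t
        p_load = Xc.T @ t / tt               # X loadings
        q_load = yc @ t / tt                 # y loading
        Xc = Xc - np.outer(t, p_load)        # deflate
        yc = yc - t * q_load
        W.append(w); P.append(p_load); q.append(q_load)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)      # regression coefficients
    return xm, ym, B

xm, ym, B = pls1(X[:150], conc[:150], n_comp=5)      # calibrate on 150 samples
pred = ym + (X[150:] - xm) @ B                       # predict held-out 50
rmsep = float(np.sqrt(np.mean((conc[150:] - pred) ** 2)))
r = float(np.corrcoef(conc[150:], pred)[0, 1])       # Pearson correlation
```

The held-out RMSEP and Pearson r computed here play the same role as the figures of merit quoted in the abstract for comparing the whole-serum and retentate-based models.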
Gao, Kai; Chung, Eric T.; Gibson, Richard L.; ...
2015-06-05
The development of reliable methods for upscaling fine scale models of elastic media has long been an important topic for rock physics and applied seismology. Several effective medium theories have been developed to provide elastic parameters for materials such as finely layered media or randomly oriented or aligned fractures. In such cases, the analytic solutions for upscaled properties can be used for accurate prediction of wave propagation. However, such theories cannot be applied directly to homogenize elastic media with more complex, arbitrary spatial heterogeneity. We therefore propose a numerical homogenization algorithm based on multiscale finite element methods for simulating elastic wave propagation in heterogeneous, anisotropic elastic media. Specifically, our method used multiscale basis functions obtained from a local linear elasticity problem with appropriately defined boundary conditions. Homogenized, effective medium parameters were then computed using these basis functions, and the approach applied a numerical discretization that is similar to the rotated staggered-grid finite difference scheme. Comparisons of the results from our method and from conventional, analytical approaches for finely layered media showed that the homogenization reliably estimated elastic parameters for this simple geometry. Additional tests examined anisotropic models with arbitrary spatial heterogeneity where the average size of the heterogeneities ranged from several centimeters to several meters, and the ratio between the dominant wavelength and the average size of the arbitrary heterogeneities ranged from 10 to 100. Comparisons to finite-difference simulations proved that the numerical homogenization was equally accurate for these complex cases.
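The analytic layered-media benchmark referred to above is typically the Backus average; a sketch for a two-layer periodic stack of isotropic layers follows. The layer moduli and thickness fractions are invented, and the formulas are the standard VTI effective-medium averages.

```python
import numpy as np

# Backus average for a finely layered stack of isotropic layers (VTI result).
lam = np.array([10.0, 20.0])     # Lame lambda per layer (GPa)
mu = np.array([5.0, 15.0])       # shear modulus per layer (GPa)
f = np.array([0.5, 0.5])         # thickness fractions (sum to 1)

def avg(x):
    """Thickness-weighted arithmetic mean over the layers."""
    return float(np.sum(f * x))

M = lam + 2 * mu                  # P-wave modulus per layer
C33 = 1.0 / avg(1.0 / M)          # harmonic mean: vertical P modulus
C44 = 1.0 / avg(1.0 / mu)         # harmonic mean: vertical shear modulus
C66 = avg(mu)                     # arithmetic mean: horizontal shear modulus
C13 = avg(lam / M) * C33
C11 = avg(4 * mu * (lam + mu) / M) + avg(lam / M) ** 2 * C33
```

Because the vertical moduli are harmonic means while the horizontal ones are (partly) arithmetic means, C11 exceeds C33 whenever the layers differ, which is the intrinsic anisotropy a numerical homogenization scheme must reproduce in this limit.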
Online analysis of chlorine stable isotopes in chlorinated ethylenes: an inter-laboratory study
NASA Astrophysics Data System (ADS)
Bernstein, Anat; Shouakar-Stash, Orfan; Hunkeler, Daniel; Sakaguchi-Söder, Kaori; Laskov, Christine; Aravena, Ramon; Elsner, Martin
2010-05-01
In the last decade, compound-specific stable isotope analysis of groundwater pollutants became an important tool for identifying different sources of the same pollutant and for tracking natural attenuation processes in the sub-surface. It has been shown that trends in the isotopic composition of the target compounds can shed light on in-situ processes that are otherwise difficult to track. Analytical methods for carbon, nitrogen and hydrogen were established and are by now frequently used for a variety of organic pollutants. Yet, the motivation for introducing analytical methods for new isotopes is growing, all the more since the advantages of using two or more stable isotopes to gain better insight into degradation pathways are well accepted. One important element that demands the development of appropriate analytical methods is chlorine, which is found in various groups of organic pollutants, among them the chlorinated ethylenes. Chlorinated ethylenes are considered high-priority environmental pollutants, and the development of suitable chlorine isotope methods for this group of pollutants is highly desirable. Ideally, stable isotope techniques should be capable of determining the isotopic composition of an individual target compound in a non-pure mixture, without requiring laborious off-line treatment. Indeed, in recent years two different concepts for on-line chlorine isotope analysis were introduced, using either a standard quadrupole GC/MS (Sakaguchi-Söder et al., 2007) or a GC/IRMS (Shouakar-Stash et al., 2006). We present a comparison of the performances of the two concepts, carried out in five different laboratories: Waterloo (GC/IRMS), Neuchâtel (GC/MS), Darmstadt (GC/MS), Tübingen (GC/MS) and Munich (GC/IRMS).
This comparison was performed on pure trichloroethylene and dichloroethylene products from different manufacturers, as well as on trichloroethylene and dichloroethylene samples that had been exposed to biodegradation. This study sets standards for further application of these techniques to distinguish sources and track degradation processes in the sub-surface.
Yebra, M Carmen
2012-01-01
A simple and rapid analytical method was developed for the determination of iron, manganese, and zinc in soluble solid samples. The method is based on continuous ultrasonic water dissolution of the sample (5-30 mg) at room temperature followed by flow injection flame atomic absorption spectrometric determination. A good precision of the whole procedure (1.2-4.6%) and a sample throughput of ca. 25 samples h⁻¹ were obtained. The proposed green analytical method has been successfully applied to the determination of iron, manganese, and zinc in soluble solid food samples (soluble cocoa and soluble coffee) and pharmaceutical preparations (multivitamin tablets). The ranges of concentrations found were 21.4-25.61 μg g⁻¹ for iron, 5.74-18.30 μg g⁻¹ for manganese, and 33.27-57.90 μg g⁻¹ for zinc in soluble solid food samples, and 3.75-9.90 μg g⁻¹ for iron, 0.47-5.05 μg g⁻¹ for manganese, and 1.55-15.12 μg g⁻¹ for zinc in multivitamin tablets. The accuracy of the proposed method was established by comparison with the conventional wet acid digestion method using a paired t-test, indicating the absence of systematic errors.
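The paired t-test used for the accuracy check can be sketched as follows. The paired results are invented illustration data; 2.776 is the two-tailed critical value for df = 4 at α = 0.05.

```python
import math

# Invented paired results: the same five samples measured by two methods.
method_a = [21.4, 25.6, 5.7, 18.3, 33.3]     # e.g. proposed method (ug/g)
method_b = [21.0, 25.9, 5.9, 18.0, 33.5]     # e.g. wet acid digestion (ug/g)

d = [a - b for a, b in zip(method_a, method_b)]   # per-sample differences
n = len(d)
mean_d = sum(d) / n
sd_d = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))
t_stat = mean_d / (sd_d / math.sqrt(n))

t_crit = 2.776          # two-tailed critical value, alpha = 0.05, df = n - 1 = 4
no_systematic_error = abs(t_stat) < t_crit
```

Pairing removes the sample-to-sample concentration spread from the comparison, so only the between-method differences are tested; failing to reject here supports the "absence of systematic errors" conclusion.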
See, Randolph B.; Schroder, LeRoy J.; Willoughby, Timothy C.
1988-01-01
During 1986, the U.S. Geological Survey operated three programs to provide external quality-assurance monitoring of the National Atmospheric Deposition Program and National Trends Network. An intersite-comparison program was used to assess the accuracy of onsite pH and specific-conductance determinations at quarterly intervals. The blind-audit program was used to assess the effect of routine sample handling on the precision and bias of program and network wet-deposition data. Analytical results from four laboratories, which routinely analyze wet-deposition samples, were examined to determine whether differences existed between laboratory analytical results and to provide estimates of the analytical precision of each laboratory. On average, 78 and 89 percent of the site operators participating in the intersite-comparison program met the network goals for pH and specific conductance, respectively. A comparison of analytical values versus actual values for samples submitted as part of the blind-audit program indicated that analytical values were slightly but significantly (α = 0.01) larger than actual values for pH, magnesium, sodium, and sulfate; analytical values for specific conductance were slightly less than actual values. The decreased precision in the analyses of blind-audit samples when compared to interlaboratory studies indicates that a large amount of the uncertainty in network deposition data may be a result of routine field operations. The results of the interlaboratory comparison study indicated that the magnitude of the difference between laboratory analyses was small for all analytes. Analyses of deionized, distilled water blanks by participating laboratories indicated that the laboratories had difficulty measuring analyte concentrations near their reported detection limits. (USGS)
ERIC Educational Resources Information Center
Gustafsson, Jan-Eric
2013-01-01
In educational effectiveness research, it frequently has proven difficult to make credible inferences about cause and effect relations. The article first identifies the main categories of threats to valid causal inference from observational data, and discusses designs and analytic approaches which protect against them. With the use of data from 22…
NASA Technical Reports Server (NTRS)
Ogletree, G.; Coccoli, J.; Mckern, R.; Smith, M.; White, R.
1972-01-01
The ten candidate SIMS configurations were reduced to three in preparation for the final trade comparison. The report emphasizes subsystem design trades, star availability studies, data processing (smoothing) methods, and the analytical and simulation studies at subsystem and system levels from which candidate accuracy estimates will be presented.
Kinetic modeling and fitting software for interconnected reaction schemes: VisKin.
Zhang, Xuan; Andrews, Jared N; Pedersen, Steen E
2007-02-15
Reaction kinetics for complex, highly interconnected kinetic schemes are modeled using analytical solutions to a system of ordinary differential equations. The algorithm employs standard linear algebra methods that are implemented using MATLAB functions in a Visual Basic interface. A graphical user interface for simple entry of reaction schemes facilitates comparison of a variety of reaction schemes. To ensure microscopic balance, graph theory algorithms are used to detect violations of thermodynamic cycle constraints. Analytical solutions based on linear differential equations allow fast comparisons of first-order kinetic rates and amplitudes as a function of changing ligand concentrations. For the analysis of higher-order kinetics, we also implemented a solution using numerical integration. To determine rate constants from experimental data, fitting algorithms that adjust rate constants to fit the model to imported data were implemented using the Levenberg-Marquardt or Broyden-Fletcher-Goldfarb-Shanno methods. We have also included the ability to carry out global fitting of data sets obtained at varying ligand concentrations. These tools are combined in a single package, which we have dubbed VisKin, to guide and analyze kinetic experiments. The software is available online for use on PCs.
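The analytical-solution route for first-order schemes can be sketched via eigendecomposition of the rate matrix. The two-state A ⇌ B scheme and rate constants below are a hypothetical example, written in Python rather than the package's MATLAB/Visual Basic stack.

```python
import numpy as np

# Linear first-order kinetics dc/dt = K c, solved analytically through the
# eigendecomposition of K (equivalent to a matrix exponential).
k1, k2 = 2.0, 1.0                       # A -> B and B -> A rate constants
K = np.array([[-k1,  k2],
              [ k1, -k2]])              # columns sum to zero: mass conserved

c0 = np.array([1.0, 0.0])               # start with pure A

def c_of_t(t):
    """Concentrations at time t: c(t) = V exp(Lambda t) V^-1 c0."""
    w, V = np.linalg.eig(K)
    return (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V) @ c0).real

c_eq = c_of_t(50.0)                     # long-time limit -> equilibrium
```

The eigenvalues (here 0 and -(k1 + k2)) are the observable relaxation rates, and the eigenvector for the zero eigenvalue gives the equilibrium distribution, which for this scheme satisfies detailed balance k1·[A]eq = k2·[B]eq.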
NASA Astrophysics Data System (ADS)
Lindner, T.; Bonebeau, S.; Drehmann, R.; Grund, T.; Pawlowski, L.; Lampke, T.
2016-03-01
In wire arc spraying, the raw material needs to exhibit sufficient formability and ductility in order to be processed. By using an electrically conductive, metallic sheath, it is also possible to handle non-conductive and/or brittle materials such as ceramics. In comparison to a solid wire, a cored wire has a heterogeneous material distribution. Due to this fact and the complex thermodynamic processes during wire arc spraying, it is very difficult to predict the resulting chemical composition of the coating with sufficient accuracy. An Inconel 625 cored wire was used to investigate this issue. In a comparative study, the analytical results for the raw material were compared to those for arc sprayed coatings and for droplets remelted in an arc furnace under argon atmosphere. Energy-dispersive X-ray spectroscopy (EDX) and X-ray fluorescence (XRF) analysis were used to determine the chemical composition. Phase determination was performed by X-ray diffraction (XRD). The results were compared with the manufacturer's specifications and evaluated with respect to differences in chemical composition. The comparison between the feedstock powder, the remelted droplets and the thermally sprayed coatings allows an evaluation of the influence of the processing methods on the resulting chemical and phase composition.
The induction of mycotoxins by trichothecene producing Fusarium species.
Lowe, Rohan; Jubault, Mélanie; Canning, Gail; Urban, Martin; Hammond-Kosack, Kim E
2012-01-01
In recent years, many Fusarium species have emerged which now threaten the productivity and safety of small grain cereal crops worldwide. During floral infection and post-harvest on stored grains, the Fusarium hyphae produce various types of harmful mycotoxins which subsequently contaminate food and feed products. This article focuses specifically on the induction and production of the type B sesquiterpenoid trichothecene mycotoxins. Methods are described which permit the small- or large-scale production and detection, in liquid culture, of deoxynivalenol (DON) and its various acetylated derivatives. A wheat (Triticum aestivum L.) ear inoculation assay is also explained, which allows the direct comparison of mycotoxin production by species, chemotypes and strains with different growth rates and/or disease-causing abilities. Each of these methods is robust and can be used for either detailed time-course studies or end-point analyses. Various analytical methods are available to quantify the levels of DON, 3A-DON and 15A-DON. Some criteria to be considered when selecting between the available analytical methods are briefly discussed.
Wang, Li-Li; Zhang, Yun-Bin; Sun, Xiao-Ya; Chen, Sui-Qing
2016-05-08
A quantitative analysis of multi-components by a single marker (QAMS) method was established for quality evaluation, and its feasibility was validated by the simultaneous quantitative assay of four main components in Linderae Reflexae Radix. Four main components, pinostrobin, pinosylvin, pinocembrin, and 3,5-dihydroxy-2-(1-p-mentheneyl)-trans-stilbene, were selected as analytes for quality evaluation by RP-HPLC coupled with a UV detector. The method was evaluated by comparing the quantitative results of the external standard method and QAMS on different HPLC systems. The results showed no significant differences between the contents of the four components of Linderae Reflexae Radix determined by the external standard method and by QAMS (RSD < 3%). The contents of the four analytes (pinosylvin, pinocembrin, pinostrobin, and Reflexanbene I) in Linderae Reflexae Radix were determined from the single marker pinosylvin. The fingerprints were determined on Shimadzu LC-20AT and Waters e2695 HPLC systems equipped with three different columns.
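The QAMS arithmetic can be sketched as follows: only the marker (pinosylvin, per the paper) is calibrated directly, and the other analytes are quantified through pre-established relative correction factors. All numerical values below are invented for illustration.

```python
# Single-marker calibration: response factor of the marker from its own
# calibration standard (peak area per unit concentration).
k_marker = 1500.0 / 10.0        # area units per (ug/mL), hypothetical

# Relative correction factors f_i = k_marker / k_i, determined once during
# method development (hypothetical values):
f = {"pinocembrin": 1.20, "pinostrobin": 0.85}

# Peak areas measured in a sample extract (hypothetical):
sample_areas = {"pinosylvin": 900.0, "pinocembrin": 660.0, "pinostrobin": 510.0}

# Marker is quantified directly; the other analytes through their factors.
conc = {"pinosylvin": sample_areas["pinosylvin"] / k_marker}
for name, factor in f.items():
    conc[name] = factor * sample_areas[name] / k_marker
```

Because the correction factors are properties of the analyte/marker pair rather than of a particular run, routine analysis needs only the single marker standard, which is the economy QAMS offers over full external standardization.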
Analytical study of striated nozzle flow with small radius of curvature ratio throats
NASA Technical Reports Server (NTRS)
Norton, D. J.; White, R. E.
1972-01-01
An analytical method was developed which is capable of estimating the chamber and throat conditions in a nozzle with a low radius of curvature throat. The method was programmed in standard FORTRAN 4 and includes chemical equilibrium calculation subprograms (modified NASA Lewis program CEC71) as an integral part. The method determines detailed and gross rocket characteristics in the presence of striated flows and gives detailed results for the motor chamber and throat plane with as many as 20 discrete zones. The method employs a simultaneous solution of the mass, momentum, and energy equations and allows propellant types, O/F ratios, propellant distribution, nozzle geometry, and injection schemes to be varied so as to predict spatial velocity, density, pressure, and other thermodynamic variable distributions in the chamber as well as the throat. Results for small radius of curvature throats have shown good agreement with experimental results. Both gaseous and liquid injection may be considered with frozen or equilibrium flow calculations.
Sharma, Dharmendar Kumar; Irfanullah, Mir; Basu, Santanu Kumar; Madhu, Sheri; De, Suman; Jadhav, Sameer; Ravikanth, Mangalampalli; Chowdhury, Arindam
2017-01-18
While fluorescence microscopy has become an essential tool amongst chemists and biologists for the detection of various analytes within cellular environments, the non-uniform spatial distribution of sensors within cells often restricts extraction of reliable information on the relative abundance of analytes in different subcellular regions. As an alternative to existing sensing methodologies such as ratiometric or FRET imaging, where the relative proportion of analyte with respect to the sensor can be obtained within cells, we propose a methodology using spectrally resolved fluorescence microscopy, via which both the relative abundance of sensor and its relative proportion with respect to the analyte can be simultaneously extracted for local subcellular regions. This method is exemplified using a BODIPY sensor, capable of detecting mercury ions within cellular environments, characterized by a spectral blue-shift and concurrent enhancement of emission intensity. Spectral emission envelopes collected from sub-microscopic regions allowed us to compare the shift in transition energies as well as integrated emission intensities within various intracellular regions. Construction of a 2D scatter plot using spectral shifts and emission intensities, which depend on the relative amount of analyte with respect to sensor and the approximate local amount of the probe, respectively, enabled qualitative extraction of the relative abundance of analyte in various local regions within a single cell as well as amongst different cells. Although the comparisons remain semi-quantitative, this approach involving the analysis of multiple spectral parameters opens up an alternative way to extract the spatial distribution of analytes in heterogeneous systems.
The proposed method would be especially relevant for fluorescent probes that undergo relatively small shifts in transition energies compared to their emission bandwidths, which often restricts their usage for quantitative ratiometric imaging in cellular media due to strong cross-talk between energetically separated detection channels.
Modification of a successive corrections objective analysis for improved higher order calculations
NASA Technical Reports Server (NTRS)
Achtemeier, Gary L.
1988-01-01
The use of objectively analyzed fields of meteorological data for the initialization of numerical prediction models and for complex diagnostic studies places the requirements upon the objective method that derivatives of the gridded fields be accurate and free from interpolation error. A modification was proposed for an objective analysis developed by Barnes that provides improvements in analysis of both the field and its derivatives. Theoretical comparisons, comparisons between analyses of analytical monochromatic waves, and comparisons between analyses of actual weather data are used to show the potential of the new method. The new method restores more of the amplitudes of desired wavelengths while simultaneously filtering more of the amplitudes of undesired wavelengths. These results also hold for the first and second derivatives calculated from the gridded fields. Greatest improvements were for the Laplacian of the height field; the new method reduced the variance of undesirable very short wavelengths by 72 percent. Other improvements were found in the divergence of the gridded wind field and near the boundaries of the field of data.
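The successive-corrections scheme that the abstract modifies can be sketched in a few lines: Gaussian distance weights spread observation increments onto the grid, and later passes shrink the weighting scale to restore amplitude at desired wavelengths. This is a generic Barnes sketch under assumed parameters, not Achtemeier's specific modification.

```python
import numpy as np

def barnes_analysis(obs_pts, obs_vals, grid_pts, kappa, gamma=0.3, passes=2):
    """Minimal successive-corrections (Barnes) analysis.

    Gaussian weights w = exp(-r^2 / kappa); each pass shrinks kappa by
    gamma so later passes restore more of the desired-wavelength amplitude.
    """
    grid = np.zeros(len(grid_pts))
    background = np.zeros(len(obs_pts))   # current analysis at the obs sites
    k = kappa
    for _ in range(passes):
        d2 = ((grid_pts[:, None, :] - obs_pts[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / k)
        grid += (w * (obs_vals - background)[None, :]).sum(1) / w.sum(1)
        # update the background by analyzing back to the observation sites
        d2o = ((obs_pts[:, None, :] - obs_pts[None, :, :]) ** 2).sum(-1)
        wo = np.exp(-d2o / k)
        background += (wo * (obs_vals - background)[None, :]).sum(1) / wo.sum(1)
        k *= gamma
    return grid

# demo: with a single observation, the normalized weights reproduce the
# observed value over the whole grid
field = barnes_analysis(np.array([[0.0, 0.0]]), np.array([5.0]),
                        np.array([[0.0, 0.0], [2.0, 0.0]]), kappa=1.0)
```

The derivative quality discussed in the abstract comes from how the response function of this weighting filters each wavelength; the modification trades filtering of short waves against amplitude restoration.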
Analytic Formulation and Numerical Implementation of an Acoustic Pressure Gradient Prediction
NASA Technical Reports Server (NTRS)
Lee, Seongkyu; Brentner, Kenneth S.; Farassat, F.; Morris, Philip J.
2008-01-01
Two new analytical formulations of the acoustic pressure gradient have been developed and implemented in the PSU-WOPWOP rotor noise prediction code. The pressure gradient is a key ingredient in acoustic scattering problems, where it supplies the boundary condition. The first formulation is derived from the gradient of the Ffowcs Williams-Hawkings (FW-H) equation and has a form in which the observer-time differentiation sits outside the integrals. In the second formulation, the time differentiation is taken inside the integrals analytically, which avoids numerical differentiation with respect to observer time and is therefore computationally more efficient. The acoustic pressure gradient predicted by these new formulations is validated through comparison with available exact solutions for stationary and moving monopole sources. The agreement between the predictions and exact solutions is excellent. The formulations are then applied to rotor noise problems for two model rotors. A purely numerical approach is compared with the analytical formulations; the agreement between the analytical formulations and the numerical method is excellent for both stationary and moving observer cases.
Ongay, Sara; Hendriks, Gert; Hermans, Jos; van den Berge, Maarten; ten Hacken, Nick H T; van de Merbel, Nico C; Bischoff, Rainer
2014-01-24
In spite of the data suggesting the potential of urinary desmosine (DES) and isodesmosine (IDS) as biomarkers for elevated lung elastic fiber turnover, further validation in large-scale studies of COPD populations, as well as the analysis of longitudinal samples, is required. Validated analytical methods that allow the accurate and precise quantification of DES and IDS in human urine are mandatory in order to properly evaluate the outcome of such clinical studies. In this work, we present the development and full validation of two methods that allow DES and IDS measurement in human urine, one for the free and one for the total (free + peptide-bound) forms. To this end we compared the two principal approaches used for the absolute quantification of endogenous compounds in biological samples: analysis against calibrators containing authentic analyte in a surrogate matrix, or surrogate analyte in the authentic matrix. The validated methods were employed for the analysis of a small set of samples including healthy never-smokers, healthy current-smokers, and COPD patients. This is the first time that the analysis of urinary free DES, free IDS, total DES, and total IDS has been fully validated and that the surrogate analyte approach has been evaluated for their quantification in biological samples. The results indicate that the presented methods have the necessary quality and level of validation to assess the potential of urinary DES and IDS levels as biomarkers for the progression of COPD and the effect of therapeutic interventions. Copyright © 2014 Elsevier B.V. All rights reserved.
Bioassays as one of the Green Chemistry tools for assessing environmental quality: A review.
Wieczerzak, M; Namieśnik, J; Kudłak, B
2016-09-01
For centuries, mankind has contributed to irreversible environmental changes, but only with the science of recent decades have researchers been able to assess the scale of this impact. The introduction of laws and standards to ensure environmental cleanliness requires comprehensive environmental monitoring, which should also meet the requirements of Green Chemistry. The application of Green Chemistry principles should extend to all techniques and methods of pollutant analysis and environmental monitoring. The classical methods of chemical analysis do not always match the twelve principles of Green Chemistry: they are often expensive and employ toxic, environmentally unfriendly solvents in large quantities, generating hazardous waste while consuming large volumes of resources. There is therefore a need to develop reliable techniques that would not only meet the requirements of Green Analytical Chemistry but could also complement, and sometimes provide an alternative to, conventional classical analytical methods. These alternatives may be found in bioassays. Commercially available certified bioassays often come in the form of ready-to-use toxkits, and they are easy to use and relatively inexpensive in comparison with certain conventional analytical methods. The aim of this study is to provide evidence that bioassays can be a complementary alternative to classical methods of analysis and can fulfil Green Analytical Chemistry criteria. The test organisms discussed in this work include single-celled organisms, such as cell lines, fungi (yeast), and bacteria, and multicellular organisms, such as invertebrate and vertebrate animals and plants. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Braga, Jez Willian Batista; Trevizan, Lilian Cristina; Nunes, Lidiane Cristina; Rufini, Iolanda Aparecida; Santos, Dário, Jr.; Krug, Francisco José
2010-01-01
The application of laser-induced breakdown spectrometry (LIBS) to the direct analysis of plant materials is a great challenge that still requires effort for its development and validation. To this end, a series of experimental approaches has been carried out in order to show that LIBS can be used as an alternative to wet-acid-digestion based methods for the analysis of agricultural and environmental samples. The large amount of information provided by LIBS spectra for these complex samples increases the difficulty of selecting the most appropriate wavelengths for each analyte. Some applications have suggested that improvements in both accuracy and precision can be achieved by applying multivariate calibration to LIBS data, in comparison with univariate regression on line emission intensities. In the present work, the performance of univariate and multivariate calibration, the latter based on partial least squares regression (PLSR), was compared for the analysis of pellets of plant materials made from an appropriate mixture of cryogenically ground samples with cellulose as the binding agent. The development of a specific PLSR model for each analyte and the selection of spectral regions containing only lines of the analyte of interest were the best conditions for the analysis. In this particular application, the models showed similar performance, but PLSR appeared more robust owing to a lower occurrence of outliers in comparison to the univariate method. The data suggest that efforts on sample presentation and the fitness of standards for LIBS analysis are needed in order to fulfill the boundary conditions for matrix-independent development and validation.
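The univariate-versus-multivariate comparison above can be illustrated on synthetic data. The sketch below uses ordinary least squares over all channels as a stand-in for PLSR (a real PLSR model, e.g. scikit-learn's PLSRegression, would additionally project onto a few latent variables); the spectra, slopes, and noise levels are invented, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
conc = rng.uniform(1.0, 10.0, 40)                 # reference concentrations
X = np.column_stack([
    50.0 * conc + rng.normal(0, 5, 40),           # analyte emission line
    20.0 * conc + rng.normal(0, 5, 40),           # second sensitive line
    rng.normal(0, 5, 40),                         # uninformative channel
])

# Univariate: straight-line calibration on the single analyte line
slope, intercept = np.polyfit(X[:, 0], conc, 1)
pred_uni = slope * X[:, 0] + intercept

# Multivariate: least squares over all channels plus an intercept
A = np.column_stack([X, np.ones_like(conc)])
coef, *_ = np.linalg.lstsq(A, conc, rcond=None)
pred_multi = A @ coef

def rmse(p):
    return float(np.sqrt(np.mean((p - conc) ** 2)))

rmse_uni, rmse_multi = rmse(pred_uni), rmse(pred_multi)
```

In-sample, the multivariate fit can never do worse than the univariate one because the single-line model lies in its span; the paper's point is the out-of-sample behavior, where PLSR's latent-variable compression guards against overfitting the extra channels.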
Modal method for Second Harmonic Generation in nanostructures
NASA Astrophysics Data System (ADS)
Héron, S.; Pardo, F.; Bouchon, P.; Pelouard, J.-L.; Haïdar, R.
2015-05-01
Nanophotonic devices show interesting features for nonlinear response enhancement, but numerical tools are mandatory to fully determine their behaviour. To address this need, we present a numerical modal method dedicated to nonlinear optics calculations under the undepleted pump approximation. It is briefly explained in the framework of Second Harmonic Generation for both plane waves and focused beams. The nonlinear behaviour of selected nanostructures is then investigated, comparisons with existing analytical results are shown, and the convergence of the code is studied.
Accelerated characterization of graphite/epoxy composites
NASA Technical Reports Server (NTRS)
Griffith, W. I.; Morris, D. H.; Brinson, H. F.
1980-01-01
A method to predict the long-term compliance of unidirectional off-axis laminates from short-term laboratory tests is presented. The method uses an orthotropic transformation equation and the time-stress-temperature superposition principle. Short-term tests are used to construct master curves for two off-axis unidirectional laminates with fiber angles of 10 deg and 90 deg. In addition, analytical predictions of long-term compliance for 30 deg and 60 deg laminates are made. Comparisons with experimental data are also given.
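The accelerated characterization idea above can be made concrete with a minimal time-temperature superposition sketch: a power-law creep compliance evaluated at reduced time t/a_T, so a short test with a small shift factor probes the same compliance as a long test at the reference condition. The compliance constants and shift factor are illustrative assumptions, not the paper's 10 deg/90 deg master-curve data.

```python
import numpy as np

# hypothetical power-law creep compliance (illustrative constants, GPa^-1)
D0, D1, n = 0.5, 0.02, 0.2

def compliance(t_hours, a_T=1.0):
    # time-stress-temperature superposition: evaluate at reduced time t / a_T;
    # a_T < 1 means the accelerated condition speeds up the response
    return D0 + D1 * (t_hours / a_T) ** n

# a 1-hour accelerated test with shift factor 0.01 probes the same
# compliance as a 100-hour test at the reference condition
assert np.isclose(compliance(1.0, a_T=0.01), compliance(100.0))
```

Master-curve construction amounts to choosing the a_T values that make short-term curves at different conditions overlap on the reduced-time axis.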
Propeller flow visualization techniques
NASA Technical Reports Server (NTRS)
Stefko, G. L.; Paulovich, F. J.; Greissing, J. P.; Walker, E. D.
1982-01-01
Propeller flow visualization techniques were tested. The actual operating blade shape as it determines the actual propeller performance and noise was established. The ability to photographically determine the advanced propeller blade tip deflections, local flow field conditions, and gain insight into aeroelastic instability is demonstrated. The analytical prediction methods which are being developed can be compared with experimental data. These comparisons contribute to the verification of these improved methods and give improved capability for designing future advanced propellers with enhanced performance and noise characteristics.
Schuh, V.; Šír, J.; Galliová, J.; Švandová, E.
1966-01-01
A comparison of the weight and photometric methods of primary assay of BCG vaccine has been made, using a vaccine prepared in albumin-free medium but containing Tween 80. In the weight method, the bacteria were trapped on a membrane filter; for photometry a Pulfrich Elpho photometer and an instrument of Czech origin were used. The photometric results were the more precise, provided that the measurements were made within two days of completion of growth; after this time the optical density of the suspension began to decrease slowly. The lack of precision of the weighing method is probably due to the small weight of culture deposit (which was almost on the limit of accuracy of the analytical balance) and to difficulties in the manipulation of the ultrafilter. PMID:5335458
On-matrix derivatization extraction of chemical weapons convention relevant alcohols from soil.
Chinthakindi, Sridhar; Purohit, Ajay; Singh, Varoon; Dubey, D K; Pardasani, Deepak
2013-10-11
The present study deals with the on-matrix derivatization-extraction of aminoalcohols and thiodiglycols, which are important precursors and/or degradation products of the VX-analogue and vesicant classes of chemical warfare agents (CWAs). The method involved hexamethyldisilazane (HMDS) mediated in situ silylation of analytes on the soil. Subsequent extraction and gas chromatography-mass spectrometry analysis of the derivatized analytes offered better recoveries in comparison to the procedure recommended by the Organisation for the Prohibition of Chemical Weapons (OPCW). Various experimental conditions such as extraction solvent, reagent and catalyst amount, and reaction time and temperature were optimized. The best recoveries of analytes, ranging from 45% to 103%, were obtained with DCM solvent containing 5% v/v HMDS and 0.01% w/v iodine as catalyst. The limits of detection (LOD) and limits of quantification (LOQ) for the selected analytes ranged from 8 to 277 and 21 to 665 ng mL(-1), respectively, in selected ion monitoring mode. Copyright © 2013 Elsevier B.V. All rights reserved.
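LOD/LOQ figures like those above are conventionally estimated from the calibration slope and the blank (or residual) standard deviation. A generic ICH/IUPAC-style sketch follows; the input numbers are illustrative, not taken from this study.

```python
def lod_loq(blank_sd, slope):
    # common calibration-based estimates:
    #   LOD ~ 3.3 * sigma_blank / slope,  LOQ ~ 10 * sigma_blank / slope
    lod = 3.3 * blank_sd / slope
    loq = 10.0 * blank_sd / slope
    return lod, loq

# e.g. blank noise of 2.0 signal units, calibration slope of 1.0 per ng/mL
lod, loq = lod_loq(blank_sd=2.0, slope=1.0)   # 6.6 and 20.0 ng/mL
```

As the article in this collection on the use and abuse of LoDs stresses, such figures are only comparable across methods when the underlying inputs (blank variability, slope, sampling parameters) are stated explicitly.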
Bagnasco, Lucia; Cosulich, M Elisabetta; Speranza, Giovanna; Medini, Luca; Oliveri, Paolo; Lanteri, Silvia
2014-08-15
The relationships between sensory attributes and analytical measurements, performed by electronic tongue (ET) and near-infrared spectroscopy (NIRS), were investigated in order to develop a rapid method for the assessment of umami taste. Commercially available umami products and some amino acids were submitted to sensory analysis. The results were analysed in comparison with the outcomes of the analytical measurements. Multivariate exploratory analysis was performed by principal component analysis (PCA). Calibration models for prediction of umami taste on the basis of ET and NIR signals were obtained using partial least squares (PLS) regression. Different approaches for merging data from the two analytical instruments were considered. Both techniques were shown to provide information related to umami taste; in particular, ET signals showed the higher correlation with the umami attribute. Data fusion was found to be only slightly beneficial, not so significantly as to justify the coupled use of the two analytical techniques. Copyright © 2014 Elsevier Ltd. All rights reserved.
Analytical method for the effects of the asteroid belt on planetary orbits
NASA Technical Reports Server (NTRS)
Mayo, A. P.
1979-01-01
Analytic expressions are derived for the perturbation of planetary orbits due to a thick constant-density asteroid belt. The derivations include extensions and adaptations of Plakhov's (1968) analytic expressions for the perturbations in five of the orbital elements for closed orbits around Saturn's rings. The equations of Plakhov are modified to include the effect of ring thickness, and additional equations are derived for the perturbations in the sixth orbital element, the mean anomaly. The gravitational potential and orbital perturbations are derived for the asteroid belt with and without thickness, and for a hoop approximation to the belt. The procedures are also applicable to Saturn's rings and the newly discovered rings of Uranus. The effects of the asteroid belt thickness on the gravitational potential coefficients and the orbital motions are demonstrated. Comparisons between the Mars orbital perturbations obtained by using the analytic expressions and those obtained by numerical integration are discussed. The effects of the asteroid belt on earth-based ranging to Mars are also demonstrated.
Calculation of Thermally-Induced Displacements in Spherically Domed Ion Engine Grids
NASA Technical Reports Server (NTRS)
Soulas, George C.
2006-01-01
An analytical method for predicting the thermally-induced normal and tangential displacements of spherically domed ion optics grids under an axisymmetric thermal loading is presented. A fixed edge support that could be thermally expanded is used for this analysis. Equations for the displacements both normal and tangential to the surface of the spherical shell are derived. A simplified equation for the displacement at the center of the spherical dome is also derived. The effects of plate perforation on displacements and stresses are determined by modeling the perforated plate as an equivalent solid plate with modified, or effective, material properties. Analytical model results are compared to the results from a finite element model. For the solid shell, comparisons showed that the analytical model produces results that closely match the finite element model results. The simplified equation for the normal displacement of the spherical dome center is also found to accurately predict this displacement. For the perforated shells, the analytical solution and simplified equation produce accurate results for materials with low thermal expansion coefficients.
Coyle, Whitney L; Guillemain, Philippe; Kergomard, Jean; Dalmont, Jean-Pierre
2015-11-01
When designing a wind instrument such as a clarinet, it can be useful to be able to predict the playing frequencies. This paper presents an analytical method to deduce these playing frequencies using the input impedance curve. Specifically there are two control parameters that have a significant influence on the playing frequency, the blowing pressure and reed opening. Four effects are known to alter the playing frequency and are examined separately: the flow rate due to the reed motion, the reed dynamics, the inharmonicity of the resonator, and the temperature gradient within the clarinet. The resulting playing frequencies for the first register of a particular professional level clarinet are found using the analytical formulas presented in this paper. The analytical predictions are then compared to numerically simulated results to validate the prediction accuracy. The main conclusion is that in general the playing frequency decreases above the oscillation threshold because of inharmonicity, then increases above the beating reed regime threshold because of the decrease of the flow rate effect.
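A zeroth-order starting point for the prediction described above is to read the strongest peak off the input impedance curve; the paper's analytical corrections (reed flow, reed dynamics, resonator inharmonicity, temperature gradient) then shift this value with blowing pressure and reed opening. The sketch below uses a synthetic two-resonance impedance magnitude with invented frequencies and widths, not measured clarinet data.

```python
import numpy as np

freq = np.linspace(100.0, 1000.0, 9001)   # Hz, 0.1 Hz grid
# synthetic |Z|: two Lorentzian-like resonances, fundamental dominant
# (resonance frequencies and damping are hypothetical)
z = 2.0 / np.abs(freq - 147.0 + 5.0j) + 1.0 / np.abs(freq - 445.0 + 5.0j)

# zeroth-order playing-frequency estimate: location of the strongest peak
f_play0 = freq[np.argmax(z)]
```

In the paper's framework this estimate is then perturbed analytically: inharmonicity pulls the frequency down just above the oscillation threshold, while the vanishing flow-rate effect pushes it back up beyond the beating-reed threshold.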
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baxter, V.D.; Chen, T.D.; Conklin, J.C.
1998-11-15
Analytical solutions of heat exchanger effectiveness for four-row crossflow, cross-counterflow, and cross-parallelflow arrangements have been derived in a recent study. The main objective of this study is to investigate the effect of heat exchanger flow configuration on thermal performance with refrigerant mixtures. The difference in heat exchanger effectiveness for all flow arrangements relative to an analytical many-row solution has been analyzed. A comparison of four-row cross-counterflow heat exchanger effectiveness between analytical solutions and experimental data with water, R-22, and R-410A is presented.
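The four-row analytical solutions themselves are not reproduced in the abstract, but the effectiveness comparison it describes can be illustrated with the classical single-pass ε-NTU expressions (standard textbook forms, not the study's derivations):

```python
import math

def eps_crossflow_both_unmixed(ntu, cr):
    # widely used closed-form approximation for single-pass crossflow
    # with both fluids unmixed; cr = Cmin/Cmax
    return 1.0 - math.exp((ntu**0.22 / cr) * (math.exp(-cr * ntu**0.78) - 1.0))

def eps_counterflow(ntu, cr):
    # exact counterflow result, the upper bound among flow arrangements
    if abs(cr - 1.0) < 1e-12:
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)
```

At the same NTU and capacity ratio, counterflow effectiveness bounds crossflow from above (e.g. at NTU = 2, Cr = 0.5: about 0.77 versus 0.74), which is the kind of configuration gap the study quantifies for refrigerant mixtures.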
Pourkhalili, Azin; Mirlohi, Maryam; Rahimi, Ebrahim
2013-01-01
Lamb meat is regarded as an important source of highly bioavailable iron (heme iron) in the Iranian diet. The main objective of this study was to evaluate the effect of traditional cooking methods on iron changes in lamb meat. Four published experimental methods for the determination of heme iron were assessed analytically and statistically. Samples were selected from lambs' loin. Standard (AOAC) methods were used for proximate analysis. For heme iron, the results of the four experimental methods were compared with regard to their agreement with the Ferrozine method, which was used for the determination of nonheme iron. Among the three cooking methods, the lowest total iron and heme iron were found with boiling. The proportions of heme iron to total iron in raw, boiled, and grilled lamb meat were 65.70%, 67.75%, and 76.01%, respectively. For heme iron measurement, the comparison of the methods showed that the method in which the heme extraction solution was composed of 90% acetone, 18% water, and 2% hydrochloric acid was more appropriate and better correlated with the heme iron content calculated as the difference between total iron and nonheme iron. PMID:23737716
Ribozyme-mediated signal augmentation on a mass-sensitive biosensor.
Knudsen, Scott M; Lee, Joonhyung; Ellington, Andrew D; Savran, Cagri A
2006-12-20
Mass-based detection methods such as the quartz crystal microbalance (QCM) offer an attractive option to label-based methods; however, the sensitivity is generally lower by comparison. In particular, low-molecular-weight analytes can be difficult to detect based on mass addition alone. In this communication, we present the use of effector-dependent ribozymes (aptazymes) as reagents for augmenting small-ligand detection on a mass-sensitive device. Two distinct aptazymes were chosen: an L1-ligase-based aptazyme (L1-Rev), which is activated by a small peptide (MW approximately 2.4 kDa) from the HIV-1 Rev protein, and a hammerhead cleavase-based aptazyme (HH-theo3) activated by theophylline (MW = 180 Da). Aptazyme activity was observed in real time, and low-molecular-weight analyte detection has been successfully demonstrated with both aptazymes.
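The mass-sensitivity limitation that motivates the aptazyme amplification can be made concrete with the Sauerbrey relation for a QCM. The quartz constants below are the standard values for AT-cut quartz; the crystal frequency, area, and added mass are illustrative, not the paper's device.

```python
def sauerbrey_df(f0_hz, delta_m_g, area_cm2):
    # Sauerbrey: delta_f = -2 * f0^2 * dm / (A * sqrt(rho_q * mu_q))
    rho_q = 2.648        # g/cm^3, density of quartz
    mu_q = 2.947e11      # g/(cm*s^2), shear modulus of AT-cut quartz
    return -2.0 * f0_hz**2 * delta_m_g / (area_cm2 * (rho_q * mu_q) ** 0.5)

# 1 ng added to a 1 cm^2, 5 MHz crystal shifts the resonance by only
# about -0.057 Hz, illustrating why the direct binding of a 180 Da
# ligand like theophylline is hard to resolve by mass alone
df = sauerbrey_df(5e6, 1e-9, 1.0)
```

Aptazyme cleavage or ligation converts the small-ligand binding event into release or capture of a much larger RNA mass, multiplying the frequency shift per recognition event.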
A finite element study of the EIDI system. [Electro-Impulse De-Icing System
NASA Technical Reports Server (NTRS)
Khatkhate, A. A.; Scavuzzo, R. J.; Chu, M. L.
1988-01-01
This paper presents a method for modeling the structural dynamics of an electro-impulse de-icing system using finite element analysis procedures. A guideline for building a representative finite element model is discussed. Modeling was done initially using four-noded cubic elements, four-noded isoparametric plate elements, and eight-noded isoparametric shell elements. Because of the size of the problem, and because the shear stress results were underestimated compared to previous analytical work, an approximate model was created to predict possible areas of ice shedding. There appears to be good agreement with the test data provided by the Boeing Commercial Airplane Company, so these initial results of the method are encouraging. Additional analytical work and comparison with experiment are needed in order to evaluate this approach completely.
Sert, Şenol
2013-07-01
A comparison of methods for the determination (without sample pre-concentration) of uranium in ore by inductively coupled plasma optical emission spectrometry (ICP-OES) has been performed. The experiments were conducted using three procedures, matrix matching, plasma optimization, and internal standardization, for three emission lines of uranium. Three wavelengths of Sm were tested as the internal standard for the internal standardization method. Robust conditions were evaluated by varying the applied radiofrequency power, nebulizer argon gas flow rate, and sample uptake flow rate, using the intensity ratio of the Mg(II) 280.270 nm and Mg(I) 285.213 nm lines as the criterion. Analytical characterization of the method was assessed by limit of detection and relative standard deviation values. The certified reference soil sample IAEA S-8 was analyzed, and uranium determination at 367.007 nm with internal standardization using Sm at 359.260 nm was shown to improve accuracy compared with the other methods. The developed method was used for the analysis of a real uranium ore sample.
Tschandl, P; Kittler, H; Schmid, K; Zalaudek, I; Argenziano, G
2015-06-01
There are two strategies for approaching the dermatoscopic diagnosis of pigmented skin tumours, namely the verbal-based analytic method and the more visual-global heuristic method. It is not known whether one or the other is more efficient for teaching dermatoscopy. To compare the two teaching methods in short-term training of dermatoscopy to medical students, fifty-seven medical students in the last year of the curriculum were given a 1-h lecture based on either the heuristic or the analytic teaching of dermatoscopy. Before and after this session, they were shown the same 50 lesions and asked to diagnose them and rate the chance of malignancy. Test lesions consisted of melanomas, basal cell carcinomas, nevi, seborrhoeic keratoses, benign vascular tumours, and dermatofibromas. Performance measures were diagnostic accuracy regarding malignancy, as measured by the area under the receiver operating characteristic curve (range: 0-1), and per cent correct diagnoses (range: 0-100%). Diagnostic accuracy and per cent correct diagnoses increased by +0.21 and +32.9% (heuristic teaching) and +0.19 and +35.7% (analytic teaching), respectively (P for all <0.001). Neither for diagnostic accuracy (P = 0.585) nor for per cent correct diagnoses (P = 0.298) was there a difference between the two groups. Short-term training of dermatoscopy for medical students allows significant improvement in diagnostic abilities; choosing a heuristic or analytic method does not influence this effect in short training using common pigmented skin lesions. © 2014 European Academy of Dermatology and Venereology.
Uzoka, Faith-Michael Emeka; Obot, Okure; Barker, Ken; Osuji, J
2011-07-01
The task of medical diagnosis is a complex one, considering the level of vagueness and uncertainty involved, especially when the disease has multiple symptoms. A number of researchers have utilized the fuzzy analytic hierarchy process (fuzzy-AHP) methodology to handle imprecise data in medical diagnosis and therapy. Fuzzy logic is able to handle vagueness and unstructuredness in decision making, while the AHP has the ability to carry out pairwise comparison of decision elements in order to determine their importance in the decision process. This study attempts a case comparison of the fuzzy and AHP methods in the development of a medical diagnosis system involving basic symptom elicitation and analysis. The results indicate a non-statistically-significant relative superiority of the fuzzy technology over the AHP technology. Data collected from 30 malaria patients were used for diagnosis using AHP and fuzzy logic independently of one another. The results were compared and found to covary strongly; the fuzzy logic diagnoses covaried somewhat more strongly with the conventional diagnosis results than did those of AHP. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
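The AHP side of the comparison above rests on deriving priority weights from a pairwise comparison matrix, conventionally via its principal eigenvector. A minimal power-iteration sketch follows; the 3x3 matrix is a toy, perfectly consistent example, not the study's symptom data.

```python
import numpy as np

def ahp_weights(M, iters=200):
    # principal eigenvector of the pairwise comparison matrix,
    # normalized to sum to 1, computed by power iteration
    w = np.full(len(M), 1.0 / len(M))
    for _ in range(iters):
        w = M @ w
        w /= w.sum()
    return w

# consistent 3-criteria matrix encoding importance weights 0.6 : 0.3 : 0.1
# (entry M[i][j] is the judged ratio w_i / w_j)
M = np.array([[1.0, 2.0, 6.0],
              [0.5, 1.0, 3.0],
              [1/6, 1/3, 1.0]])
w = ahp_weights(M)   # recovers [0.6, 0.3, 0.1]
```

Real elicited matrices are rarely perfectly consistent; AHP practice also checks a consistency ratio from the principal eigenvalue before trusting the weights, which is where the vagueness handling of the fuzzy variant comes in.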
Kadavilpparampu, Afsal Mohammed; Al Lawati, Haider A J; Suliman, Fakhr Eldin O
2017-08-05
For the first time, the analytical figures of merit and detection capabilities of the little-explored photoinduced chemical oxidation method for Ru(bpy)₃²⁺ CL have been investigated in detail using 32 structurally different analytes. The reaction was carried out on-chip using peroxydisulphate and visible light and compared with the well-known direct chemical oxidation approach using Ce(IV). The analytes belong to various chemical classes, including tertiary amines, secondary amines, sulphonamides, betalactams, thiols and benzothiadiazines. The influence of the detection environment on CL emission with respect to the method of oxidation was evaluated by changing the buffers and pH. Photoinduced chemical oxidation proved more universal for Ru(bpy)₃²⁺ CL detection of the selected analytes: no additional enhancers, reagents, or modifications to the instrumental configuration were required, and wide detectability and enhanced emission were observed for analytes from all the chemical classes. Some of these analytes, including compounds from the sulphonamide, betalactam, thiol and benzothiadiazine classes, are reported under photoinduced chemical oxidation for the first time. On the other hand, many of the selected analytes, including tertiary and secondary amines such as cetirizine, azithromycin, fexofenadine and proline, did not produce any analytically useful CL signal (S/N = 3 or above for 1 μg mL⁻¹ analyte) under chemical oxidation. The most striking observations were in the detection limits; for example, the ofloxacin signal was 15 times more intense, with a detection limit of 5.81 × 10⁻¹⁰ M compared with the lowest previously reported value of 6 × 10⁻⁹ M. Penicillamine had earlier been detected at 0.1 μg mL⁻¹ after derivatization using photoinduced chemical oxidation, but in this study we improved this to 5.82 ng mL⁻¹ without any prior derivatization. 
The detection limits of many other analytes were also improved by several orders of magnitude under photoinduced chemical oxidation. Copyright © 2017 Elsevier B.V. All rights reserved.
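The S/N = 3 criterion quoted above corresponds to the common 3σ definition of a detection limit. A minimal sketch, with invented blank readings and an assumed calibration slope rather than values from the paper:

```python
import statistics

# Hypothetical blank signals (counts) and calibration slope (counts per M);
# both are illustrative assumptions, not the paper's data.
blank_signals = [102.0, 98.5, 101.2, 99.8, 100.4, 97.9, 100.9]
slope = 1.7e11

sigma_blank = statistics.stdev(blank_signals)
lod = 3 * sigma_blank / slope  # detection limit in mol/L (3-sigma criterion)
print(f"LOD = {lod:.2e} M")
```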
Rapid and sensitive analytical method for monitoring of 12 organotin compounds in natural waters.
Vahčič, Mitja; Milačič, Radmila; Sčančar, Janez
2011-03-01
A rapid analytical method for the simultaneous determination of 12 different organotin compounds (OTC), namely methyl-, butyl-, phenyl- and octyl-tins, in natural water samples was developed. It comprises in situ derivatisation (using NaBEt4) of OTC in a salty or fresh water sample matrix adjusted to pH 6 with Tris-citrate buffer, extraction of the ethylated OTC into hexane, separation of the OTC in the organic phase on a 15 m GC column, and subsequent quantitative determination of the separated OTC by ICP-MS. To optimise the pH of ethylation, phosphate, carbonate and Tris-citrate buffers were investigated as alternatives to the commonly applied sodium acetate - acetic acid buffer. The ethylation yields in Tris-citrate buffer were found to be better for TBT, MOcT and DOcT than in the commonly used acetate buffer. Iso-octane and hexane were examined as organic phases for the extraction of ethylated OTC; the advantage of hexane was that it enabled quantitative determination of TMeT. A 15 m GC column was used for the separation of the studied OTC under the optimised separation conditions, and its performance was compared to that of a 30 m column. The analytical method developed enables sensitive simultaneous determination of 12 different OTC and appreciably shortens the analysis time for larger series of water samples. LODs obtained for the newly developed method ranged from 0.05-0.06 ng Sn L⁻¹ for methyl-, 0.11-0.45 ng Sn L⁻¹ for butyl-, 0.11-0.16 ng Sn L⁻¹ for phenyl-, and 0.07-0.10 ng Sn L⁻¹ for octyl-tins. By applying the developed analytical method, marine water samples from the Northern Adriatic Sea, containing mainly butyl- and methyl-tin species, were analysed to confirm the method's applicability.
Ichihara, Kiyoshi; Ozarda, Yesim; Barth, Julian H; Klee, George; Qiu, Ling; Erasmus, Rajiv; Borai, Anwar; Evgina, Svetlana; Ashavaid, Tester; Khan, Dilshad; Schreier, Laura; Rolle, Reynan; Shimizu, Yoshihisa; Kimura, Shogo; Kawano, Reo; Armbruster, David; Mori, Kazuo; Yadav, Binod K
2017-04-01
The IFCC Committee on Reference Intervals and Decision Limits coordinated a global multicenter study on reference values (RVs) to explore rational and harmonizable procedures for the derivation of reference intervals (RIs) and to investigate the feasibility of sharing RIs through evaluation of sources of variation of RVs on a global scale. For the common protocol, rather lenient criteria for reference individuals were adopted to facilitate harmonized recruitment, with planned use of the latent abnormal values exclusion (LAVE) method. As of July 2015, 12 countries had completed their study with a total recruitment of 13,386 healthy adults. 25 analytes were measured chemically and 25 immunologically. A serum panel with assigned values was measured by all laboratories. RIs were derived by parametric and nonparametric methods. The effect of the LAVE method is prominent for analytes that reflect nutritional status, inflammation and muscular exertion, indicating that inappropriate results are frequent in every country. The validity of the parametric method was confirmed by the presence of analyte-specific distribution patterns and successful Gaussian transformation using the modified Box-Cox formula in all countries. After successful alignment of RVs based on the panel test results, nearly half of the analytes showed variable degrees of between-country differences. This finding, however, requires confirmation after adjusting for BMI and other sources of variation. The results are reported in the second part of this paper. The collaborative study enabled us to evaluate rational methods for deriving RIs and to compare the RVs based on real-world datasets obtained in a harmonized manner. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
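The parametric derivation described above (Gaussian transformation, then a central 95% interval) can be sketched as follows; this is an illustrative example on synthetic skewed data with a plain Box-Cox transform, not the committee's modified formula or its datasets:

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

# Synthetic right-skewed "reference values" stand in for real data.
rng = np.random.default_rng(0)
rv = rng.lognormal(mean=3.0, sigma=0.35, size=500)

# Box-Cox transform to near-Gaussian, central 95% via mean +/- 1.96 SD,
# then back-transform to the original measurement units.
transformed, lam = stats.boxcox(rv)
m, s = transformed.mean(), transformed.std(ddof=1)
lo = inv_boxcox(m - 1.96 * s, lam)
hi = inv_boxcox(m + 1.96 * s, lam)
print(f"RI: {lo:.1f} to {hi:.1f} (lambda = {lam:.2f})")
```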
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carter, Kimberly E; Gerdes, Kirk
2013-07-01
A new and complete GC–ICP-MS method is described for direct analysis of trace metals in a gas-phase process stream. The proposed method is derived from standard analytical procedures developed for ICP-MS, which are regularly exercised in standard ICP-MS laboratories. In order to implement the method, a series of empirical factors were generated to calibrate detector response with respect to a known concentration of an internal standard analyte. Calibrated responses are ultimately used to determine the concentration of metal analytes in a gas stream using a semi-quantitative algorithm. The method was verified using a traditional gas injection from a GC sampling valve and a standard gas mixture containing either a 1 ppm Xe + Kr mix with helium balance or 100 ppm Xe with helium balance. Data collected for Xe and Kr gas analytes revealed that agreement of 6–20% with the actual concentration can be expected for various experimental conditions. To demonstrate the method using a relevant "unknown" gas mixture, experiments were performed for continuous 4 and 7 hour periods using a Hg-containing sample gas that was co-introduced into the GC sample loop with the xenon gas standard. System performance and detector response to the dilute concentration of the internal standard were pre-determined, which allowed semi-quantitative evaluation of the analyte. The calculated analyte concentrations varied during the course of the 4 hour experiment, particularly during the first hour of the analysis, where the actual Hg concentration was underpredicted by up to 72%. The calculated concentration improved to within 30–60% for data collected after the first hour of the experiment. Similar results were seen during the 7 hour test, with the deviation from the actual concentration being 11–81% during the first hour and then decreasing for the remaining period. 
The method detection limit (MDL) was determined for mercury by injecting the sample gas into the system following a period of equilibration. The MDL for Hg was calculated as 6.8 μg·m⁻³. This work describes the first complete GC–ICP-MS method to directly analyze gas-phase samples, and detailed sample calculations and comparisons to conventional ICP-MS methods are provided.
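An MDL of the kind quoted above is conventionally computed from replicate low-level measurements as t(n-1, 0.99) times their standard deviation. A sketch with invented replicate values, not the paper's measurements:

```python
import statistics
from scipy import stats

# Hypothetical replicate analyses of a low-level Hg spike, in ug/m^3.
replicates = [6.1, 5.4, 6.8, 5.9, 6.3, 5.7, 6.5]

s = statistics.stdev(replicates)
t99 = stats.t.ppf(0.99, df=len(replicates) - 1)  # one-sided 99% Student t
mdl = t99 * s
print(f"MDL = {mdl:.2f} ug/m^3")
```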
Padé Approximant and Minimax Rational Approximation in Standard Cosmology
NASA Astrophysics Data System (ADS)
Zaninetti, Lorenzo
2016-02-01
The luminosity distance in the standard cosmology as given by ΛCDM, and consequently the distance modulus for supernovae, can be defined by the Padé approximant. A comparison with a known analytical solution shows that the Padé approximant for the luminosity distance has an error of 4% at redshift z = 10. A similar procedure for the Taylor expansion of the luminosity distance gives an error of 4% at redshift z = 0.7; this means that for the luminosity distance, the Padé approximation is superior to the Taylor series. The availability of an analytical expression for the distance modulus allows applying the Levenberg-Marquardt method to derive the fundamental parameters from the available compilations for supernovae. A new luminosity function for galaxies derived from the truncated gamma probability density function models the observed luminosity function for galaxies when the observed range in absolute magnitude is modeled by the Padé approximant. A comparison of ΛCDM with other cosmologies is done adopting a statistical point of view.
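The abstract's central point, that a Padé approximant can remain accurate far beyond the radius where the same-order Taylor series fails, can be seen on a toy function; the example below uses ln(1+x) rather than the paper's luminosity distance:

```python
import numpy as np
from scipy.interpolate import pade

# Degree-6 Maclaurin coefficients of ln(1 + x): 0, 1, -1/2, 1/3, ...
coeffs = [0.0] + [(-1.0) ** (k + 1) / k for k in range(1, 7)]
p, q = pade(coeffs, 3)  # [3/3] Pade approximant built from the same coefficients

x = 3.0  # well outside the series' radius of convergence |x| < 1
exact = np.log(1 + x)
taylor = sum(c * x ** k for k, c in enumerate(coeffs))
pade_val = p(x) / q(x)

err_taylor = abs(taylor - exact) / exact  # huge: the truncated series diverges
err_pade = abs(pade_val - exact) / exact  # small: sub-percent relative error
print(err_taylor, err_pade)
```

The same mechanism underlies the 4% at z = 10 versus 4% at z = 0.7 comparison quoted above.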
Sensory-Analytical Comparison of the Aroma of Different Horseradish Varieties (Armoracia rusticana)
Kroener, Eva-Maria; Buettner, Andrea
2018-01-01
Horseradish (Armoracia rusticana) is consumed and valued in many countries all over the world for the characteristic spicy aroma of its roots. In our present study we compare six different horseradish varieties that were grown under comparable conditions with regard to their aroma profiles, using combined sensory-analytical methods. Horseradish extracts were analyzed by gas chromatography-olfactometry (GC-O) and their aroma-active compounds ranked according to their smell potency using the concept of aroma extract dilution analysis (AEDA). Identification was carried out by comparison of retention indices, odor qualities and mass spectra with those of reference substances. Besides some differences in relative ratios, we observed some main odorants common to all varieties, such as 3-isopropyl-2-methoxypyrazine and allyl isothiocyanate, but also variety-specific characteristics, such as a higher content of 3-isopropyl-2-methoxypyrazine in the variety Nyehemes. Moreover, three odorous compounds were detected that have not been described in horseradish roots before. PMID:29868555
NASA Astrophysics Data System (ADS)
Xu, Jing; Liu, Xiaofei; Wang, Yutian
2016-08-01
Parallel factor analysis is a widely used method for extracting qualitative and quantitative information about the analyte of interest from a fluorescence excitation-emission matrix containing unknown components. Large-amplitude scattering signals influence the results of parallel factor analysis, and many methods for eliminating scattering have been proposed, each with its own advantages and disadvantages. Here, the combination of symmetrical subtraction and interpolated values is discussed, where "combination" refers both to combining results and to combining methods. Nine methods were used for comparison. The results show that the combination of results yields better concentration predictions for all components.
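One of the scatter-handling ingredients mentioned above, filling the scatter band with interpolated values, can be sketched as follows; the EEM is random toy data and the 10 nm half-width is an assumption, not a value from the paper:

```python
import numpy as np

# Toy excitation-emission matrix (EEM); rows = excitation, cols = emission.
ex = np.arange(250, 401, 10)   # excitation wavelengths, nm
em = np.arange(250, 551, 2)    # emission wavelengths, nm
eem = np.random.default_rng(1).random((ex.size, em.size))

# Excise the band around first-order Rayleigh scatter (emission ~ excitation)
# and fill it by linear interpolation along each emission spectrum.
half_width = 10.0  # nm, assumed scatter-band half-width
for i, ex_wl in enumerate(ex):
    mask = np.abs(em - ex_wl) <= half_width
    eem[i, mask] = np.interp(em[mask], em[~mask], eem[i, ~mask])
```

The cleaned EEM can then be decomposed by PARAFAC without the scatter ridge distorting the trilinear model.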
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Y. B.; Zhu, X. W., E-mail: xiaowuzhu1026@znufe.edu.cn; Dai, H. H.
Though widely used in modelling nano- and micro-structures, Eringen’s differential model shows some inconsistencies, and a recent study has demonstrated its differences from the integral model, which implies the necessity of using the latter. In this paper, an analytical study is undertaken of the static bending of nonlocal Euler-Bernoulli beams using Eringen’s two-phase local/nonlocal model. Firstly, a reduction method is proved rigorously, with which the integral equation in consideration can be reduced to a differential equation with mixed boundary value conditions. Then, the static bending problem is formulated and four types of boundary conditions with various loadings are considered. By solving the corresponding differential equations, exact solutions are obtained explicitly in all of the cases, especially for the paradoxical cantilever beam problem. Finally, asymptotic analysis of the exact solutions reveals clearly that, unlike the differential model, the integral model adopted herein has a consistent softening effect. Comparisons are also made with existing analytical and numerical results, which further shows the advantages of the analytical results obtained. Additionally, it seems that the once controversial nonlocal bar problem in the literature is well resolved by the reduction method.
Microstructured optical fibers for terahertz waveguiding regime by using an analytical field model
NASA Astrophysics Data System (ADS)
Sharma, Dinesh Kumar; Sharma, Anurag; Tripathi, Saurabh Mani
2017-12-01
Microstructured optical fibers (MOFs) are seen as novel optical waveguides for potential applications in the terahertz (THz) band, as they provide a flexible route towards THz waveguiding. Using the analytical field model (Sharma et al., 2014) developed for index-guiding MOFs with a hexagonal lattice of circular air-holes in the photonic crystal cladding, we study propagation characteristics such as the effective index, the near- and far-field radiation patterns and their evolution from the near- to the far-field domain, spot size, effective mode area, and numerical aperture in the THz regime. Further, we present an analytical field expression for the next higher-order mode of the MOF for studying the modal properties at terahertz frequencies. We also investigate the mode cut-off conditions to identify the single-mode operation range at THz frequencies. Emphasis is put on studying the coupling characteristics of MOF geometries for efficient mode coupling. Comparisons with available experimental and numerical simulation results, e.g., those based on the full-vector finite element method (FEM) and the finite-difference frequency-domain (FDFD) method, are included.
Anumol, Tarun; Lehotay, Steven J; Stevens, Joan; Zweigenbaum, Jerry
2017-04-01
Veterinary drug residues in animal-derived foods must be monitored to ensure food safety, verify proper veterinary practices, enforce legal limits in domestic and imported foods, and for other purposes. A common goal in drug residue analysis in foods is to achieve acceptable monitoring results for as many analytes as possible, with higher priority given to the drugs of most concern, in an efficient and robust manner. The U.S. Department of Agriculture has implemented a multiclass, multi-residue method based on sample preparation using dispersive solid phase extraction (d-SPE) for cleanup and ultrahigh-performance liquid chromatography-tandem quadrupole mass spectrometry (UHPLC-QQQ) for analysis of >120 drugs at regulatory levels of concern in animal tissues. Recently, a new cleanup product called "enhanced matrix removal for lipids" (EMR-L) was commercially introduced that used a unique chemical mechanism to remove lipids from extracts. Furthermore, high-resolution quadrupole-time-of-flight (Q/TOF) for (U)HPLC detection often yields higher selectivity than targeted QQQ analyzers while allowing retroactive processing of samples for other contaminants. In this study, the use of both d-SPE and EMR-L sample preparation and UHPLC-QQQ and UHPLC-Q/TOF analysis methods for shared spiked samples of bovine muscle, kidney, and liver was compared. The results showed that the EMR-L method provided cleaner extracts overall and improved results for several anthelmintics and tranquilizers compared to the d-SPE method, but the EMR-L method gave lower recoveries for certain β-lactam antibiotics. QQQ vs. Q/TOF detection showed similar mixed performance advantages depending on analytes and matrix interferences, with an advantage to Q/TOF for greater possible analytical scope and non-targeted data collection. 
Either combination of approaches may be used to meet monitoring purposes, with an edge in efficiency to d-SPE, but greater instrument robustness and fewer matrix effects when analyzing EMR-L extracts. Graphical abstract: Comparison of cleanup methods in the analysis of veterinary drug residues in bovine tissues.
Jakubowska, Natalia; Beldì, Giorgia; Peychès Bach, Aurélie; Simoneau, Catherine
2014-01-01
This paper presents the outcome of the development, optimisation and validation at European Union level of an analytical method for using poly(2,6-diphenyl phenylene oxide) (PPPO), stipulated in Regulation (EU) No. 10/2011 as food simulant E, for testing specific migration from plastics into dry foodstuffs. Two methods for fortifying PPPO and a low-density polyethylene (LDPE) film, respectively, with surrogate substances relevant to food contact were developed. A protocol for cleaning the PPPO and an efficient analytical method were developed for the quantification of butylhydroxytoluene (BHT), benzophenone (BP), diisobutylphthalate (DiBP), bis(2-ethylhexyl) adipate (DEHA) and 1,2-cyclohexanedicarboxylic acid, diisononyl ester (DINCH) from PPPO. A protocol for a migration test from plastics using small migration cells was also developed. The method was validated by an inter-laboratory comparison (ILC) with 16 national reference laboratories for food contact materials in the European Union. This allowed data to be obtained for the first time on the precision and laboratory performance of both migration and quantification. The results showed that the validation ILC was successful, even taking into account the complexity of the exercise. The method performance was 7-9% repeatability standard deviation (rSD) for most substances (regardless of concentration), with 12% rSD for the high level of BHT and for DiBP at very low levels. The reproducibility standard deviation results for the 16 European Union laboratories were in the range of 20-30% for quantification from PPPO (for the three concentration levels of the five substances) and 15-40% for migration experiments from the fortified plastic at 60°C for 10 days with subsequent quantification. 
Considering the lack of data previously available in the literature, this work has demonstrated that the validation of a method is possible both for migration from a film and for quantification into a corresponding simulant for specific migration.
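The repeatability and reproducibility rSDs reported above are the two ISO 5725-style precision components of an inter-laboratory comparison. A sketch of how they are computed, using invented duplicate results from five hypothetical laboratories:

```python
import numpy as np

# Invented duplicate results (e.g. mg/kg) from five hypothetical labs.
labs = np.array([[9.8, 10.1],
                 [10.4, 10.6],
                 [9.5, 9.7],
                 [10.0, 10.3],
                 [10.9, 10.7]])

lab_means = labs.mean(axis=1)
# Pooled within-lab (repeatability) variance: 1 degree of freedom per lab.
s_r2 = ((labs - lab_means[:, None]) ** 2).sum() / labs.shape[0]
# Between-lab variance component (clipped at zero).
s_L2 = max(lab_means.var(ddof=1) - s_r2 / labs.shape[1], 0.0)
s_r, s_R = np.sqrt(s_r2), np.sqrt(s_L2 + s_r2)

rsd_r = 100 * s_r / labs.mean()   # repeatability rSD, %
rsd_R = 100 * s_R / labs.mean()   # reproducibility rSD, %
print(round(rsd_r, 1), round(rsd_R, 1))
```

Reproducibility always includes the between-lab component, which is why the 20-40% figures above exceed the 7-12% repeatability figures.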
NASA Astrophysics Data System (ADS)
Jahnke, Annika; Barber, Jonathan L.; Jones, Kevin C.; Temme, Christian
A method intercomparison study of analytical methods for the determination of neutral, volatile polyfluorinated alkyl substances (PFAS) was carried out in March 2006. Environmental air samples were collected in triplicate at the European background site Mace Head on the west coast of Ireland, a site dominated by 'clean' westerly winds coming across the Atlantic. Extraction and analysis were performed at two laboratories active in PFAS research using their in-house methods. Airborne polyfluorinated telomer alcohols (FTOHs), fluorooctane sulfonamides and sulfonamidoethanols (FOSAs/FOSEs), as well as additional polyfluorinated compounds, were investigated. Different native and isotope-labelled internal standards (IS) were applied at various steps in the analytical procedure to evaluate the different quantification strategies. Field blanks revealed no major blank problems. European background concentrations observed at Mace Head were found to be in a similar range to Arctic data reported in the literature. Due to trace levels at the remote site, only the FTOH data sets were complete and could therefore be compared between the laboratories; FOSEs could partly be included. Data comparison revealed that, despite the challenges inherent in the analysis of airborne PFAS and the low concentrations, all methods applied in this study obtained similar results. However, application of isotope-labelled IS early in the analytical procedure leads to more precise results and is therefore recommended.
Prediction of the thermal environment and thermal response of simple panels exposed to radiant heat
NASA Technical Reports Server (NTRS)
Turner, Travis L.; Ash, Robert L.
1989-01-01
A method of predicting the radiant heat flux distribution produced by a bank of tubular quartz heaters was applied to a radiant system consisting of a single unreflected lamp irradiating a flat metallic incident surface. In this manner, the method was experimentally verified for various radiant system parameter settings and used as a source of input for a finite element thermal analysis. Two finite element thermal analyses were applied to a thermal system consisting of a thin metallic panel exposed to radiant surface heating. A two-dimensional steady-state finite element thermal analysis algorithm, based on Galerkin's Method of Weighted Residuals (GFE), was formulated specifically for this problem and was used in comparison to the thermal analyzers of the Engineering Analysis Language (EAL). Both analyses allow conduction, convection, and radiation boundary conditions. Differences in the respective finite element formulation are discussed in terms of their accuracy and resulting comparison discrepancies. The thermal analyses are shown to perform well for the comparisons presented here with some important precautions about the various boundary condition models. A description of the experiment, corresponding analytical modeling, and resulting comparisons are presented.
A fast method to compute Three-Dimensional Infrared Radiative Transfer in non scattering medium
NASA Astrophysics Data System (ADS)
Makke, Laurent; Musson-Genon, Luc; Carissimo, Bertrand
2014-05-01
The atmospheric radiation field has seen the development of more accurate and faster methods to take absorption in participating media into account. Radiative fog appears under clear-sky conditions due to significant cooling during the night, so scattering is left out. Fog formation modelling requires a sufficiently accurate method to compute cooling rates. Thanks to high-performance computing, a multi-spectral approach to solving the Radiative Transfer Equation (RTE) is most often used. Nevertheless, coupling three-dimensional radiative transfer with fluid dynamics is very detrimental to the computational cost. To reduce the time spent in radiation calculations, the method presented here uses the analytical absorption functions fitted by Sasamori (1968) to Yamamoto's charts (Yamamoto, 1956) to compute a local linear absorption coefficient. By averaging radiative properties, this method eliminates the spectral integration. For an isothermal atmosphere, analytical calculations lead to an explicit formula relating the emissivity functions to the linear absorption coefficient. In the cooling-to-space approximation, this analytical expression gives very accurate results compared to a correlated k-distribution. For non-homogeneous paths, we propose a two-step algorithm: one-dimensional radiative quantities and the linear absorption coefficient are computed by a two-flux method, and the three-dimensional RTE under the grey-medium assumption is then solved with the DOM. Comparisons with measurements of radiative quantities during the ParisFog field campaign (2006) show the capability of this method to handle strong vertical variations of pressure, temperature and gas concentrations.
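The link between a fitted broadband emissivity function and a grey (linear) absorption coefficient, central to the method above, follows from Beer's law as k = -ln(1 - eps) / L. A sketch with an illustrative stand-in emissivity curve, not Sasamori's fitted coefficients:

```python
import math

# Hypothetical fitted broadband emissivity as a function of path length (cm).
def emissivity(path_cm):
    return 1.0 - math.exp(-0.12 * path_cm ** 0.6)

L = 50.0                      # assumed mean path length, cm
eps = emissivity(L)
k = -math.log(1.0 - eps) / L  # equivalent grey absorption coefficient, 1/cm
print(round(eps, 3), round(k, 4))
```

A grey coefficient of this kind is what lets the three-dimensional RTE be solved once with the DOM instead of once per spectral band.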
Sales, A; Alvarez, A; Areal, M Rodriguez; Maldonado, L; Marchisio, P; Rodríguez, M; Bedascarrasbure, E
2006-10-11
Argentinean propolis is exported to different countries, especially Japan. The market demands propolis quality control according to international standards. The analytical determination of some metals, such as lead, in food is very important because of their high toxicity even at low concentrations and their harmful effects on health. Flavonoids, the main bioactive compounds of propolis, tend to chelate metals such as lead, which thus becomes one of the main polluting agents of propolis. The lead found in propolis may come from the atmosphere or may be incorporated during harvest, extraction and processing. The aim of this work was to evaluate lead levels in Argentinean propolis determined by electrothermal atomic absorption spectrometry (ET AAS) and UV-vis spectrophotometry (UV-visS), as well as the effect of harvest methods on those contents. A randomized test with three different collection treatments was made to evaluate the effect of harvest methods: separating wedges (traditional scraping), netting plastic meshes and stamped-out plastic meshes. By means of the analysis of variance technique for multiple comparisons (ANOVA) it was possible to conclude that there are significant differences between the scraped and mesh methods (stamped-out and mosquito-netting meshes). The results obtained in the present test suggest that mesh methods are more advisable than scraping in order to obtain innocuous and safe propolis with lower lead contents. A statistical comparison of lead determination by both the ET AAS and UV-visS methods demonstrated that there is no significant difference between the results achieved with the two analytical techniques.
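A one-way ANOVA of the kind used above can be run in a few lines; the lead contents below are invented for illustration, since the study's data are not reproduced here:

```python
from scipy import stats

# Invented lead contents (mg/kg) for the three collection treatments.
scraped = [8.2, 7.9, 9.1, 8.6, 8.8]
stamped = [3.1, 2.8, 3.5, 3.0, 3.3]
netting = [3.4, 3.0, 3.6, 3.2, 2.9]

f, p = stats.f_oneway(scraped, stamped, netting)
print(f"F = {f:.1f}, p = {p:.2g}")  # small p -> treatment means differ
```

A significant F-test would then be followed by pairwise comparisons to see which treatments differ, as in the scraped-versus-mesh contrast reported above.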
NASA Technical Reports Server (NTRS)
Simonson, M. R.; Smith, E. G.; Uhl, W. R.
1974-01-01
Analytical and experimental studies were performed to define the flowfield of annular jets, with and without swirling flow. The analytical model treated configurations with variations of flow angularity, radius ratio, and swirl distribution. Swirl distributions characteristic of stator vanes and rotor blade rows, where the total pressure and swirl distributions are related, were incorporated in the mathematical model. The experimental studies included tests of eleven nozzle models, both with and without swirling exhaust flow. Flowfield surveys were obtained and used for comparison with the analytical model. This comparison of experimental and analytical studies served as the basis for evaluating several empirical constants required for application of the analysis to the general flow configuration. The analytical model developed during these studies is applicable to the evaluation of the flowfield and overall performance of the exhaust of statorless lift-fan systems that contain various levels of exhaust swirl.
NASA Technical Reports Server (NTRS)
Oglebay, J. C.
1977-01-01
A thermal analytic model for a 30-cm engineering-model mercury-ion thruster was developed and calibrated using experimental results from tests of a pre-engineering-model 30-cm thruster. A series of tests, performed later, simulated a wide range of thermal environments on an operating 30-cm engineering-model thruster, which was instrumented to measure the temperature distribution within it. The modified analytic model is described, and analytic and experimental results are compared for various operating conditions. Based on the comparisons, it is concluded that the analytic model can be used as a preliminary design tool to predict thruster steady-state temperature distributions for stage and mission studies and to define the thermal interface between the thruster and other elements of a spacecraft.
Behrens, Beate; Engelen, Jeannine; Tiso, Till; Blank, Lars Mathias; Hayen, Heiko
2016-04-01
Rhamnolipids are surface-active agents with a broad application potential that are produced in complex mixtures by bacteria of the genus Pseudomonas. Analysis from fermentation broth is often characterized by laborious sample preparation and requires hyphenated analytical techniques like liquid chromatography coupled to mass spectrometry (LC-MS) to obtain detailed information about sample composition. In this study, an analytical procedure based on chromatographic method development and characterization of rhamnolipid sample material by LC-MS as well as a comparison of two sample preparation methods, i.e., liquid-liquid extraction and solid-phase extraction, is presented. Efficient separation was achieved under reversed-phase conditions using a mixed propylphenyl and octadecylsilyl-modified silica gel stationary phase. LC-MS/MS analysis of a supernatant from Pseudomonas putida strain KT2440 pVLT33_rhlABC grown on glucose as sole carbon source and purified by solid-phase extraction revealed a total of 20 congeners of di-rhamnolipids, mono-rhamnolipids, and their biosynthetic precursors 3-(3-hydroxyalkanoyloxy)alkanoic acids (HAAs) with different carbon chain lengths from C8 to C14, including three rhamnolipids with uncommon C9 and C11 fatty acid residues. LC-MS and the orcinol assay were used to evaluate the developed solid-phase extraction method in comparison with the established liquid-liquid extraction. Solid-phase extraction exhibited higher yields and reproducibility as well as lower experimental effort.
Parra, Marina; Foj, Laura; Filella, Xavier
2016-07-01
Because of its potential value in several pathologies, clinical interest in 25-hydroxyvitamin D (25OH-D) is increasing. However, the standardisation of assays remains a significant problem. Our aim was to evaluate the performance of the novel Lumipulse G 25-OH Vitamin D assay (Fujirebio), comparing results with the Liaison (Diasorin) method. Analytical verification of the Lumipulse G 25-OH Vitamin D assay was performed. Both methods were compared using sera from 226 patients, including 111 patients with chronic renal failure (39 on haemodialysis) and 115 patients without renal failure. In addition, clinical concordance between the assays was assessed. For the Lumipulse G 25-OH Vitamin D assay, the limit of detection was 0.3 ng/mL and the limit of quantification was 2.5 ng/mL, with a coefficient of variation of 9.7%. Intra- and inter-assay coefficients of variation were <2.3% and <1.8% (25.4-50.0 ng/mL), respectively. Dilution linearity was in the range of 4.5-144.5 ng/mL. Method comparison resulted in a mean difference of -6.5% (95% CI from -8.8 to -4.1) for all samples between Liaison and Lumipulse G. Clinical concordance assessed by the Kappa index was 0.66. Lumipulse G 25-OH Vitamin D showed good clinical concordance with the Liaison assay, although overall results measured with Lumipulse were higher by an average of 6.5%.
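The two comparison statistics quoted above, a mean relative difference between paired results and a kappa index for clinical concordance, can be sketched as follows; the data are synthetic and the 20 ng/mL cut-off is an assumption for illustration only:

```python
import numpy as np

# Synthetic paired results: the second method reads ~6.5% higher plus noise.
rng = np.random.default_rng(2)
liaison = rng.uniform(10, 60, 200)                   # ng/mL
lumipulse = 1.065 * liaison + rng.normal(0, 2, 200)  # ng/mL

# Mean relative difference of method 1 vs method 2, in percent.
mean_pct_diff = 100 * np.mean((liaison - lumipulse) / lumipulse)

# Cohen's kappa on a binary "deficient" classification (assumed cut-off).
a = liaison < 20.0
b = lumipulse < 20.0
po = np.mean(a == b)                                        # observed agreement
pe = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())  # chance agreement
kappa = (po - pe) / (1 - pe)
print(round(mean_pct_diff, 1), round(kappa, 2))
```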
NASA Astrophysics Data System (ADS)
Lefèvre, Victor; Lopez-Pamies, Oscar
2017-02-01
This paper presents an analytical framework to construct approximate homogenization solutions for the macroscopic elastic dielectric response - under finite deformations and finite electric fields - of dielectric elastomer composites with two-phase isotropic particulate microstructures. The central idea consists in employing the homogenization solution derived in Part I of this work for ideal elastic dielectric composites within the context of a nonlinear comparison medium method - this is derived as an extension of the comparison medium method of Lopez-Pamies et al. (2013) in nonlinear elastostatics to the coupled realm of nonlinear electroelastostatics - to generate in turn a corresponding solution for composite materials with non-ideal elastic dielectric constituents. Complementary to this analytical framework, a hybrid finite-element formulation to construct homogenization solutions numerically (in three dimensions) is also presented. The proposed analytical framework is utilized to work out a general approximate homogenization solution for non-Gaussian dielectric elastomers filled with nonlinear elastic dielectric particles that may exhibit polarization saturation. The solution applies to arbitrary (non-percolative) isotropic distributions of filler particles. By construction, it is exact in the limit of small deformations and moderate electric fields. For finite deformations and finite electric fields, its accuracy is demonstrated by means of direct comparisons with finite-element solutions. Aimed at gaining physical insight into the extreme enhancement in electrostriction properties displayed by emerging dielectric elastomer composites, various cases wherein the filler particles are of poly- and mono-disperse sizes and exhibit different types of elastic dielectric behavior are discussed in detail. 
Contrary to an initial conjecture in the literature, it is found (inter alia) that the isotropic addition of a small volume fraction of stiff (semi-)conducting/high-permittivity particles to dielectric elastomers does not lead to the extreme electrostriction enhancements observed in experiments. It is posited that such extreme enhancements are the manifestation of interphasial phenomena.
Martan, T; Nemecek, T; Komanec, M; Ahmad, R; Zvanovec, S
2017-03-20
Detecting explosive, flammable, or toxic industrial liquids reliably and accurately is a matter of civic responsibility that cannot be treated lightly. Tapered optical fibers (TOFs) and suspended-core microstructured optical fibers (SC MOFs) have previously been used separately as liquid sensors without being compared to each other. We present a highly sensitive, time-stable TOF sensor incorporated in a pipeline system for in-line measurement. This paper furthermore focuses on the comparison of this TOF with an SC MOF of similar parameters for the detection of selected liquids. A validated method that incorporates a TOF and an SC MOF of small core (waist) diameter for refractometric detection is presented. The principle of detection is based on the overlap of an enhanced evanescent wave with a liquid analyte that either fills the cladding holes of the SC MOF or surrounds the waist area of the TOF. Optical power within the evanescent wave for both sensing structures and selected liquid analytes is analyzed. Measurement results for the TOF and SC MOF are compared. Calculations to ascertain the limit of detection (LOD) for each sensor and the sensitivity (S) to refractive indices of liquid analytes in the range of 1.4269 to 1.4361 were performed at a wavelength of 1550 nm with the lowest refractive index step of 0.0007. Results of S=600.96 dB/RIU and LOD=0.0733 RIU for the SC MOF and S=1143.2 dB/RIU and LOD=0.0026 RIU for the TOF sensor were achieved, clearly illustrating that TOF-based sensors can reach nearly twice the sensitivity and a roughly 30-fold lower (better) limit of detection. This paper extends the comparison of the fiber sensors by discussing the potential applications.
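The quoted S and LOD figures follow from two simple definitions: the sensitivity is the slope of transmission loss versus analyte refractive index, and the LOD is the smallest index change resolvable at the receiver's power resolution. A minimal sketch of that arithmetic in Python; the data points and the 3 dB power resolution are assumptions for illustration, not values from the paper:

```python
def sensitivity_and_lod(n_values, loss_db, power_resolution_db):
    """Least-squares slope of loss vs. refractive index (the sensitivity,
    in dB/RIU) and the smallest resolvable index change (the LOD, in RIU)
    for a given receiver power resolution. Illustrative definitions only;
    the paper's exact evaluation procedure may differ."""
    n = len(n_values)
    mean_x = sum(n_values) / n
    mean_y = sum(loss_db) / n
    slope = (sum((x - mean_x) * (y - mean_y)
                 for x, y in zip(n_values, loss_db))
             / sum((x - mean_x) ** 2 for x in n_values))
    sensitivity = abs(slope)                    # dB per refractive index unit
    lod = power_resolution_db / sensitivity     # smallest detectable delta-n
    return sensitivity, lod

# Synthetic, perfectly linear data built from the TOF slope quoted above:
n_vals = [1.4269, 1.4300, 1.4330, 1.4361]
loss = [1143.2 * (x - 1.4269) for x in n_vals]
s, lod = sensitivity_and_lod(n_vals, loss, power_resolution_db=3.0)
# s recovers ~1143.2 dB/RIU and lod ~0.0026 RIU, consistent with the TOF figures
```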
Geiss, S; Einax, J W
2001-07-01
Detection limit, reporting limit and limit of quantitation are analytical parameters that describe the power of analytical methods. These parameters are used internally for quality assurance and externally for comparison between competing laboratories, especially in the case of trace analysis in environmental compartments. The wide variety of possibilities for computing or obtaining these measures in the literature and in legislative rules makes any comparison difficult. Additionally, a host of terms has been used within the analytical community to describe detection and quantitation capabilities. Without trying to impose order on this variety of terms, this paper aims to provide a practical proposal for answering the analyst's main questions concerning the quality measures above. These main questions and the related parameters are explained and demonstrated graphically. Estimation and verification are the two steps required to obtain realistic values for these parameters. A rule for practical verification is given in a table, from which the analyst can read what to measure, what to estimate and which criteria have to be fulfilled. Parameters verified in this manner (detection limit, reporting limit and limit of quantitation) are then comparable, and the analyst is responsible for the unambiguity and reliability of these measures.
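As a concrete illustration of the estimation step, one widely used convention (the blank-based 3s/10s rule; not necessarily the procedure proposed in this paper) derives the detection and quantitation limits from replicate blank measurements and the calibration slope:

```python
import statistics

def detection_limits(blank_signals, slope):
    """Blank-based estimates: LOD = 3*s_blank/slope, LOQ = 10*s_blank/slope.
    Illustrative convention only; estimation rules differ between standards,
    which is exactly the comparability problem discussed in the abstract."""
    s_blank = statistics.stdev(blank_signals)   # standard deviation of blanks
    lod = 3 * s_blank / slope                   # limit of detection
    loq = 10 * s_blank / slope                  # limit of quantitation
    return lod, loq

# Hypothetical blank readings (arbitrary units) and a slope of 2.0 units per ug/L
lod, loq = detection_limits([0.10, 0.12, 0.09, 0.11, 0.10, 0.13], 2.0)
# lod ~0.022 ug/L, loq ~0.074 ug/L for this made-up data
```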
Akutsu, Kazuhiko; Kitagawa, Yoko; Yoshimitsu, Masato; Takatori, Satoshi; Fukui, Naoki; Osakada, Masakazu; Uchida, Kotaro; Azuma, Emiko; Kajimura, Keiji
2018-05-01
Polyethylene glycol 300 is commonly used as a base material for "analyte protection" in multiresidue pesticide analysis via gas chromatography-mass spectrometry. However, the disadvantage of the co-injection method using polyethylene glycol 300 is that it causes peak instability in α-cyano pyrethroids (type II pyrethroids) such as fluvalinate. In this study, we confirmed the instability phenomenon in type II pyrethroids and developed novel analyte protectants for acetone/n-hexane mixture solution to suppress the phenomenon. Our findings revealed that among the examined additive compounds, three lipophilic ascorbic acid derivatives, 3-O-ethyl-L-ascorbic acid, 6-O-palmitoyl-L-ascorbic acid, and 6-O-stearoyl-L-ascorbic acid, could effectively stabilize the type II pyrethroids in the presence of polyethylene glycol 300. A mixture of the three ascorbic acid derivatives and polyethylene glycol 300 proved to be an effective analyte protectant for multiresidue pesticide analysis. Further, we designed and evaluated a new combination of analyte protectant compounds without using polyethylene glycol or the troublesome hydrophilic compounds. Consequently, we obtained a set of 10 medium- and long-chain saturated fatty acids as an effective analyte protectant suitable for acetone/n-hexane solution that did not cause peak instability in type II pyrethroids. These analyte protectants will be useful in multiresidue pesticide analysis by gas chromatography-mass spectrometry in terms of ruggedness and reliable quantitativeness. Graphical abstract: Comparison of the effectiveness of the addition of lipophilic derivatives of ascorbic acid in controlling the instability phenomenon of fluvalinate with polyethylene glycol 300.
Sin, Della Wai-Mei; Wong, Yee-Lok; Cheng, Eddie Chung-Chin; Lo, Man-Fung; Ho, Clare; Mok, Chuen-Shing; Wong, Siu-Kay
2015-04-01
This paper presents the certification of alpha-endosulfan, beta-endosulfan, and endosulfan sulfate in a candidate tea certified reference material (code: GLHK-11-03) according to the requirements of the ISO Guide 30 series. Certification of GLHK-11-03 was based on an analytical method purposely developed for the accurate measurement of the mass fraction of the target analytes in the material. An isotope dilution mass spectrometry (IDMS) method involving determination by (i) gas chromatography-negative chemical ionization-mass spectrometry (GC-NCI-MS) and (ii) gas chromatography-electron ionization-high-resolution mass spectrometry (GC-EI-HRMS) techniques was employed. The performance of the described method was demonstrated through participation in the key comparison CCQM-K95 "Mid-Polarity Analytes in Food Matrix: Mid-Polarity Pesticides in Tea" organized by the Consultative Committee for Amount of Substance-Metrology in Chemistry in 2012, where the study material was the same as the certified reference material (CRM). The values reported by using the developed method were in good agreement with the key comparison reference value (KCRV) assigned for beta-endosulfan (727 ± 14 μg kg⁻¹) and endosulfan sulfate (505 ± 11 μg kg⁻¹), where the degree of equivalence (DoE) values were 0.41 and 0.40, respectively. The certified values of alpha-endosulfan, beta-endosulfan, and endosulfan sulfate in dry mass fraction in GLHK-11-03 were 350, 730, and 502 μg kg⁻¹, respectively, and the respective expanded uncertainties, due to sample inhomogeneity, long-term and short-term stability, and variability in the characterization procedure, were 27 μg kg⁻¹ (7.8 %), 48 μg kg⁻¹ (6.6 %), and 33 μg kg⁻¹ (6.6 %).
Pressure and wall shear stress in blood hammer - Analytical theory.
Mei, Chiang C; Jing, Haixiao
2016-10-01
We describe an analytical theory of blood hammer in a long and stiffened artery due to sudden blockage. Based on the model of a viscous fluid in laminar flow, we derive explicit expressions of oscillatory pressure and wall shear stress. To examine the effects on local plaque formation we also allow the blood vessel radius to be slightly nonuniform. Without resorting to discrete computation, the asymptotic method of multiple scales is utilized to deal with the sharp contrast of time scales. The effects of plaque and blocking time on blood pressure and wall shear stress are studied. The theory is validated by comparison with existing water hammer experiments.
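For orientation, the classical rigid-column water-hammer estimate that this viscous, oscillatory theory refines is the Joukowsky relation; the numerical values below are representative blood/artery figures assumed for illustration, not results from the paper:

```latex
% Joukowsky estimate of the pressure surge after sudden blockage:
%   rho = fluid density, c = pressure-wave speed, U_0 = initial flow velocity
\Delta p = \rho \, c \, U_0
% Representative (assumed) values: rho ~ 1060 kg/m^3, c ~ 10 m/s in a
% compliant artery, U_0 ~ 0.2 m/s, giving \Delta p ~ 2.1 kPa superposed
% on the mean arterial pressure.
```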
Honzík, Petr; Podkovskiy, Alexey; Durand, Stéphane; Joly, Nicolas; Bruneau, Michel
2013-11-01
The main purpose of this paper is to present analytical and numerical models relevant for interpreting the couplings between a circular membrane, a peripheral cavity having the same external radius as the membrane, and a thin air gap (with a geometrical discontinuity between them), in order to characterize small-scale electrostatic receivers and to propose procedures suitable for fitting adjustable parameters so as to achieve the optimal behavior expected in terms of sensitivity and bandwidth. A comparison between these theoretical methods and the characterization of several shapes is therefore presented, showing that the models are appropriate for addressing the design of such transducers.
Tidal analysis of Met rocket wind data
NASA Technical Reports Server (NTRS)
Bedinger, J. F.; Constantinides, E.
1976-01-01
A method of analyzing Met Rocket wind data is described. Modern tidal theory and specialized analytical techniques were used to resolve specific tidal modes and prevailing components in observed wind data. A representation of the wind which is continuous in both space and time was formulated. Such a representation allows direct comparison with theory, allows the derivation of other quantities such as temperature and pressure which in turn may be compared with observed values, and allows the formation of a wind model which extends over a broader range of space and time. Significant diurnal tidal modes with wavelengths of 10 and 7 km were present in the data and were resolved by the analytical technique.
Mathematical Analysis of the Effect of Rotor Geometry on Cup Anemometer Response
Sanz-Andrés, Ángel; Sorribes-Palmer, Félix
2014-01-01
The calibration coefficients of two commercial anemometers equipped with different rotors were studied. The rotor cups had the same conical shape, while the size and distance to the rotation axis varied. The analysis was based on the two-cup position analytical model, derived using perturbation methods to include second-order effects such as the pressure distribution along the rotating cups and friction. The comparison with the experimental data indicates a nonuniform distribution of aerodynamic forces on the rotating cups, with higher forces closer to the rotation axis. The two-cup analytical model proves accurate enough to study the effect of complex forces on cup anemometer performance. PMID:25110735
An analytical optimization model for infrared image enhancement via local context
NASA Astrophysics Data System (ADS)
Xu, Yongjian; Liang, Kun; Xiong, Yiru; Wang, Hui
2017-12-01
The requirement for high-quality infrared images is constantly increasing in both military and civilian areas; high quality implies little distortion and appropriate contrast, whereas infrared images commonly suffer from shortcomings such as low contrast. In this paper, we propose a novel infrared image histogram enhancement algorithm based on local context. By constraining the enhanced image to have high local contrast, a regularized analytical optimization model is proposed to enhance infrared images. The local contrast is determined by evaluating whether two intensities are neighbors and calculating their differences. A comparison on 8-bit images shows that the proposed method can enhance infrared images with more detail and lower noise.
Desorption atmospheric pressure photoionization.
Haapala, Markus; Pól, Jaroslav; Saarela, Ville; Arvola, Ville; Kotiaho, Tapio; Ketola, Raimo A; Franssila, Sami; Kauppila, Tiina J; Kostiainen, Risto
2007-10-15
An ambient ionization technique for mass spectrometry, desorption atmospheric pressure photoionization (DAPPI), is presented, and its application to the rapid analysis of compounds of various polarities on surfaces is demonstrated. The DAPPI technique relies on a heated nebulizer microchip delivering a heated jet of vaporized solvent, e.g., toluene, and a photoionization lamp emitting 10-eV photons. The solvent jet is directed toward sample spots on a surface, causing the desorption of analytes from the surface. The photons emitted by the lamp ionize the analytes, which are then directed into the mass spectrometer. The limits of detection obtained with DAPPI were in the range of 56-670 fmol. Also, the direct analysis of pharmaceuticals from a tablet surface was successfully demonstrated. A comparison of the performance of DAPPI with that of the popular desorption electrospray ionization method was done with four standard compounds. DAPPI was shown to be equally or more sensitive especially in the case of less polar analytes.
Study of a vibrating plate: comparison between experimental (ESPI) and analytical results
NASA Astrophysics Data System (ADS)
Romero, G.; Alvarez, L.; Alanís, E.; Nallim, L.; Grossi, R.
2003-07-01
Real-time electronic speckle pattern interferometry (ESPI) was used for tuning and visualization of natural frequencies of a trapezoidal plate. The plate was excited to resonant vibration by a sinusoidal acoustical source, which provided a continuous range of audio frequencies. Fringe patterns produced during the time-average recording of the vibrating plate—corresponding to several resonant frequencies—were registered. From these interferograms, calculations of vibrational amplitudes by means of zero-order Bessel functions were performed in some particular cases. The system was also studied analytically. The analytical approach developed is based on the Rayleigh-Ritz method and on the use of non-orthogonal right triangular co-ordinates. The deflection of the plate is approximated by a set of beam characteristic orthogonal polynomials generated by using the Gram-Schmidt procedure. A high degree of correlation between computational analysis and experimental results was observed.
Yin, Xiao-Li; Gu, Hui-Wen; Liu, Xiao-Lu; Zhang, Shan-Hui; Wu, Hai-Long
2018-03-05
Multiway calibration in combination with spectroscopic techniques is an attractive tool for online or real-time monitoring of target analyte(s) in complex samples. However, choosing a suitable multiway calibration method for the resolution of spectroscopic-kinetic data is a difficult problem in practical applications. In this work, for the first time, three-way and four-way fluorescence-kinetic data arrays were generated during the real-time monitoring of the hydrolysis of irinotecan (CPT-11) in human plasma by excitation-emission matrix fluorescence. Alternating normalization-weighted error (ANWE) and alternating penalty trilinear decomposition (APTLD) were used as three-way calibration for the decomposition of the three-way kinetic data array, whereas alternating weighted residual constraint quadrilinear decomposition (AWRCQLD) and alternating penalty quadrilinear decomposition (APQLD) were applied as four-way calibration to the four-way kinetic data array. The quantitative results of the two kinds of calibration models were fully compared from the perspective of predicted real-time concentrations, spiked recoveries of initial concentration, and analytical figures of merit. The comparison study demonstrated that both three-way and four-way calibration models could achieve real-time quantitative analysis of the hydrolysis of CPT-11 in human plasma under certain conditions. However, it was also found that both possess critical advantages and shortcomings during dynamic analysis. The conclusions obtained in this paper can provide helpful guidance for the reasonable selection of multiway calibration models to achieve the real-time quantitative analysis of target analyte(s) in complex dynamic systems.
Sterkers, Yvon; Varlet-Marie, Emmanuelle; Cassaing, Sophie; Brenier-Pinchart, Marie-Pierre; Brun, Sophie; Dalle, Frédéric; Delhaes, Laurence; Filisetti, Denis; Pelloux, Hervé; Yera, Hélène; Bastien, Patrick
2010-01-01
Although screening for maternal toxoplasmic seroconversion during pregnancy is based on immunodiagnostic assays, the diagnosis of clinically relevant toxoplasmosis greatly relies upon molecular methods. A problem is that this molecular diagnosis is subject to variation of performances, mainly due to a large diversity of PCR methods and primers and the lack of standardization. The present multicentric prospective study, involving eight laboratories proficient in the molecular prenatal diagnosis of toxoplasmosis, was a first step toward the harmonization of this diagnosis among university hospitals in France. Its aim was to compare the analytical performances of different PCR protocols used for Toxoplasma detection. Each center extracted the same concentrated Toxoplasma gondii suspension and tested serial dilutions of the DNA using its own assays. Differences in analytical sensitivities were observed between assays, particularly at low parasite concentrations (≤2 T. gondii genomes per reaction tube), with “performance scores” differing by a 20-fold factor among laboratories. Our data stress the fact that differences do exist in the performances of molecular assays in spite of expertise in the matter; we propose that laboratories work toward a detection threshold defined for a best sensitivity of this diagnosis. Moreover, on the one hand, intralaboratory comparisons confirmed previous studies showing that rep529 is a more adequate DNA target for this diagnosis than the widely used B1 gene. But, on the other hand, interlaboratory comparisons showed differences that appear independent of the target, primers, or technology and that hence rely essentially on proficiency and care in the optimization of PCR conditions. PMID:20610670
Tensor scale: An analytic approach with efficient computation and applications
Xu, Ziyue; Saha, Punam K.; Dasgupta, Soura
2015-01-01
Scale is a widely used notion in computer vision and image understanding that evolved in the form of scale-space theory, where the key idea is to represent and analyze an image at various resolutions. Recently, we introduced a notion of local morphometric scale referred to as “tensor scale” using an ellipsoidal model that yields a unified representation of structure size, orientation and anisotropy. In the previous work, tensor scale was described using a 2-D algorithmic approach and a precise analytic definition was missing. Also, the application of tensor scale in 3-D using the previous framework is not practical due to high computational complexity. In this paper, an analytic definition of tensor scale is formulated for n-dimensional (n-D) images that captures local structure size, orientation and anisotropy. An efficient computational solution in 2- and 3-D using several novel differential geometric approaches is presented and the accuracy of results is experimentally examined. Furthermore, a matrix representation of tensor scale is derived, facilitating several operations including tensor field smoothing to capture larger contextual knowledge. Finally, the applications of tensor scale in image filtering and n-linear interpolation are presented and their performance is examined in comparison with respective state-of-the-art methods. Specifically, the performance of tensor scale based image filtering is compared with gradient and Weickert’s structure tensor based diffusive filtering algorithms. Likewise, the performance of tensor scale based n-linear interpolation is evaluated in comparison with standard n-linear and windowed-sinc interpolation methods. PMID:26236148
Schmidt, Kathrin S; Mankertz, Joachim
2018-06-01
A sensitive and robust LC-MS/MS method allowing the rapid screening and confirmation of selective androgen receptor modulators in bovine urine was developed and successfully validated according to Commission Decision 2002/657/EC, chapter 3.1.3 'alternative validation', by applying a matrix-comprehensive in-house validation concept. The confirmation of the analytes in the validation samples was achieved both on the basis of the MRM ion ratios as laid down in Commission Decision 2002/657/EC and by comparison of their enhanced product ion (EPI) spectra with a reference mass spectral library by making use of the QTRAP technology. Here, in addition to the MRM survey scan, EPI spectra were generated in a data-dependent way according to an information-dependent acquisition criterion. Moreover, stability studies of the analytes in solution and in matrix according to an isochronous approach proved the stability of the analytes in solution and in matrix for at least the duration of the validation study. To identify factors that have a significant influence on the test method in routine analysis, a factorial effect analysis was performed. To this end, factors considered to be relevant for the method in routine analysis (e.g. operator, storage duration of the extracts before measurement, different cartridge lots and different hydrolysis conditions) were systematically varied on two levels. The examination of the extent to which these factors influence the measurement results of the individual analytes showed that none of the validation factors exerts a significant influence on the measurement results.
Validation of the analytical methods in the LWR code BOXER for gadolinium-loaded fuel pins
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paratte, J.M.; Arkuszewski, J.J.; Kamboj, B.K.
1990-01-01
Due to the very high absorption occurring in gadolinium-loaded fuel pins, calculations of lattices with such pins present are a demanding test of the analysis methods in light water reactor (LWR) cell and assembly codes. Considerable effort has, therefore, been devoted to the validation of code methods for gadolinia fuel. The goal of the work reported in this paper is to check the analysis methods in the LWR cell/assembly code BOXER and its associated cross-section processing code ETOBOX, by comparison of BOXER results with those from a very accurate Monte Carlo calculation for a gadolinium benchmark problem. Initial results of such a comparison have been previously reported. However, the Monte Carlo calculations, done with the MCNP code, were performed at Los Alamos National Laboratory using ENDF/B-V data, while the BOXER calculations were performed at the Paul Scherrer Institute using JEF-1 nuclear data. This difference in the basic nuclear data used for the two calculations, caused by the restricted nature of these evaluated data files, led to associated uncertainties in a comparison of the results for methods validation. In the joint investigations at the Georgia Institute of Technology and PSI, such uncertainty in this comparison was eliminated by using ENDF/B-V data for BOXER calculations at Georgia Tech.
Toward the Standardization of Biochar Analysis: The COST Action TD1107 Interlaboratory Comparison.
Bachmann, Hans Jörg; Bucheli, Thomas D; Dieguez-Alonso, Alba; Fabbri, Daniele; Knicker, Heike; Schmidt, Hans-Peter; Ulbricht, Axel; Becker, Roland; Buscaroli, Alessandro; Buerge, Diane; Cross, Andrew; Dickinson, Dane; Enders, Akio; Esteves, Valdemar I; Evangelou, Michael W H; Fellet, Guido; Friedrich, Kevin; Gasco Guerrero, Gabriel; Glaser, Bruno; Hanke, Ulrich M; Hanley, Kelly; Hilber, Isabel; Kalderis, Dimitrios; Leifeld, Jens; Masek, Ondrej; Mumme, Jan; Carmona, Marina Paneque; Calvelo Pereira, Roberto; Rees, Frederic; Rombolà, Alessandro G; de la Rosa, José Maria; Sakrabani, Ruben; Sohi, Saran; Soja, Gerhard; Valagussa, Massimo; Verheijen, Frank; Zehetner, Franz
2016-01-20
Biochar produced by pyrolysis of organic residues is increasingly used for soil amendment and many other applications. However, analytical methods for its physical and chemical characterization are as yet far from being specifically adapted, optimized, and standardized. Therefore, COST Action TD1107 conducted an interlaboratory comparison in which 22 laboratories from 12 countries analyzed three different types of biochar for 38 physical-chemical parameters (macro- and microelements, heavy metals, polycyclic aromatic hydrocarbons, pH, electrical conductivity, and specific surface area) with their preferred methods. The data were evaluated in detail using professional interlaboratory testing software. Whereas intralaboratory repeatability was generally good or at least acceptable, interlaboratory reproducibility mostly was not (20% < mean reproducibility standard deviation < 460%). This paper contributes to better comparability of biochar data already published and provides recommendations to improve and harmonize specific methods for biochar analysis in the future.
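The repeatability and reproducibility standard deviations reported in such interlaboratory tests are conventionally estimated from a one-way analysis of variance over laboratories. A hedged sketch in the spirit of ISO 5725 (balanced design assumed; the function name and data are hypothetical):

```python
import statistics

def repeatability_reproducibility(lab_results):
    """One-way ANOVA estimates of the repeatability standard deviation s_r
    (within-lab) and reproducibility standard deviation s_R (within plus
    between lab), assuming every lab reports the same number of replicates."""
    p = len(lab_results)                        # number of laboratories
    n = len(lab_results[0])                     # replicates per laboratory
    lab_means = [statistics.mean(r) for r in lab_results]
    grand = statistics.mean(lab_means)
    msw = sum(sum((x - m) ** 2 for x in r)
              for r, m in zip(lab_results, lab_means)) / (p * (n - 1))
    msb = n * sum((m - grand) ** 2 for m in lab_means) / (p - 1)
    s_r = msw ** 0.5                            # repeatability std dev
    s_lab_sq = max((msb - msw) / n, 0.0)        # between-lab variance component
    s_R = (msw + s_lab_sq) ** 0.5               # reproducibility std dev
    return s_r, s_R

# Hypothetical results: three labs, duplicate analyses of one biochar parameter
s_r, s_R = repeatability_reproducibility([[10.0, 10.2], [12.0, 12.2], [8.0, 8.2]])
# small within-lab scatter but large between-lab scatter, so s_R >> s_r
```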
Identifying Differentially Abundant Metabolic Pathways in Metagenomic Datasets
NASA Astrophysics Data System (ADS)
Liu, Bo; Pop, Mihai
Enabled by rapid advances in sequencing technology, metagenomic studies aim to characterize entire communities of microbes, bypassing the need for culturing individual bacterial members. One major goal of such studies is to identify specific functional adaptations of microbial communities to their habitats. Here we describe a powerful analytical method (MetaPath) that can identify differentially abundant pathways in metagenomic datasets, relying on a combination of metagenomic sequence data and prior metabolic pathway knowledge. We show that MetaPath outperforms other common approaches when evaluated on simulated datasets. We also demonstrate the power of our methods in analyzing two publicly available metagenomic datasets: a comparison of the gut microbiome of obese and lean twins, and a comparison of the gut microbiome of infant and adult subjects. We demonstrate that the subpathways identified by our method provide valuable insights into the biological activities of the microbiome.
Comparison of chemiluminescence methods for analysis of hydrogen peroxide and hydroxyl radicals
NASA Astrophysics Data System (ADS)
Pehrman, R.; Amme, M.; Cachoir, C.
2006-01-01
Assessment of the influence of alpha radiolysis on the chemistry of geologically disposed spent fuel demands analytical methods for the determination of radiolytic products at trace levels. Several chemiluminescence methods for the detection of the radiolytic oxidants hydrogen peroxide and hydroxyl radicals are tested. Two of the hydrogen peroxide methods use luminol, catalyzed by either μ-peroxidase or hemin; one uses 10-methyl-9-(p-formylphenyl)-acridinium carboxylate trifluoromethanesulfonate; and one uses potassium periodate. All recipes are tested as batch systems under basic conditions. For hydroxyl radical detection, the selected luminophores are 3-hydroxyphthalic hydrazide and rutin. Both methods are tested as batch systems. The results are compared and the applicability of the methods for near-field dissolution studies is discussed.
Transverse vibrations of non-uniform beams. [combined finite element and Rayleigh-Ritz methods
NASA Technical Reports Server (NTRS)
Klein, L.
1974-01-01
The free vibrations of elastic beams with nonuniform characteristics are investigated theoretically by a new method. The new method is seen to combine the advantages of a finite element approach and of a Rayleigh-Ritz analysis. Comparison with the known analytical results for uniform beams shows good convergence of the method for natural frequencies and modes. For internal shear forces and bending moments, the rate of convergence is less rapid. Results from experiments conducted with a cantilevered helicopter blade with strong nonuniformities and also from alternative theoretical methods, indicate that the theory adequately predicts natural frequencies and mode shapes. General guidelines for efficient use of the method are presented.
Rapid determination of tartaric acid in wines.
Bastos, Sandra S T; Tafulo, Paula A R; Queirós, Raquel B; Matos, Cristina D; Sales, M Goreti F
2009-08-01
A flow-spectrophotometric method is proposed for the routine determination of tartaric acid in wines. The reaction between tartaric acid and vanadate in acetic media is carried out under flowing conditions and the resulting colored complex is monitored at 475 nm. The stability of the complex and the corresponding formation constant are presented. The effect of wavelength and pH was evaluated by batch experiments. The selected conditions were transposed to a flow-injection analytical system. Optimization of several flow parameters such as reactor lengths, flow-rate and injection volume was carried out. Using optimized conditions, a linear behavior was observed up to 1000 μg mL⁻¹ tartaric acid, with an extinction coefficient of 450 L mg⁻¹ cm⁻¹ and ±1% repeatability. Sample throughput was 25 samples per hour. The flow-spectrophotometric method was satisfactorily applied to the quantification of tartaric acid in wines from different sources. Its accuracy was confirmed by statistical comparison to the conventional Rebelein procedure and to a certified analytical method carried out in a routine laboratory.
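The quantification step is an inversion of the Beer-Lambert law using the method's extinction coefficient. A minimal sketch; the absorbance and coefficient values below are hypothetical, chosen only to show the arithmetic, and are not the paper's calibration data:

```python
def concentration_from_absorbance(absorbance, epsilon, path_cm=1.0):
    """Beer-Lambert inversion: c = A / (epsilon * l). The units of c follow
    from those of epsilon (a mass-based coefficient in L mg^-1 cm^-1 gives
    c in mg/L). Hypothetical values; not the paper's calibration."""
    return absorbance / (epsilon * path_cm)

# e.g. A = 0.45 at 475 nm with an assumed epsilon of 0.0009 L mg^-1 cm^-1
c = concentration_from_absorbance(0.45, 0.0009)
# c = 500.0 mg/L for this assumed coefficient and a 1 cm path
```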
Blokhintsev, L. D.; Kadyrov, A. S.; Mukhamedzhanov, A. M.; ...
2018-02-05
A problem of analytical continuation of scattering data to the negative-energy region to obtain information about bound states is discussed within an exactly solvable potential model. This work is a continuation of the previous one by the same authors [L. D. Blokhintsev et al., Phys. Rev. C 95, 044618 (2017)]. The goal of this paper is to determine the most effective way of analytic continuation for different systems. The d + α and α + 12C systems are considered and, for comparison, an effective-range function approach and a recently suggested Δ method [O. L. Ramírez Suárez and J.-M. Sparenberg, Phys. Rev. C 96, 034601 (2017)] are applied. We conclude that the Δ method is more effective for heavier systems with large values of the Coulomb parameter, whereas for light systems with small values of the Coulomb parameter the effective-range function method might be preferable.
Streby, Ashleigh; Mull, Bonnie J; Levy, Karen; Hill, Vincent R
2015-05-01
Naegleria fowleri is a thermophilic free-living ameba found in freshwater environments worldwide. It is the cause of a rare but potentially fatal disease in humans known as primary amebic meningoencephalitis. Established N. fowleri detection methods rely on conventional culture techniques and morphological examination followed by molecular testing. Multiple alternative real-time PCR assays have been published for rapid detection of Naegleria spp. and N. fowleri. Four such assays were evaluated for the detection of N. fowleri from surface water and sediment. The assays were compared for thermodynamic stability, analytical sensitivity and specificity, detection limits, humic acid inhibition effects, and performance with seeded environmental matrices. Twenty-one ameba isolates were included in the DNA panel used for analytical sensitivity and specificity analyses. N. fowleri genotypes I and III were used for method performance testing. Two of the real-time PCR assays were determined to yield similar performance data for specificity and sensitivity for detecting N. fowleri in environmental matrices. PMID:25855343
Evaluation of analytical procedures for prediction of turbulent boundary layers on a porous wall
NASA Technical Reports Server (NTRS)
Towne, C. E.
1974-01-01
An analytical study has been made to determine how well current boundary layer prediction techniques work when there is mass transfer normal to the wall. The data considered in this investigation were for two-dimensional, incompressible, turbulent boundary layers with suction and blowing. Some of the bleed data were taken in an adverse pressure gradient. An integral prediction method was used with three different porous wall skin friction relations, in addition to a solid-surface relation for the suction cases. A numerical prediction method was also used. Comparisons were made between theoretical and experimental skin friction coefficients, displacement and momentum thicknesses, and velocity profiles. The integral method with one of the porous wall skin friction laws gave very good agreement with the data for most of the cases considered. The use of the solid-surface skin friction law caused the integral method to overpredict the effectiveness of the bleed. The numerical method also worked well for most of the cases.
Ultrafast Comparison of Personal Genomes via Precomputed Genome Fingerprints.
Glusman, Gustavo; Mauldin, Denise E; Hood, Leroy E; Robinson, Max
2017-01-01
We present an ultrafast method for comparing personal genomes. We transform the standard genome representation (lists of variants relative to a reference) into "genome fingerprints" via locality sensitive hashing. The resulting genome fingerprints can be meaningfully compared even when the input data were obtained using different sequencing technologies, processed using different pipelines, represented in different data formats and relative to different reference versions. Furthermore, genome fingerprints are robust to up to 30% missing data. Because of their reduced size, computation on the genome fingerprints is fast and requires little memory. For example, we could compute all-against-all pairwise comparisons among the 2504 genomes in the 1000 Genomes data set in 67 s at high quality (21 μs per comparison, on a single processor), and achieved a lower quality approximation in just 11 s. Efficient computation enables scaling up a variety of important genome analyses, including quantifying relatedness, recognizing duplicative sequenced genomes in a set, population reconstruction, and many others. The original genome representation cannot be reconstructed from its fingerprint, effectively decoupling genome comparison from genome interpretation; the method thus has significant implications for privacy-preserving genome analytics.
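As a rough illustration of the fingerprinting idea (not the authors' actual locality sensitive hashing scheme), the sketch below hashes variant strings into a fixed-length count vector and compares two such vectors by cosine similarity. The variant notation and bucket count are invented for the example:

```python
import hashlib
import math

def fingerprint(variants, k=64):
    """Hash variant strings into a fixed-length count vector (a toy fingerprint)."""
    vec = [0] * k
    for v in variants:
        h = int(hashlib.sha256(v.encode()).hexdigest(), 16)
        vec[h % k] += 1
    return vec

def similarity(a, b):
    """Cosine similarity between two fingerprints."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical variant lists (chrom:pos:ref>alt notation invented for the example)
g1 = ["chr1:12345:A>G", "chr2:9912:C>T", "chr7:501:G>A"]
g2 = ["chr1:12345:A>G", "chr2:9912:C>T", "chr9:77:T>C"]
score = similarity(fingerprint(g1), fingerprint(g2))
```

As in the abstract, comparison operates only on the reduced vectors: the original variant list cannot be recovered from a count vector, which is what decouples comparison from interpretation.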
Yebra, M. Carmen
2012-01-01
A simple and rapid analytical method was developed for the determination of iron, manganese, and zinc in soluble solid samples. The method is based on continuous ultrasonic water dissolution of the sample (5–30 mg) at room temperature followed by flow injection flame atomic absorption spectrometric determination. A good precision of the whole procedure (1.2–4.6%) and a sample throughput of ca. 25 samples h–1 were obtained. The proposed green analytical method has been successfully applied for the determination of iron, manganese, and zinc in soluble solid food samples (soluble cocoa and soluble coffee) and pharmaceutical preparations (multivitamin tablets). The ranges of concentrations found were 21.4–25.61 μg g−1 for iron, 5.74–18.30 μg g−1 for manganese, and 33.27–57.90 μg g−1 for zinc in soluble solid food samples and 3.75–9.90 μg g−1 for iron, 0.47–5.05 μg g−1 for manganese, and 1.55–15.12 μg g−1 for zinc in multivitamin tablets. The accuracy of the proposed method was established by a comparison with the conventional wet acid digestion method using a paired t-test, indicating the absence of systematic errors. PMID:22567553
KEY COMPARISON: Final report on international key comparison CCQM-K53: Oxygen in nitrogen
NASA Astrophysics Data System (ADS)
Lee, Jeongsoon; Bok Lee, Jin; Moon, Dong Min; Seog Kim, Jin; van der Veen, Adriaan M. H.; Besley, Laurie; Heine, Hans-Joachim; Martin, Belén; Konopelko, L. A.; Kato, Kenji; Shimosaka, Takuya; Perez Castorena, Alejandro; Macé, Tatiana; Milton, Martin J. T.; Kelley, Mike; Guenther, Franklin; Botha, Angelique
2010-01-01
Gravimetry is used as the primary method for the preparation of primary standard gas mixtures in most national metrology institutes, and it requires the combined abilities of purity assessment, weighing technique and analytical skills. At the CCQM GAWG meeting in October 2005, it was agreed that KRISS should coordinate a key comparison, CCQM-K53, on the gravimetric preparation of gas mixtures at a level of 100 µmol/mol of oxygen in nitrogen. KRISS compared the gravimetric value of each cylinder with an analytical instrument. Preparation of an oxygen gas standard mixture requires particular care to be accurate, because oxygen is a major component of the atmosphere. Key issues for this comparison relate to (1) the gravimetric technique, which needs at least two dilution steps, (2) oxygen impurity in nitrogen, and (3) argon impurity in nitrogen. The key comparison reference value (KCRV) is obtained from the linear regression line (through the origin) of a selected set of participants. The members of the KCRV subset, except one, agree with each other. The standard deviation of the x-residuals of this group (which consists of NMIJ, VSL, NIST, NPL, BAM, KRISS and CENAM) is 0.056 µmol/mol, consistent with the uncertainties given for their standard mixtures. The standard deviation of the residuals of all participating laboratories is 0.182 µmol/mol. With respect to impurity analysis, the overall argon amounts of the cylinders are in the region of about 3 µmol/mol; however, four cylinders showed an argon amount fraction over 10 µmol/mol, and two of these are inconsistent with the KCRV subset. Explicit separation between the oxygen and argon peaks in the GC chromatogram is essential to maintain analytical capability. Additionally, oxygen impurity analysis in nitrogen is indispensable to ensure preparative capability. Main text: the full text of this report appears in Appendix B of the BIPM key comparison database, kcdb.bipm.org.
The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
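The KCRV construction described above, a least-squares line through the origin relating analytical to gravimetric values, with consistency judged from the standard deviation of the residuals, can be sketched as follows (the data values are invented and are not CCQM-K53 results):

```python
def kcrv_fit(gravimetric, analytical):
    """Slope of the least-squares line through the origin (y = b*x) and the
    standard deviation of the x-residuals about that line."""
    b = sum(x * y for x, y in zip(gravimetric, analytical)) / \
        sum(x * x for x in gravimetric)
    # x-residual: deviation of each gravimetric value from the fitted line
    resid = [x - y / b for x, y in zip(gravimetric, analytical)]
    n = len(resid)
    mean = sum(resid) / n
    sd = (sum((r - mean) ** 2 for r in resid) / (n - 1)) ** 0.5
    return b, sd

# Invented example near the 100 umol/mol nominal level
grav = [99.95, 100.02, 99.87, 100.10, 100.25]
anal = [99.90, 100.05, 99.80, 100.20, 100.21]
slope, sd_x = kcrv_fit(grav, anal)
```

A small sd_x relative to the participants' stated uncertainties is what the report treats as mutual consistency of the KCRV subset.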
Rhea, Jeanne M; French, Deborah; Molinaro, Ross J
2013-05-01
To develop and validate liquid chromatography tandem mass spectrometry (LC-MS/MS) methods for the direct measurement of total and free testosterone in patient samples on two different analytical systems. API 4000 and API 5000 triple quadrupoles were used and compared; the former is reported to be 3-5 times less sensitive and was used to set the quantitation limits. Free testosterone was separated from the protein-bound fraction by equilibrium dialysis followed by derivatization. Either free or total testosterone and a deuterated internal standard (d3-testosterone) were extracted by liquid-liquid extraction. The validation results were compared to two different clinical laboratories. The use of d2-testosterone was found to be unacceptable for our method. The total testosterone LC-MS/MS methods on both systems were linear over a wide concentration range of 1.5-2000 ng/dL. Free testosterone was measured directly using equilibrium dialysis coupled to LC-MS/MS and was linear over the concentration range of 2.5-2500 pg/mL. Good correlation (total testosterone, R(2)=0.96; free testosterone, R(2)=0.98) was observed between our LC-MS/MS systems and a comparator laboratory. However, differences in absolute values for both free and total testosterone measurements were observed, while a comparison to a second published LC-MS/MS method showed excellent correlation. Free and total testosterone measurements correlated well with clinical observations. To our knowledge, this is the first published validation of free and total testosterone methods across two analytical systems of different analytical sensitivities. A less sensitive system does not sacrifice analytical or clinical sensitivity to directly measure free and total testosterone in patient samples. Copyright © 2013 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Non-axisymmetric local magnetostatic equilibrium
Candy, Jefferey M.; Belli, Emily A.
2015-03-24
In this study, we outline an approach to the problem of local equilibrium in non-axisymmetric configurations that adheres closely to Miller's original method for axisymmetric plasmas. Importantly, this method is novel in that it allows not only specification of the 3D shape, but also explicit specification of the shear in the 3D shape. A spectrally-accurate method for solution of the resulting nonlinear partial differential equations is also developed. We verify the correctness of the spectral method, in the axisymmetric limit, through comparisons with an independent numerical solution. Some analytic results for the two-dimensional case are given, and the connection to Boozer coordinates is clarified.
NASA Astrophysics Data System (ADS)
Sanskrityayn, Abhishek; Suk, Heejun; Kumar, Naveen
2017-04-01
In this study, analytical solutions of one-dimensional pollutant transport originating from instantaneous and continuous point sources were developed for groundwater and riverine flow using both the Green's Function Method (GFM) and a pertinent coordinate transformation method. The dispersion coefficient and flow velocity are considered spatially and temporally dependent. The spatial dependence of the velocity is linear and non-homogeneous, and that of the dispersion coefficient is the square of that of the velocity, while the temporal dependence may be linear, exponentially or asymptotically decelerating, or accelerating. The proposed analytical solutions are derived for three situations, depending on the variations of the dispersion coefficient and velocity, which can represent real physical processes occurring in groundwater and riverine systems. The first case refers to steady solute transport in steady flow, in which the dispersion coefficient and velocity are only spatially dependent. The second case represents transient solute transport in steady flow, in which the dispersion coefficient is spatially and temporally dependent while the velocity is spatially dependent. Finally, the third case describes transient solute transport in unsteady flow, in which both the dispersion coefficient and velocity are spatially and temporally dependent. The present paper demonstrates the concentration distribution behavior from a point source in realistic flow domains of hydrological systems, including groundwater and riverine water, in which the dispersivity of the pollutant's mass is affected by the heterogeneity of the medium as well as by other factors such as velocity fluctuations, while the velocity is influenced by the water table slope and recharge rate. These capabilities give the proposed method an advantage over previously existing analytical solutions in its application to various hydrological problems.
In particular, to the authors' knowledge, no other solution exists for both spatial and temporal variations of the dispersion coefficient and velocity. In this study, existing analytical solutions from previous widely known studies are used as validation tools to verify the proposed analytical solutions, as well as the numerical code of the Two-Dimensional Subsurface Flow, Fate and Transport of Microbes and Chemicals (2DFATMIC) code and a 1D finite difference (FDM) code developed here. All such solutions agree closely with the corresponding proposed solutions.
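The solutions above generalize the classical constant-coefficient case. For orientation, a minimal sketch of the instantaneous point source Green's function solution of the 1D advection-dispersion equation with constant velocity u and dispersion coefficient D is shown below; the parameter values are arbitrary placeholders, not values from the paper:

```python
import math

def c_instant(x, t, M=1.0, u=0.5, D=0.1, x0=0.0):
    """Concentration at (x, t) from an instantaneous point source of mass M
    released at x0, for constant u and D:
    C = M / sqrt(4*pi*D*t) * exp(-(x - x0 - u*t)^2 / (4*D*t))."""
    return (M / math.sqrt(4.0 * math.pi * D * t)
            * math.exp(-((x - x0 - u * t) ** 2) / (4.0 * D * t)))
```

The plume center travels at the flow velocity, so for u = 0.5 the peak at t = 2 sits at x = 1.0, and the spatial integral of C recovers the released mass M.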
An electromechanical coupling model of a bending vibration type piezoelectric ultrasonic transducer.
Zhang, Qiang; Shi, Shengjun; Chen, Weishan
2016-03-01
An electromechanical coupling model of a bending vibration type piezoelectric ultrasonic transducer is proposed. The transducer is a Langevin type transducer composed of an exponential horn, four groups of PZT ceramics and a back beam. The exponential horn can focus the vibration energy and can efficiently enlarge vibration amplitude and velocity. A bending vibration model of the transducer is first constructed, and subsequently an electromechanical coupling model is constructed based on the vibration model. In order to obtain the most suitable excitation position of the PZT ceramics, the effective electromechanical coupling coefficient is optimized by means of the quadratic interpolation method. When the effective electromechanical coupling coefficient reaches its peak value of 42.59%, the optimal excitation position (L1=22.52 mm) is found. The FEM method and the experimental method are used to validate the developed analytical model. Two groups of FEM models (in Group A the center bolt is not considered, while in Group B it is) are constructed and separately compared with the analytical model and the experimental model. Four prototype transducers around the peak value are fabricated and tested to validate the analytical model. A scanning laser Doppler vibrometer is employed to test the bending vibration shape and resonance frequency. Finally, the electromechanical coupling coefficient is tested indirectly through an impedance analyzer. Comparisons of the analytical, FEM and experimental results are presented and show good agreement. Copyright © 2015 Elsevier B.V. All rights reserved.
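The quadratic interpolation step mentioned above fits a parabola through three trial points and takes its vertex as the estimate of the optimum. A minimal sketch follows; the objective function and sample points are invented, chosen only so that the parabola peaks at the reported 22.52 mm:

```python
def quad_peak(x1, f1, x2, f2, x3, f3):
    """Abscissa of the extremum of the parabola through three sample points
    (the standard successive parabolic interpolation formula)."""
    num = (x2 - x1) ** 2 * (f2 - f3) - (x2 - x3) ** 2 * (f2 - f1)
    den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
    return x2 - 0.5 * num / den

# Invented objective with its maximum placed at 22.52 mm for illustration
def k_eff(L):
    return 42.59 - (L - 22.52) ** 2

L_opt = quad_peak(20.0, k_eff(20.0), 22.0, k_eff(22.0), 25.0, k_eff(25.0))
```

Because the formula is exact for a true parabola, a single step recovers 22.52 here; for a general objective the step is iterated with the new point replacing the worst of the three.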
Reference Intervals of Common Clinical Chemistry Analytes for Adults in Hong Kong.
Lo, Y C; Armbruster, David A
2012-04-01
Defining reference intervals is a major challenge because of the difficulty of recruiting volunteers and testing samples from a significant number of healthy reference individuals. Historical literature citation intervals are often suboptimal because they may be based on obsolete methods and/or only a small number of poorly defined reference samples. Blood donors in Hong Kong gave permission for additional blood to be collected for reference interval testing. The samples were tested for twenty-five routine analytes on the Abbott ARCHITECT clinical chemistry system. Results were analyzed using the Rhoads EP Evaluator software program, which is based on the CLSI/IFCC C28-A guideline and defines the reference interval as the central 95% range. Method-specific reference intervals were established for twenty-five common clinical chemistry analytes for a Chinese ethnic population. The intervals were defined for each gender separately and for genders combined, and gender-specific or combined-gender intervals were adopted as appropriate for each analyte. A large number of healthy, apparently normal blood donors from a local ethnic population were tested to provide current reference intervals for a new clinical chemistry system. Intervals were determined following an accepted international guideline. Laboratories using the same or similar methodologies may adopt these intervals if validated and deemed suitable for their patient population. Laboratories using different methodologies may be able to adapt the intervals for their facilities using the reference interval transference technique, based on a method comparison study.
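A nonparametric central 95% range of the kind described can be computed directly from sorted reference values. The sketch below uses simple linear interpolation between closest ranks; this is one of several percentile conventions and not necessarily the exact algorithm used by EP Evaluator:

```python
def reference_interval(values, central=0.95):
    """Nonparametric central reference interval (default: 2.5th-97.5th
    percentiles), with linear interpolation between closest ranks."""
    s = sorted(values)
    n = len(s)

    def pct(p):
        # fractional rank on the sorted data, interpolated linearly
        k = (n - 1) * p
        lo, hi = int(k), min(int(k) + 1, n - 1)
        return s[lo] + (k - lo) * (s[hi] - s[lo])

    tail = (1.0 - central) / 2.0
    return pct(tail), pct(1.0 - tail)

# Illustrative only: 100 synthetic "reference" results
low, high = reference_interval(list(range(1, 101)))
```

CLSI guidance also recommends a minimum number of reference individuals (commonly 120 per partition) before quoting nonparametric limits; the sketch does not enforce that.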
McDermott, Imelda; Checkland, Kath; Harrison, Stephen; Snow, Stephanie; Coleman, Anna
2013-01-01
The language used by National Health Service (NHS) "commissioning" managers when discussing their roles and responsibilities can be seen as a manifestation of "identity work", defined as a process of identifying. This paper aims to offer a novel approach to analysing "identity work" through the triangulation of multiple analytical methods, combining analysis of the content of text with analysis of its form. Fairclough's discourse analytic methodology is used as a framework. Following Fairclough, the authors use analytical methods associated with Halliday's systemic functional linguistics. While analysis of the content of interviews provides some information about NHS commissioners' perceptions of their roles and responsibilities, analysis of the form of the discourse that they use provides a more detailed and nuanced view. Overall, the authors found that commissioning managers have a higher level of certainty about what commissioning is not than about what commissioning is; GP managers have a high level of certainty about their identity as a GP rather than as a manager; and both GP managers and non-GP managers oscillate between multiple identities depending on the situations they are in. This paper offers a novel approach to triangulation, based not on the usual comparison of multiple data sources but on the application of multiple analytical methods to a single source of data. It also reveals a latent uncertainty about the nature of the commissioning enterprise in the English NHS.
Two-dimensional dynamic stall as simulated in a varying freestream
NASA Technical Reports Server (NTRS)
Pierce, G. A.; Kunz, D. L.; Malone, J. B.
1978-01-01
A low speed wind tunnel equipped with an axial gust generator to simulate the aerodynamic environment of a helicopter rotor was used to study the dynamic stall of a pitching blade, in an effort to ascertain to what extent harmonic velocity perturbations in the freestream affect dynamic stall. The aerodynamic moment on a two-dimensional pitching blade model in both constant and pulsating airstreams was measured. An operational analog computer was used to perform on-line data reduction, and plots of moment versus angle of attack and of work done by the moment were obtained. The data taken in the varying freestream were then compared to constant-freestream data and to the results of two analytical methods. These comparisons show that the velocity perturbations have a significant effect on the pitching moment which cannot be consistently predicted by the analytical methods, but had no drastic effect on blade stability.
Comparison of three multiplex cytokine analysis systems: Luminex, SearchLight and FAST Quant.
Lash, Gendie E; Scaife, Paula J; Innes, Barbara A; Otun, Harry A; Robson, Steven C; Searle, Roger F; Bulmer, Judith N
2006-02-20
Multiplex cytokine analysis technologies have become readily available in the last five years. Two main formats exist: multiplex sandwich ELISA and bead based assays. While these have each been compared to individual ELISAs, there has been no direct comparison between the two formats. We report here the comparison of two multiplex sandwich ELISA procedures (FAST Quant and SearchLight) and a bead based assay (UpState Luminex). All three kits differed from each other for different analytes and there was no clear pattern of one system giving systematically different results than another for any analyte studied. We suggest that each system has merits and several factors including range of analytes available, prospect of development of new analytes, dynamic range of the assay, sensitivity of the assay, cost of equipment, cost of consumables, ease of use and ease of data analysis need to be considered when choosing a system for use. We also suggest that results obtained from different systems cannot be combined.
El-Masry, Amal A; Hammouda, Mohammed E A; El-Wasseef, Dalia R; El-Ashry, Saadia M
2018-02-15
Two simple, sensitive, rapid, validated and cost-effective spectroscopic methods were established for quantification of the antihistaminic drug azelastine (AZL) in bulk powder as well as in pharmaceutical dosage forms. In the first method (A) the absorbance difference between acidic and basic solutions was measured at 228 nm, whereas in the second method (B) the binary complex formed between AZL and Eosin Y in acetate buffer solution (pH 3) was measured at 550 nm. Different criteria with a critical influence on the intensity of absorption were studied in depth and optimized so as to achieve the highest absorption. The proposed methods obeyed Beer's law in the concentration ranges of 2.0-20.0 μg·mL-1 and 0.5-15.0 μg·mL-1, with % recovery ± S.D. of 99.84 ± 0.87 and 100.02 ± 0.78 for methods (A) and (B), respectively. Furthermore, the proposed methods were easily applied for quality control of pharmaceutical preparations without any interference from co-formulated additives, and the analytical results were compatible with those obtained by the comparison method, with no significant difference as confirmed by Student's t-test and the variance-ratio F-test. Validation of the proposed methods was performed according to the ICH guidelines in terms of linearity, limit of quantification, limit of detection, accuracy, precision and specificity, and the analytical results were persuasive. Copyright © 2017 Elsevier B.V. All rights reserved.
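The statistical comparison described, a Student's t-test on means plus a variance-ratio F-test, reduces to two summary statistics. A minimal sketch with synthetic recovery data follows; the comparison of these statistics against tabulated critical values, which is how significance is actually judged, is omitted:

```python
import math

def mean_sd(xs):
    n = len(xs)
    m = sum(xs) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    return m, sd

def f_and_t(a, b):
    """Variance-ratio F statistic and two-sample pooled Student's t statistic."""
    ma, sa = mean_sd(a)
    mb, sb = mean_sd(b)
    f = max(sa, sb) ** 2 / min(sa, sb) ** 2  # larger variance over smaller
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * sa ** 2 + (nb - 1) * sb ** 2) / (na + nb - 2)
    t = (ma - mb) / math.sqrt(sp2 * (1.0 / na + 1.0 / nb))
    return f, t

# Synthetic % recovery data for a proposed and a comparison method
proposed = [99.8, 100.1, 99.9, 100.2, 99.7]
comparison = [100.0, 99.9, 100.3, 99.8, 100.1]
f_stat, t_stat = f_and_t(proposed, comparison)
```

"No significant difference" corresponds to both statistics falling below their critical values at the chosen confidence level and degrees of freedom.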
NASA Technical Reports Server (NTRS)
Mccain, W. E.
1982-01-01
The results of a comparative study using the unsteady aerodynamic lifting surface theory, known as the Doublet Lattice method, and experimental subsonic steady- and unsteady-pressure measurements, are presented for a high-aspect-ratio supercritical wing model. Comparisons of pressure distributions due to wing angle of attack and control-surface deflections were made. In general, good correlation existed between experimental and theoretical data over most of the wing planform. The more significant deviations found between experimental and theoretical data were in the vicinity of control surfaces for both static and oscillatory control-surface deflections.
Stochastic treatment of electron multiplication without scattering in dielectrics
NASA Technical Reports Server (NTRS)
Lin, D. L.; Beers, B. L.
1981-01-01
By treating the emission of optical phonons as a Markov process, a simple analytic method is developed for calculating the electronic ionization rate per unit length for dielectrics. The effects of scattering from acoustic and optical phonons are neglected. The treatment obtains universal functions in recursive form, the theory depending on only two dimensionless energy ratios. A comparison of the present work with other numerical approaches indicates that the effect of scattering becomes important only when the electric potential energy drop in a mean free path for optical-phonon emission is less than about 25% of the ionization potential. A comparison with Monte Carlo results is also given for Teflon.
A comparison of observed and analytically derived remote sensing penetration depths for turbid water
NASA Technical Reports Server (NTRS)
Morris, W. D.; Usry, J. W.; Witte, W. G.; Whitlock, C. H.; Guraus, E. A.
1981-01-01
The depth to which sunlight will penetrate in turbid waters was investigated. The tests were conducted in water spanning a range of single scattering albedos and over a range of solar elevation angles. Two different techniques were used to determine the depth of light penetration. The results showed little change in the depth of sunlight penetration with changing solar elevation angle. A comparison of the penetration depths indicates that the best agreement between the two methods was achieved when the quasi-single scattering relationship was not corrected for solar angle. It is concluded that sunlight penetration is dependent on inherent water properties only.
Monte Carlo simulation of the radiant field produced by a multiple-lamp quartz heating system
NASA Technical Reports Server (NTRS)
Turner, Travis L.
1991-01-01
A method is developed for predicting the radiant heat flux distribution produced by a reflected bank of tungsten-filament tubular-quartz radiant heaters. The method is correlated with experimental results from two cases, one consisting of a single lamp and a flat reflector and the other consisting of a single lamp and a parabolic reflector. The simulation methodology, computer implementation, and experimental procedures are discussed. Analytical refinements necessary for comparison with experiment are discussed and applied to a multilamp, common reflector heating system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coletti, Chiara, E-mail: chiara.coletti@studenti.u
During the firing of bricks, mineralogical and textural transformations produce an artificial aggregate characterised by significant porosity. Particularly as regards pore-size distribution and the interconnection model, porosity is an important parameter for evaluating and predicting the durability of bricks. The pore system is in fact the main element connecting building materials to their environment (especially in cases of aggressive weathering, e.g., salt crystallisation and freeze-thaw cycles) and determines their durability. Four industrial bricks with differing compositions and firing temperatures were analysed with "direct" and "indirect" techniques: traditional methods (mercury intrusion porosimetry, hydric tests, nitrogen adsorption) and new analytical approaches based on digital image reconstruction of 2D and 3D models (back-scattered electron imaging and computerised X-ray micro-tomography, respectively). The comparison of results from different analytical methods in the "overlapping ranges" of porosity, together with the careful reconstruction of a cumulative curve, allowed their specific limitations to be overcome and yielded better knowledge of the pore system of bricks. - Highlights: •Pore-size distribution and structure of the pore system in four commercial bricks •A multi-analytical approach combining "direct" and "indirect" techniques •Traditional methods vs. new approaches based on 2D/3D digital image reconstruction •The use of "overlapping ranges" to overcome the limitations of various techniques.
Resolution of plasma sample mix-ups through comparison of patient antibody patterns to E. coli.
Vetter, Beatrice N; Orlowski, Vanessa; Schüpbach, Jörg; Böni, Jürg; Rühe, Bettina; Huder, Jon B
2015-12-01
Accidental sample mix-ups and the need for their swift resolution are a challenge faced by every analytical laboratory. To this end, we developed a simple immunoblot-based method, making use of a patient's characteristic plasma antibody profile against Escherichia coli (E. coli) proteins. Nitrocellulose strips of size-separated proteins from E. coli whole-cell lysates were incubated with patient plasma and visualised with an enzyme-coupled secondary antibody and substrate. Plasma samples of 20 random patients, as well as five longitudinal samples of three patients, were analysed for antibody band patterns, to evaluate uniqueness and consistency over time, respectively. For sample mix-ups, antibody band patterns of questionable samples were compared with samples of known identity. Comparison of anti-E. coli antibody patterns of 20 random patients showed a unique antibody profile for each patient. Antibody profiles remained consistent over time, as shown for three patients over several years. Three example cases demonstrate the use of this methodology in mis-labelling or mis-pipetting incidents. Our simple method for resolving plasma sample mix-ups between non-related individuals can be performed with basic laboratory equipment and thus can easily be adopted by analytical laboratories. Copyright © 2015 Elsevier B.V. All rights reserved.
Response of the Alliance 1 Proof-of-Concept Airplane Under Gust Loads
NASA Technical Reports Server (NTRS)
Naser, A. S.; Pototzky, A. S.; Spain, C. V.
2001-01-01
This report presents the work performed by Lockheed Martin's Langley Program Office in support of NASA's Environmental Research Aircraft and Sensor Technology (ERAST) program. The primary purpose of this work was to develop and demonstrate a gust analysis method which accounts for the span-wise variation of gust velocity. This is important because these unmanned aircraft, having high aspect ratios and low wing loading, are very flexible and fly at low speeds. The main focus of the work was therefore to perform a two-dimensional Power Spectral Density (PSD) analysis of the Alliance 1 Proof-of-Concept Unmanned Aircraft. As of this writing, none of the aircraft described in this report have been constructed; they are concepts represented by analytical models. The process first involved the development of suitable structural and aeroelastic Finite Element Models (FEM). This was followed by development of a one-dimensional PSD gust analysis, and then the two-dimensional PSD analysis of the Alliance 1. For further validation and comparison, two additional analyses were performed: a two-dimensional PSD gust analysis of a simple MSC/NASTRAN example problem, and a one-dimensional discrete gust analysis of the Alliance 1. This report describes this process, shows the relevant comparisons between analytical methods, and discusses the physical meaning of the results.
Martinuzzo, Marta E; Duboscq, Cristina; Lopez, Marina S; Barrera, Luis H; Vinuales, Estela S; Ceresetto, Jose; Forastiero, Ricardo R; Oyhamburu, Jose
2018-06-01
The oral anticoagulant rivaroxaban does not need laboratory monitoring, but in some situations plasma level measurement is useful. The objective of this paper was to verify the analytical performance of, and compare, two rivaroxaban-calibrated anti-Xa assay/coagulometer systems using specific or other-brand calibrators. In 59 samples drawn at trough or peak from patients taking rivaroxaban, plasma levels were measured by HemosIL Liquid Anti-Xa on an ACL TOP 300/500 and by STA Liquid Anti-Xa on a TCoag Destiny Plus. HemosIL and STA rivaroxaban calibrators and controls were used. CLSI guideline procedures EP15-A3 for precision and trueness, EP6 for linearity, and EP9 for method comparison were followed. The within-run and total coefficients of variation (CVR and CVWL, respectively) of plasma rivaroxaban were < 4.2% and < 4.85%, and bias was < 7.4% and < 6.5%, for the HemosIL-ACL TOP and STA-Destiny systems, respectively. Linearity was verified from 8 to 525 ng/mL. Deming regression for the method comparison gave R values of 0.963, 0.968 and 0.982, with a mean CV of 13.3% when using different systems and calibrations. The analytical performance for plasma rivaroxaban was acceptable on both systems, and results from the reagent/coagulometer systems are comparable even when calibrating with material from a different brand.
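Deming regression, used above for the method comparison, accounts for measurement error in both variables rather than only in y. A minimal sketch follows; the paired concentration values are invented, and lam is the assumed error-variance ratio:

```python
import math

def deming(x, y, lam=1.0):
    """Deming regression slope and intercept; lam is the assumed ratio of the
    y-error variance to the x-error variance (1.0 = orthogonal regression)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    slope = ((syy - lam * sxx
              + math.sqrt((syy - lam * sxx) ** 2 + 4.0 * lam * sxy ** 2))
             / (2.0 * sxy))
    intercept = my - slope * mx
    return slope, intercept

# Invented paired rivaroxaban levels (ng/mL) from two hypothetical systems
sys_a = [10.0, 55.0, 120.0, 260.0, 510.0]
sys_b = [11.2, 53.1, 124.5, 255.0, 518.0]
slope, intercept = deming(sys_a, sys_b)
```

A slope near 1 and an intercept near 0 indicate that the two systems are interchangeable over the tested range, which is the conclusion the abstract draws.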
Piešťanský, Juraj; Maráková, Katarína; Kovaľ, Marián; Mikuš, Peter
2014-09-05
The advanced two-dimensional isotachophoresis (ITP)-capillary zone electrophoresis (CZE) method hyphenated with tandem mass spectrometry (MS/MS, here triple quadrupole, QqQ) was developed in this work to demonstrate the analytical potential of this approach in the analysis of drugs in multicomponent ionic matrices. Pheniramine (PHM), phenylephrine (PHE), paracetamol (PCM) and their potential metabolic products were taken for analysis by the ITP-CZE-ESI-QqQ technique working in a hydrodynamically closed CE separation system, and a comparison with the conventional (hydrodynamically open) CZE-ESI-QqQ technique was made. The ITP-CZE-ESI-QqQ method was favorable in terms of obtainable selectivity (due to highly effective heart-cut analysis), concentration limits of detection (LOD at pg mL-1 levels due to enhanced sample load capacity and ITP preconcentration), sample handling (on-line sample pretreatment, i.e. clean-up, preconcentration, preseparation) and, thereby, possibilities for future automation and miniaturization. On the other hand, this experimental arrangement, in contrast to the CZE-ESI-QqQ arrangement supported by an electroosmotic flow, is principally limited to the analysis of uniformly (i.e. positively or negatively) charged analytes in one run, without the possibility of analyzing neutral compounds (here, PCM and the neutral or acidic metabolites of the drugs had to be excluded from the analysis). Hence, these general characteristics should be considered when choosing a proper analytical CE-MS approach for a given biomedical application. The analytical potential of the ITP-CZE-ESI-QqQ method was demonstrated by the real-time profiles of the excreted target drugs and metabolite (PHM, PHE, M-PHM) in human urine after administration of one dose of Theraflu(®) to volunteers. Copyright © 2014 Elsevier B.V. All rights reserved.
Reconstruction of sound source signal by analytical passive TR in the environment with airflow
NASA Astrophysics Data System (ADS)
Wei, Long; Li, Min; Yang, Debin; Niu, Feng; Zeng, Wu
2017-03-01
In the acoustic design of air vehicles, the time-domain signals of noise sources on the surface of air vehicles can serve as data support to reveal the noise source generation mechanism, analyze acoustic fatigue, and take measures for noise insulation and reduction. To rapidly reconstruct the time-domain sound source signals in an environment with flow, a method combining the analytical passive time reversal mirror (AP-TR) with a shear flow correction is proposed. In this method, the negative influence of flow on sound wave propagation is suppressed by the shear flow correction, yielding the corrected acoustic propagation time delay and path. The corrected time delay and path, together with the microphone array signals, are then fed to the AP-TR, reconstructing more accurate sound source signals in the environment with airflow. As an analytical method, AP-TR offers a supplementary way in 3D space to reconstruct the sound source signal in an environment with airflow, instead of the numerical TR. Experiments on the reconstruction of the sound source signals of a pair of loudspeakers were conducted in an anechoic wind tunnel with subsonic airflow to validate the effectiveness and advantages of the proposed method. Moreover, a theoretical and experimental comparison between the AP-TR and time-domain beamforming in reconstructing the sound source signal is also discussed.
TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schuemann, J; Grassberger, C; Paganetti, H
2014-06-15
Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung, and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field-specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated and the root mean square differences (RMSD), average range difference (ARD) and average distal dose degradation (ADD), the distance between the distal positions of the 80% and 20% dose levels (R80-R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1-2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain).
However, we recommend treatment plan verification using Monte Carlo simulations for patients with complex geometries.
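The range metrics compared above (e.g. R90, RMSD) are straightforward to extract from sampled depth-dose curves. The sketch below uses an idealized distal falloff, not any TOPAS output, to illustrate one way of locating the distal 90% dose position by linear interpolation and computing an RMSD between two curves:

```python
import numpy as np

def distal_range(depth, dose, level=0.9):
    """Distal depth at which dose falls to `level` * max, via linear interpolation.
    Scans from the deep end toward the surface to find the distal crossing."""
    d = np.asarray(dose, dtype=float)
    z = np.asarray(depth, dtype=float)
    thr = level * d.max()
    for i in range(len(d) - 1, 0, -1):
        if d[i] < thr <= d[i - 1]:
            # interpolate linearly between the bracketing samples
            frac = (thr - d[i]) / (d[i - 1] - d[i])
            return z[i] - frac * (z[i] - z[i - 1])
    return None

def rmsd(a, b):
    """Root mean square difference between two equally sampled curves."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Idealized distal edge (depth in cm, relative dose)
depth = [10.0, 10.5, 11.0, 11.5, 12.0]
dose  = [1.00, 0.95, 0.80, 0.40, 0.05]
r90 = distal_range(depth, dose, 0.9)   # crossing lies between 10.5 and 11.0 cm
```

Comparing `r90` from an analytical curve against one from a Monte Carlo curve gives the range difference discussed in the abstract.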
Hill, Ryan C; Oman, Trent J; Wang, Xiujuan; Shan, Guomin; Schafer, Barry; Herman, Rod A; Tobias, Rowel; Shippar, Jeff; Malayappan, Bhaskar; Sheng, Li; Xu, Austin; Bradshaw, Jason
2017-07-12
As part of the regulatory approval process in Europe, comparison of endogenous soybean allergen levels between genetically engineered (GE) and non-GE plants has been requested. A quantitative multiplex analytical method using tandem mass spectrometry was developed and validated to measure 10 potential soybean allergens from soybean seed. The analytical method was implemented at six laboratories to demonstrate the robustness of the method and further applied to three soybean field studies across multiple growing seasons (including 21 non-GE soybean varieties) to assess the natural variation of allergen levels. The results show environmental factors contribute more than genetic factors to the large variation in allergen abundance (2- to 50-fold between environmental replicates) as well as a large contribution of Gly m 5 and Gly m 6 to the total allergen profile, calling into question the scientific rationale for measurement of endogenous allergen levels between GE and non-GE varieties in the safety assessment.
A model and numerical method for compressible flows with capillary effects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidmayer, Kevin, E-mail: kevin.schmidmayer@univ-amu.fr; Petitpas, Fabien, E-mail: fabien.petitpas@univ-amu.fr; Daniel, Eric, E-mail: eric.daniel@univ-amu.fr
2017-04-01
A new model for interface problems with capillary effects in compressible fluids is presented together with a specific numerical method to treat capillary flows and pressure waves propagation. This new multiphase model is in agreement with physical principles of conservation and respects the second law of thermodynamics. A new numerical method is also proposed where the global system of equations is split into several submodels. Each submodel is hyperbolic or weakly hyperbolic and can be solved with an adequate numerical method. This method is tested and validated thanks to comparisons with analytical solutions (Laplace law) and with experimental results on droplet breakup induced by a shock wave.
Xu, Jing; Liu, Xiaofei; Wang, Yutian
2016-08-05
Parallel factor analysis is a widely used method for extracting qualitative and quantitative information about the analyte of interest from fluorescence excitation-emission matrices containing unknown components. Large-amplitude scattering, however, distorts the results of parallel factor analysis. Many methods for eliminating scattering have been proposed, each with its own advantages and disadvantages. The combination of symmetrical subtraction and interpolated values is discussed here, where "combination" refers both to combining results and to combining methods. Nine methods were used for comparison. The results show that combining results yields better concentration predictions for all components. Copyright © 2016 Elsevier B.V. All rights reserved.
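Interpolation-based scatter removal of the kind compared above is often implemented by masking the scatter band of an excitation-emission matrix (EEM) and filling it with values interpolated from neighboring emission wavelengths before running PARAFAC. A minimal sketch with a synthetic EEM and a hypothetical 10 nm half-band, assuming first-order Rayleigh scatter where emission ≈ excitation:

```python
import numpy as np

def interpolate_scatter(eem, ex, em, width=10.0):
    """Replace first-order Rayleigh scatter (em close to ex) in an EEM with
    values linearly interpolated along each emission row.
    `eem` has shape (len(ex), len(em)); `width` is the half-band in nm."""
    out = eem.astype(float).copy()
    for i, x in enumerate(ex):
        mask = np.abs(em - x) <= width            # points inside the scatter band
        if mask.any() and (~mask).sum() >= 2:
            out[i, mask] = np.interp(em[mask], em[~mask], out[i, ~mask])
    return out

ex = np.array([300.0, 350.0])                     # excitation wavelengths (nm)
em = np.linspace(280.0, 420.0, 15)                # emission wavelengths (nm)
eem = np.ones((2, em.size))                       # flat synthetic signal
eem[0, np.abs(em - 300.0) <= 10.0] = 50.0         # artificial scatter ridge
clean = interpolate_scatter(eem, ex, em)          # ridge replaced by ~1.0
```

Real pre-processing would also handle second-order scatter (em ≈ 2·ex) the same way; the band width here is purely illustrative.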
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fisher, G.E.; Huls, T.A.
1970-10-01
The Saltzman, Phenoldisulfonic Acid and Nondispersive Infrared methods have been compared for the determination of oxides of nitrogen in automobile exhaust. The main purpose of this investigation was to determine whether the Nondispersive Infrared method could be used as a possible replacement for the Saltzman method. Results show that the Nondispersive Infrared analyzer can be used to measure NO/sub x/ in exhaust gases with advantages over both the Saltzman and Phenoldisulfonic Acid methods. These advantages include simplicity, speed, less complicated analytical technique, and the fact that it is better adapted to be carried out by technicians at the test site.
Le, Minh Uyen Thi; Son, Jin Gyeong; Shon, Hyun Kyoung; Park, Jeong Hyang; Lee, Sung Bae; Lee, Tae Geol
2018-03-30
Time-of-flight secondary ion mass spectrometry (ToF-SIMS) imaging elucidates molecular distributions in tissue sections, providing useful information about the metabolic pathways linked to diseases. However, delocalization of the analytes and inadequate tissue adherence during sample preparation can reduce the quality, reliability, and spatial resolution of ToF-SIMS images. For these reasons, ToF-SIMS imaging requires a rigorous sample preparation method in order to preserve the natural state of the tissues. The traditional thaw-mounting method is particularly vulnerable to altered distributions of the analytes due to thermal effects, as well as to tissue shrinkage. In the present study, the authors compared different tissue-mounting methods, including the thaw-mounting method. The authors used conductive tape as the tissue-mounting material on the substrate because it does not require heat from the finger for the tissue section to adhere to the substrate and can reduce charge accumulation during data acquisition. With the conductive-tape sampling method, they were able to acquire reproducible tissue sections and high-quality images without redistribution of the molecules. The authors were also successful in preserving the natural states and chemical distributions of fat metabolites such as diacylglycerol and fatty acids by using tape-supported sampling in microRNA-14 (miR-14) deleted Drosophila models. The method highlighted here improves the accuracy of mass spectrometric imaging of tissue samples.
Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming
2014-10-24
An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method could effectively avoid friction torque and additional inertial moment existing in conventional approaches. Curved surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space. The comparison with conventional modeling method shows that it helps to improve the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, and thus the rotor orientation can be computed from the measured results and analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation and the value of magnetic flux density. The experimental results show that the proposed method can measure the rotor orientation precisely, and the measurement accuracy could be improved by the novel 3D magnet array. The study result could be used for real-time motion control of PM spherical actuators.
NASA Technical Reports Server (NTRS)
DeChant, Lawrence Justin
1998-01-01
In spite of rapid advances in both scalar and parallel computational tools, the large number of variables involved in both design and inverse problems makes the use of sophisticated fluid flow models impractical. With this restriction, it is concluded that an important family of methods for mathematical/computational development are reduced or approximate fluid flow models. In this study a combined perturbation/numerical modeling methodology is developed which provides a rigorously derived family of solutions. The mathematical model is computationally more efficient than classical boundary layer methods but provides important two-dimensional information not available using quasi-1-d approaches. An additional strength of the current methodology is its ability to locally predict static pressure fields in a manner analogous to more sophisticated parabolized Navier-Stokes (PNS) formulations. To resolve singular behavior, the model utilizes classical analytical solution techniques. Hence, analytical methods have been combined with efficient numerical methods to yield an efficient hybrid fluid flow model. In particular, the main objective of this research has been to develop a system of analytical and numerical ejector/mixer nozzle models which require minimal empirical input. A computer code, DREA (Differential Reduced Ejector/mixer Analysis), has been developed with the ability to run sufficiently fast that it may be used either as a subroutine or called by a design optimization routine. The models are of direct use to the High Speed Civil Transport Program (a joint government/industry project seeking to develop an economically viable U.S. commercial supersonic transport vehicle) and are currently being adopted by both NASA and industry. Experimental validation of these models is provided by comparison to results obtained from the open literature and Limited Exclusive Right Distribution (LERD) sources, as well as dedicated experiments performed at Texas A&M.
These experiments were performed using a hydraulic/gas flow analog. Results of comparisons of DREA computations with experimental data, which include entrainment, thrust, and local profile information, are overall good. Computational time studies indicate that DREA provides considerably more information at a lower computational cost than contemporary ejector nozzle design models. Finally, physical limitations of the method, deviations from experimental data, potential improvements and alternative formulations are described. This report represents closure to the NASA Graduate Researchers Program. Versions of the DREA code and a user's guide may be obtained from the NASA Lewis Research Center.
Comparison of three methods for wind turbine capacity factor estimation.
Ditkovich, Y; Kuperman, A
2014-01-01
Three approaches to calculating the capacity factor of fixed-speed wind turbines are reviewed and compared using a case study. The first, "quasi-exact", approach utilizes discrete raw wind data (in histogram form) and the manufacturer-provided turbine power curve (also in discrete form) to numerically calculate the capacity factor. The second, "analytic", approach employs a continuous probability distribution function fitted to the wind data, together with a continuous turbine power curve resulting from double polynomial fitting of the manufacturer-provided power curve data. The latter approach, while being an approximation, can be solved analytically, thus providing valuable insight into the aspects affecting the capacity factor. Moreover, several other merits of wind turbine performance may be derived based on the analytical approach. The third, "approximate", approach, valid for Rayleigh winds only, employs a nonlinear approximation of the capacity factor versus average wind speed curve, requiring only the rated power and rotor diameter of the turbine. It is shown that the results obtained by employing the three approaches are very close, reinforcing the validity of the analytically derived approximations, which may be used for wind turbine performance evaluation.
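The quasi-exact approach described above amounts to integrating a turbine power curve against a wind-speed distribution. A minimal Python sketch, assuming Rayleigh winds and an illustrative normalized cubic power curve (cut-in 3 m/s, rated 12 m/s, cut-out 25 m/s), not the turbine of the case study:

```python
import math

def rayleigh_pdf(v, v_avg):
    """Rayleigh wind-speed density parameterized by the mean speed v_avg."""
    s = v_avg * math.sqrt(2.0 / math.pi)   # scale parameter from the mean
    return (v / s**2) * math.exp(-v**2 / (2.0 * s**2))

def power_curve(v, v_in=3.0, v_rated=12.0, v_out=25.0):
    """Normalized power curve: cubic rise between cut-in and rated speed."""
    if v < v_in or v > v_out:
        return 0.0
    if v >= v_rated:
        return 1.0
    return (v**3 - v_in**3) / (v_rated**3 - v_in**3)

def capacity_factor(v_avg, dv=0.01):
    """Capacity factor as the expectation of normalized power over wind speed."""
    v, cf = 0.0, 0.0
    while v < 30.0:                        # integrate well past cut-out
        cf += power_curve(v) * rayleigh_pdf(v, v_avg) * dv
        v += dv
    return cf

cf = capacity_factor(7.0)                  # at a 7 m/s mean wind site
```

Replacing the Rayleigh density with a Weibull fit to measured data, and the cubic curve with the manufacturer's tabulated power curve, recovers the quasi-exact variant.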
NASA Astrophysics Data System (ADS)
Zhou, Xuhong; Cao, Liang; Chen, Y. Frank; Liu, Jiepeng; Li, Jiang
2016-01-01
The pre-stressed cable reinforced concrete truss (PCT) floor system is a relatively new floor structure, which can be applied to various long-span structures such as buildings, stadiums, and bridges. Due to its lighter mass and longer span, floor vibration can be a serviceability concern for such systems. In this paper, field testing and theoretical analysis of the PCT floor system were conducted. Specifically, heel-drop impact and walking tests were performed on the PCT floor system to capture the dynamic properties, including natural frequencies, mode shapes, damping ratios, and acceleration response. The PCT floor system was found to be a low-frequency (<10 Hz) and low-damping (damping ratio < 2 percent) structural system. Nevertheless, comparison of the experimental results with the AISC's limiting values indicates that the investigated PCT system exhibits satisfactory vibration perceptibility. The analytical solution obtained from the weighted residual method agrees well with the experimental results and thus validates the proposed analytical expression. Sensitivity studies using the analytical solution were also conducted to investigate the vibration performance of the PCT floor system.
Analytic Formulation and Numerical Implementation of an Acoustic Pressure Gradient Prediction
NASA Technical Reports Server (NTRS)
Lee, Seongkyu; Brentner, Kenneth S.; Farassat, Fereidoun
2007-01-01
The scattering of rotor noise is an area that has received little attention over the years, yet the limited work that has been done has shown that both the directivity and intensity of the acoustic field may be significantly modified by the presence of scattering bodies. One of the inputs needed to compute the scattered acoustic field is the acoustic pressure gradient on a scattering surface. Two new analytical formulations of the acoustic pressure gradient have been developed and implemented in the PSU-WOPWOP rotor noise prediction code. These formulations are presented in this paper. The first formulation is derived by taking the gradient of Farassat's retarded-time Formulation 1A. Although this formulation is relatively simple, it requires numerical time differentiation of the acoustic integrals. In the second formulation, the time differentiation is taken inside the integrals analytically. The acoustic pressure gradient predicted by these new formulations is validated through comparison with the acoustic pressure gradient determined by a purely numerical approach for two model rotors. The agreement between the analytic formulations and the numerical method is excellent for both stationary and moving observer cases.
A shipboard comparison of analytic methods for ballast water compliance monitoring
NASA Astrophysics Data System (ADS)
Bradie, Johanna; Broeg, Katja; Gianoli, Claudio; He, Jianjun; Heitmüller, Susanne; Curto, Alberto Lo; Nakata, Akiko; Rolke, Manfred; Schillak, Lothar; Stehouwer, Peter; Vanden Byllaardt, Julie; Veldhuis, Marcel; Welschmeyer, Nick; Younan, Lawrence; Zaake, André; Bailey, Sarah
2018-03-01
Promising approaches for indicative analysis of ballast water samples have been developed that require study in the field to examine their utility for determining compliance with the International Convention for the Control and Management of Ships' Ballast Water and Sediments. To address this gap, a voyage was undertaken on board the RV Meteor, sailing the North Atlantic Ocean from Mindelo (Cape Verde) to Hamburg (Germany) during June 4-15, 2015. Trials were conducted on local sea water taken up by the ship's ballast system at multiple locations along the trip, including open ocean, North Sea, and coastal water, to evaluate a number of analytic methods that measure the numeric concentration or biomass of viable organisms according to two size categories (≥ 50 μm in minimum dimension: 7 techniques, ≥ 10 μm and < 50 μm: 9 techniques). Water samples were analyzed in parallel to determine whether results were similar between methods and whether rapid, indicative methods offer comparable results to standard, time- and labor-intensive detailed methods (e.g. microscopy) and high-end scientific approaches (e.g. flow cytometry). Several promising indicative methods were identified that showed high correlation with microscopy, but allow much quicker processing and require less expert knowledge. This study is the first to concurrently use a large number of analytic tools to examine a variety of ballast water samples on board an operational ship in the field. Results are useful to identify the merits of each method and can serve as a basis for further improvement and development of tools and methodologies for ballast water compliance monitoring.
Horbowy, Jan; Tomczak, Maciej T
2017-01-01
Biomass reconstructions to pre-assessment periods for commercially important and exploitable fish species are important tools for understanding long-term processes and fluctuations at the stock and ecosystem levels. For some stocks, only fisheries statistics and fishery-dependent data are available for periods before surveys were conducted. Methods for the backward extension of analytical biomass assessments to years for which only total catch volumes are available were developed and tested in this paper. Two of the approaches developed apply the concept of the surplus production rate (SPR), which is shown to be stock-density dependent if stock dynamics are governed by classical stock-production models. The other approach uses a modified form of the Schaefer production model that allows for backward biomass estimation. The performance of the methods was tested on the Arctic cod and North Sea herring stocks, for which analytical biomass estimates extend back to the late 1940s. Next, the methods were applied to extend biomass estimates of the North-east Atlantic mackerel from the 1970s (when analytical biomass estimates become available) back to the 1950s, for which only total catch volumes were available. For comparison, a method employing a constant SPR, estimated as an average of the observed values, was also applied. The analyses showed that the performance of the methods is stock and data specific; methods that work well for one stock may fail for others. The constant-SPR method is not recommended in cases where the SPR is relatively high and the catch volumes in the reconstructed period are low.
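Backward extension based on a Schaefer-type model can be illustrated by inverting the forward surplus-production update to step biomass back in time. A minimal sketch with hypothetical parameters (r, K, catch), not the values fitted in the paper; bisection works here because the forward map is increasing for biomass below carrying capacity:

```python
def schaefer_forward(b, r, k, catch):
    """One forward step of the Schaefer surplus-production model."""
    return b + r * b * (1.0 - b / k) - catch

def schaefer_backward(b_next, r, k, catch, lo=1e-6):
    """Solve schaefer_forward(b, ...) == b_next for b by bisection."""
    hi = k   # search below carrying capacity, where the forward map is monotone
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if schaefer_forward(mid, r, k, catch) < b_next:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round-trip check with hypothetical parameters
r, k, catch = 0.5, 1000.0, 50.0
b0 = 400.0
b1 = schaefer_forward(b0, r, k, catch)       # 400 + 120 - 50 = 470
b0_rec = schaefer_backward(b1, r, k, catch)  # recovers ~400
```

Chaining `schaefer_backward` over a historical catch series steps the biomass estimate backward one year at a time, which is the essence of the reconstruction idea.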
Wang, Jin; Liu, Laping; Shi, Ludi; Yi, Tingquan; Wen, Yuxia; Wang, Juanli; Liu, Shuhui
2017-01-01
For the analysis of edible oils, saponification is well known as a useful method for eliminating oil matrices. The conventional approach is conducted with alcoholic alkali; it consumes a large volume of organic solvents and impedes the retrieval of analytes by microextraction. In this study, a low-organic-solvent-consuming method has been developed for the analysis of benzo[a]pyrene in edible oils by high-performance liquid chromatography with fluorescence detection. Sample treatment involves aqueous alkaline saponification, assisted by a phase-transfer catalyst, and selective in situ extraction of the analyte with a supramolecular solvent. Comparison of the chromatograms of the oil extracts obtained by different microextraction methods showed that the supramolecular solvent has a better clean-up effect for the unsaponifiable matter from oil matrices. The method offered excellent linearity over a range of 0.03-5.0 ng mL-1 (r > 0.999). Recovery rates varied from 94 to 102% (RSDs < 5.0%). The detection limit and quantification limit were 0.06 and 0.19 μg kg-1, respectively. The proposed method was applied for the analysis of 52 edible oils collected online in China; the analyte contents of 23 tested oil samples exceeded the maximum limit of 2 μg kg-1 for benzo[a]pyrene set by the Commission Regulation of the European Union. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Hercegová, Andrea; Dömötörová, Milena; Kruzlicová, Dása; Matisová, Eva
2006-05-01
Four sample preparation techniques were compared for the ultratrace analysis of pesticide residues in baby food: (a) modified Schenck's method based on ACN extraction with SPE cleaning; (b) quick, easy, cheap, effective, rugged, and safe (QuEChERS) method based on ACN extraction and dispersive SPE; (c) modified QuEChERS method which utilizes column-based SPE instead of dispersive SPE; and (d) matrix solid phase dispersion (MSPD). The methods were combined with fast gas chromatographic-mass spectrometric analysis. The effectiveness of clean-up of the final extract was determined by comparison of the chromatograms obtained. Time consumption, laboriousness, demands on glassware and working place, and consumption of chemicals, especially solvents, increase in the following order QuEChERS < modified QuEChERS < MSPD < modified Schenck's method. All methods offer satisfactory analytical characteristics at the concentration levels of 5, 10, and 100 microg/kg in terms of recoveries and repeatability. Recoveries obtained for the modified QuEChERS method were lower than for the original QuEChERS. In general the best LOQs were obtained for the modified Schenck's method. Modified QuEChERS method provides 21-72% better LOQs than the original method.
Enzinger, Ewald; Morrison, Geoffrey Stewart
2017-08-01
In a 2012 case in New South Wales, Australia, the identity of a speaker on several audio recordings was in question. Forensic voice comparison testimony was presented based on an auditory-acoustic-phonetic-spectrographic analysis. No empirical demonstration of the validity and reliability of the analytical methodology was presented. Unlike the admissibility standards in some other jurisdictions (e.g., US Federal Rule of Evidence 702 and the Daubert criteria, or England & Wales Criminal Practice Directions 19A), Australia's Unified Evidence Acts do not require demonstration of the validity and reliability of analytical methods and their implementation before testimony based upon them is presented in court. The present paper reports on empirical tests of the performance of an acoustic-phonetic-statistical forensic voice comparison system which exploited the same features as were the focus of the auditory-acoustic-phonetic-spectrographic analysis in the case, i.e., second-formant (F2) trajectories in /o/ tokens and mean fundamental frequency (f0). The tests were conducted under conditions similar to those in the case. The performance of the acoustic-phonetic-statistical system was very poor compared to that of an automatic system. Copyright © 2017 Elsevier B.V. All rights reserved.
Perich, C; Ricós, C; Alvarez, V; Biosca, C; Boned, B; Cava, F; Doménech, M V; Fernández-Calle, P; Fernández-Fernández, P; García-Lario, J V; Minchinela, J; Simón, M; Jansen, R
2014-05-15
Current external quality assurance schemes have been classified into six categories, according to their ability to verify the degree of standardization of the participating measurement procedures. SKML (Netherlands) is a Category 1 EQA scheme (commutable EQA materials with values assigned by reference methods), whereas SEQC (Spain) is a Category 5 scheme (replicate analyses of non-commutable materials with no values assigned by reference methods). The results obtained by a group of Spanish laboratories participating in a pilot study organized by SKML are examined, with the aim of pointing out the improvements over our current scheme that a Category 1 program could provide. Imprecision and bias are calculated for each analyte and laboratory, and compared with quality specifications derived from biological variation. Of the 26 analytes studied, 9 had results comparable with those from reference methods, and 10 analytes did not have comparable results. The remaining 7 analytes measured did not have available reference method values, and in these cases, comparison with the peer group showed comparable results. The reasons for disagreement in the second group can be summarized as: use of non-standard methods (IFCC without exogenous pyridoxal phosphate for AST and ALT, Jaffé kinetic at low-normal creatinine concentrations and with eGFR); non-commutability of the reference material used to assign values to the routine calibrator (calcium, magnesium and sodium); use of reference materials without established commutability instead of reference methods for AST and GGT, and lack of a systematic effort by manufacturers to harmonize results. Results obtained in this work demonstrate the important role of external quality assurance programs using commutable materials with values assigned by reference methods to correctly monitor the standardization of laboratory tests with consequent minimization of risk to patients. Copyright © 2013 Elsevier B.V. All rights reserved.
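The quality specifications derived from biological variation mentioned above follow well-known formulas: desirable imprecision is at most half the within-subject variation, and desirable bias at most a quarter of the combined within- and between-subject variation. A minimal sketch with illustrative biological-variation values, not those of any analyte in the study:

```python
import math

def desirable_specs(cv_i, cv_g):
    """Desirable analytical performance specifications derived from
    within-subject (cv_i) and between-subject (cv_g) biological variation,
    following the widely used Fraser formulation (all values in %)."""
    imprecision = 0.5 * cv_i
    bias = 0.25 * math.sqrt(cv_i**2 + cv_g**2)
    total_error = 1.65 * imprecision + bias
    return imprecision, bias, total_error

# Illustrative biological-variation values (not from the study)
cv_i, cv_g = 6.0, 14.0
imp, bias, te = desirable_specs(cv_i, cv_g)   # 3.0%, ~3.81%, ~8.76%
```

Comparing each laboratory's observed imprecision and bias against `imp` and `bias` is the acceptance test applied per analyte in schemes of this kind.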
Lim, Chee Wei; Tai, Siew Hoon; Lee, Lin Min; Chan, Sheot Harn
2012-07-01
The current food crisis demands unambiguous determination of mycotoxin contamination in staple foods to achieve safer food for consumption. This paper describes the first accurate LC-MS/MS method developed to analyze trichothecenes in grains by applying multiple reaction monitoring (MRM) transition and MS(3) quantitation strategies in tandem. The trichothecenes are nivalenol, deoxynivalenol, deoxynivalenol-3-glucoside, fusarenon X, 3-acetyl-deoxynivalenol, 15-acetyldeoxynivalenol, diacetoxyscirpenol, and HT-2 and T-2 toxins. Acetic acid and ammonium acetate were used to convert the analytes into their respective acetate adducts and ammonium adducts under negative and positive MS polarity conditions, respectively. The mycotoxins were separated by reversed-phase LC in a 13.5-min run, ionized using electrospray ionization, and detected by tandem mass spectrometry. Analyte-specific mass-to-charge (m/z) ratios were used to perform quantitation under MRM transition and MS(3) (linear ion trap) modes. Three experiments were made for each quantitation mode and matrix in batches over 6 days for recovery studies. The matrix effect was investigated at concentration levels of 20, 40, 80, 120, 160, and 200 μg kg(-1) (n = 3) in 5 g corn flour and rice flour. Extraction with acetonitrile provided a good overall recovery range of 90-108% (n = 3) at three levels of spiking concentration of 40, 80, and 120 μg kg(-1). A quantitation limit of 2-6 μg kg(-1) was achieved by applying an MRM transition quantitation strategy. Under MS(3) mode, a quantitation limit of 4-10 μg kg(-1) was achieved. Relative standard deviations of 2-10% and 2-11% were reported for MRM transition and MS(3) quantitation, respectively.
The successful utilization of MS(3) enabled accurate analyte fragmentation pattern matching and its quantitation, leading to the development of analytical methods in fields that demand both analyte specificity and fragmentation fingerprint-matching capabilities that are unavailable under MRM transition.
Characterization of Compton-scatter imaging with an analytical simulation method
Characterization of Compton-scatter imaging with an analytical simulation method
NASA Astrophysics Data System (ADS)
Jones, Kevin C.; Redler, Gage; Templeton, Alistair; Bernard, Damian; Turian, Julius V.; Chu, James C. H.
2018-01-01
By collimating the photons scattered when a megavoltage therapy beam interacts with the patient, a Compton-scatter image may be formed without the delivery of an extra dose. To characterize and assess the potential of the technique, an analytical model for simulating scatter images was developed and validated against Monte Carlo (MC). For three phantoms, the scatter images collected during irradiation with a 6 MV flattening-filter-free therapy beam were simulated. Images, profiles, and spectra were compared for different phantoms and different irradiation angles. The proposed analytical method simulates accurate scatter images up to 1000 times faster than MC. Minor differences between MC and analytical simulated images are attributed to limitations in the isotropic superposition/convolution algorithm used to analytically model multiple-order scattering. For a detector placed at 90° relative to the treatment beam, the simulated scattered photon energy spectrum peaks at 140–220 keV, and 40–50% of the photons are the result of multiple scattering. The high energy photons originate at the beam entrance. Increasing the angle between source and detector increases the average energy of the collected photons and decreases the relative contribution of multiple scattered photons. Multiple scattered photons cause blurring in the image. For an ideal 5 mm diameter pinhole collimator placed 18.5 cm from the isocenter, 10 cGy of deposited dose (2 Hz imaging rate for 1200 MU min−1 treatment delivery) is expected to generate an average 1000 photons per mm2 at the detector. For the considered lung tumor CT phantom, the contrast is high enough to clearly identify the lung tumor in the scatter image. Increasing the treatment beam size perpendicular to the detector plane decreases the contrast, although the scatter subject contrast is expected to be greater than the megavoltage transmission image contrast. 
With the analytical method, real-time tumor tracking may be possible through comparison of simulated and acquired patient images. PMID:29243663
Double power series method for approximating cosmological perturbations
NASA Astrophysics Data System (ADS)
Wren, Andrew J.; Malik, Karim A.
2017-04-01
We introduce a double power series method for finding approximate analytical solutions for systems of differential equations commonly found in cosmological perturbation theory. The method was set out, in a noncosmological context, by Feshchenko, Shkil' and Nikolenko (FSN) in 1966, and is applicable to cases where perturbations are on subhorizon scales. The FSN method is essentially an extension of the well known Wentzel-Kramers-Brillouin (WKB) method for finding approximate analytical solutions for ordinary differential equations. The FSN method we use is applicable well beyond perturbation theory to solve systems of ordinary differential equations, linear in the derivatives, that also depend on a small parameter, which here we take to be related to the inverse wave-number. We use the FSN method to find new approximate oscillating solutions in linear order cosmological perturbation theory for a flat radiation-matter universe. Together with this model's well-known growing and decaying Mészáros solutions, these oscillating modes provide a complete set of subhorizon approximations for the metric potential, radiation and matter perturbations. Comparison with numerical solutions of the perturbation equations shows that our approximations can be made accurate to within a typical error of 1%, or better. We also set out a heuristic method for error estimation. A Mathematica notebook which implements the double power series method is made available online.
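The FSN double power series method itself is implemented in the authors' Mathematica notebook, but the underlying WKB idea can be illustrated with a short self-contained sketch. The equation, the q(t) profile, and the value of the small parameter below are all invented for illustration; eps plays the role of the inverse wave-number mentioned in the abstract.

```python
import math

# Leading-order WKB approximation for  eps^2 * y'' + q(t) * y = 0
# with q(t) = (1 + t)^2, compared against a direct RK4 integration.
# All names and parameter values here are illustrative, not the paper's.

EPS = 0.05

def q(t):
    return (1.0 + t) ** 2

def wkb(t):
    # y ~ q^(-1/4) * cos(S(t)/eps), with S(t) = integral of sqrt(q) = t + t^2/2
    return q(t) ** -0.25 * math.cos((t + 0.5 * t * t) / EPS)

def rhs(t, y, v):
    return v, -q(t) * y / EPS ** 2

def rk4(t_end, dt=1e-4):
    # Integrate the exact ODE from WKB-consistent initial data y(0)=1, y'(0)=-1/2.
    t, y, v = 0.0, 1.0, -0.5
    while t < t_end - 1e-12:
        k1y, k1v = rhs(t, y, v)
        k2y, k2v = rhs(t + dt / 2, y + dt / 2 * k1y, v + dt / 2 * k1v)
        k3y, k3v = rhs(t + dt / 2, y + dt / 2 * k2y, v + dt / 2 * k2v)
        k4y, k4v = rhs(t + dt, y + dt * k3y, v + dt * k3v)
        y += dt / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
    return y

if __name__ == "__main__":
    err = abs(rk4(1.0) - wkb(1.0))
    print(f"|numeric - WKB| at t=1: {err:.4f}")
```

The approximation error scales with eps, mirroring the ~1% accuracy the authors report for their subhorizon expansions.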
Comparison of NMR simulations of porous media derived from analytical and voxelized representations.
Jin, Guodong; Torres-Verdín, Carlos; Toumelin, Emmanuel
2009-10-01
We develop and compare two formulations of the random-walk method, grain-based and voxel-based, to simulate the nuclear-magnetic-resonance (NMR) response of fluids contained in various models of porous media. The grain-based approach uses a spherical grain pack as input, where the solid surface is analytically defined without an approximation. In the voxel-based approach, the input is a computer-tomography or computer-generated image of reconstructed porous media. Implementation of the two approaches is largely the same, except for the representation of porous media. For comparison, both approaches are applied to various analytical and digitized models of porous media: isolated spherical pore, simple cubic packing of spheres, and random packings of monodisperse and polydisperse spheres. We find that spin magnetization decays much faster in the digitized models than in their analytical counterparts. The difference in decay rate relates to the overestimation of surface area due to the discretization of the sample; it cannot be eliminated even if the voxel size decreases. However, once considering the effect of surface-area increase in the simulation of surface relaxation, good quantitative agreement is found between the two approaches. Different grain or pore shapes entail different rates of increase of surface area, whereupon we emphasize that the value of the "surface-area-corrected" coefficient may not be universal. Using an example of X-ray-CT image of Fontainebleau rock sample, we show that voxel size has a significant effect on the calculated surface area and, therefore, on the numerically simulated magnetization response.
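The surface-area overestimation described above has a simple geometric origin: counting exposed voxel faces approximates the spherical average of |nx| + |ny| + |nz| over the surface normal, which is 3/2 of the true area, and refining the voxels does not remove the factor. The check below is an illustrative numerical sketch, not code from the paper.

```python
# Count exposed unit faces of a digitized sphere and compare with the
# analytical area 4*pi*r^2. The ratio tends to 3/2 regardless of voxel
# size, illustrating why the discretized NMR decay is faster. Radius
# value is illustrative.
import math

def voxel_surface_ratio(r_vox):
    n = int(r_vox) + 2  # half-width of the grid in voxels

    def inside(i, j, k):
        return i * i + j * j + k * k < r_vox * r_vox

    faces = 0
    rng = range(-n, n + 1)
    for i in rng:
        for j in rng:
            for k in rng:
                if not inside(i, j, k):
                    continue
                for di, dj, dk in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                    if not inside(i + di, j + dj, k + dk):
                        faces += 1  # one exposed unit face
    return faces / (4.0 * math.pi * r_vox ** 2)

if __name__ == "__main__":
    print(f"voxel/analytical area ratio: {voxel_surface_ratio(20):.3f}")
```

The ratio lands near 1.5, consistent with the paper's point that a "surface-area-corrected" relaxivity is needed, and that the correction depends on grain or pore shape.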
Škrbić, Biljana; Héberger, Károly; Durišić-Mladenović, Nataša
2013-10-01
Sum of ranking differences (SRD) was applied for comparing multianalyte results obtained by several analytical methods used in one or in different laboratories, i.e., for ranking the overall performances of the methods (or laboratories) in simultaneous determination of the same set of analytes. The data sets for testing of the SRD applicability contained the results reported during one of the proficiency tests (PTs) organized by the EU Reference Laboratory for Polycyclic Aromatic Hydrocarbons (EU-RL-PAH). In this way, the SRD was also tested as a discriminant method alternative to existing average performance scores used to compare multianalyte PT results. SRD should be used along with the z scores, the most commonly used PT performance statistics. SRD was further developed to handle identical rankings (ties) among laboratories. Two benchmark concentration series were selected as reference: (a) the assigned PAH concentrations (determined precisely beforehand by the EU-RL-PAH) and (b) the averages of all individual PAH concentrations determined by each laboratory. Ranking relative to the assigned values and also to the average (or median) values pointed to the laboratories with the most extreme results, as well as revealed groups of laboratories with similar overall performances. SRD reveals differences between methods or laboratories even if classical test(s) cannot. The ranking was validated using comparison of ranks by random numbers (a randomization test) and using sevenfold cross-validation, which highlighted the similarities among the (methods used in) laboratories. Principal component analysis and hierarchical cluster analysis justified the findings based on SRD ranking/grouping. If the PAH concentrations are row-scaled (i.e., z scores are analyzed as input for ranking), SRD can still be used for checking the normality of errors. Moreover, cross-validation of SRD on z scores groups the laboratories similarly. 
The SRD technique is general in nature, i.e., it can be applied to any experimental problem in which multianalyte results obtained either by several analytical procedures, analysts, instruments, or laboratories need to be compared.
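A minimal sketch of the SRD idea, assuming the simplest variant: analytes are ranked by the reference (e.g. assigned) values and by each laboratory's values, and SRD is the sum of absolute rank differences, with ties sharing average ranks. The concentration data below are invented.

```python
# Sum of ranking differences (SRD) against a reference ranking.
# A laboratory that orders the analytes exactly like the reference
# scores 0; maximally scrambled orderings score the maximum SRD.

def ranks(values):
    # Average ranks (1-based); tied values share the mean of their positions.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def srd(reference, lab):
    rr, rl = ranks(reference), ranks(lab)
    return sum(abs(a - b) for a, b in zip(rr, rl))

if __name__ == "__main__":
    assigned = [12.1, 3.4, 7.8, 0.9, 5.6]  # hypothetical PAH levels
    lab_a = [11.9, 3.6, 8.1, 1.0, 5.2]     # same ordering as assigned
    lab_b = [0.8, 6.0, 3.1, 12.5, 7.7]     # scrambled ordering
    print(srd(assigned, lab_a), srd(assigned, lab_b))
```

In the full method the observed SRD values are then compared against the distribution of SRDs of random rankings (the randomization test mentioned above).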
Wetherbee, Gregory A.; Latysh, Natalie E.; Greene, Shannon M.
2006-01-01
The U.S. Geological Survey (USGS) used five programs to provide external quality-assurance monitoring for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) and two programs to provide external quality-assurance monitoring for the NADP/Mercury Deposition Network (NADP/MDN) during 2004. An intersite-comparison program was used to estimate accuracy and precision of field-measured pH and specific-conductance. The variability and bias of NADP/NTN data attributed to field exposure, sample handling and shipping, and laboratory chemical analysis were estimated using the sample-handling evaluation (SHE), field-audit, and interlaboratory-comparison programs. Overall variability of NADP/NTN data was estimated using a collocated-sampler program. Variability and bias of NADP/MDN data attributed to field exposure, sample handling and shipping, and laboratory chemical analysis were estimated using a system-blank program and an interlaboratory-comparison program. In two intersite-comparison studies, approximately 89 percent of NADP/NTN site operators met the pH measurement accuracy goals, and 94.7 to 97.1 percent of NADP/NTN site operators met the accuracy goals for specific conductance. Field chemistry measurements were discontinued by NADP at the end of 2004. As a result, the USGS intersite-comparison program also was discontinued at the end of 2004. Variability and bias in NADP/NTN data due to sample handling and shipping were estimated from paired-sample concentration differences and specific conductance differences obtained for the SHE program. Median absolute errors (MAEs) equal to less than 3 percent were indicated for all measured analytes except potassium and hydrogen ion. Positive bias was indicated for most of the measured analytes except for calcium, hydrogen ion and specific conductance. 
Negative bias for hydrogen ion and specific conductance indicated loss of hydrogen ion and decreased specific conductance from contact of the sample with the collector bucket. Field-audit results for 2004 indicate dissolved analyte loss in more than one-half of NADP/NTN wet-deposition samples for all analytes except chloride. Concentrations of contaminants also were estimated from field-audit data. On the basis of 2004 field-audit results, at least 25 percent of the 2004 NADP/NTN concentrations for sodium, potassium, and chloride were lower than the maximum sodium, potassium, and chloride contamination likely to be found in 90 percent of the samples with 90-percent confidence. Variability and bias in NADP/NTN data attributed to chemical analysis by the NADP Central Analytical Laboratory (CAL) were comparable to the variability and bias estimated for other laboratories participating in the interlaboratory-comparison program for all analytes. Variability in NADP/NTN ammonium data evident in 2002-03 was reduced substantially during 2004. Sulfate, hydrogen-ion, and specific conductance data reported by CAL during 2004 were positively biased. A significant (α = 0.05) bias was identified for CAL sodium, potassium, ammonium, and nitrate data, but the absolute values of the median differences for these analytes were less than the method detection limits. No detections were reported for CAL analyses of deionized-water samples, indicating that contamination was not a problem for CAL. Control charts show that CAL data were within statistical control during at least 90 percent of 2004. Most 2004 CAL interlaboratory-comparison results for synthetic wet-deposition solutions were within ±10 percent of the most probable values (MPVs) for solution concentrations except for chloride, nitrate, sulfate, and specific conductance results from one sample in November and one specific conductance result in December. 
Overall variability of NADP/NTN wet-deposition measurements was estimated during water year 2004 by the median absolute errors for weekly wet-deposition sample concentrations and precipitation measurements for tw
NASA Astrophysics Data System (ADS)
Frolov, S. V.; Potlov, A. Yu.; Petrov, D. A.; Proskurin, S. G.
2017-03-01
A method of optical coherence tomography (OCT) structural image reconstruction using Monte Carlo simulations is described. The biological object is considered as a set of 3D elements, which allows simulation of media whose structure cannot be described analytically. Each voxel is characterized by its refractive index, anisotropy parameter, and scattering and absorption coefficients. B-scans of the inner structure are used to reconstruct a simulated image instead of an analytical representation of the boundary geometry. The Henyey-Greenstein scattering function, the Beer-Lambert-Bouguer law, and the Fresnel equations are used to describe photon transport. The efficiency of the described technique is checked by comparison of the simulated and experimentally acquired A-scans.
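The Henyey-Greenstein phase function mentioned above admits a standard closed-form inversion for sampling the scattering angle in Monte Carlo photon transport. The sketch below uses that textbook formula; the anisotropy value and sample count are illustrative, not taken from the paper.

```python
# Sample cos(theta) from the Henyey-Greenstein phase function with
# anisotropy parameter g, using the standard inverse-CDF formula.
import random

def sample_cos_theta(g, rng):
    xi = rng.random()
    if abs(g) < 1e-6:
        return 2.0 * xi - 1.0  # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - s * s) / (2.0 * g)

if __name__ == "__main__":
    rng = random.Random(0)
    g = 0.9  # strongly forward-peaked, typical of biological tissue
    n = 200_000
    mean = sum(sample_cos_theta(g, rng) for _ in range(n)) / n
    print(f"sample mean of cos(theta): {mean:.3f}  (expected ~{g})")
```

A quick sanity check on the sampler is that the sample mean of cos(theta) converges to g, the defining property of the anisotropy parameter.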
Correlation of analytical and experimental hot structure vibration results
NASA Technical Reports Server (NTRS)
Kehoe, Michael W.; Deaton, Vivian C.
1993-01-01
High surface temperatures and temperature gradients can affect the vibratory characteristics and stability of aircraft structures. Aircraft designers are relying more on finite-element model analysis methods to ensure sufficient vehicle structural dynamic stability throughout the desired flight envelope. Analysis codes that predict these thermal effects must be correlated and verified with experimental data. Experimental modal data for aluminum, titanium, and fiberglass plates heated at uniform, nonuniform, and transient heating conditions are presented. The data show the effect of heat on each plate's modal characteristics, a comparison of predicted and measured plate vibration frequencies, the measured modal damping, and the effect of modeling material property changes and thermal stresses on the accuracy of the analytical results at nonuniform and transient heating conditions.
NASA Technical Reports Server (NTRS)
Lyle, Karen H.
2015-01-01
Acceptance of new spacecraft structural architectures and concepts requires validated design methods to minimize the expense involved with technology demonstration via flight-testing. Hypersonic Inflatable Aerodynamic Decelerator (HIAD) architectures are attractive for spacecraft deceleration because they are lightweight, store compactly, and utilize the atmosphere to decelerate a spacecraft during entry. However, designers are hesitant to include these inflatable approaches for large payloads or spacecraft because of the lack of flight validation. This publication summarizes results comparing analytical results with test data for two concepts subjected to representative entry, static loading. The level of agreement and ability to predict the load distribution is considered sufficient to enable analytical predictions to be used in the design process.
The liquid fuel jet in subsonic crossflow
NASA Technical Reports Server (NTRS)
Nguyen, T. T.; Karagozian, A. R.
1990-01-01
An analytical/numerical model is described which predicts the behavior of nonreacting and reacting liquid jets injected transversely into subsonic cross flow. The compressible flowfield about the elliptical jet cross section is solved at various locations along the jet trajectory by analytical means for free-stream local Mach number perpendicular to jet cross section smaller than 0.3 and by numerical means for free-stream local Mach number perpendicular to jet cross section in the range 0.3-1.0. External and internal boundary layers along the jet cross section are solved by integral and numerical methods, and the mass losses due to boundary layer shedding, evaporation, and combustion are calculated and incorporated into the trajectory calculation. Comparison of predicted trajectories is made with limited experimental observations.
NASA Technical Reports Server (NTRS)
Kubala, A.; Black, D.; Szebehely, V.
1993-01-01
A comparison is made between the stability criteria of Hill and that of Laplace to determine the stability of outer planetary orbits encircling binary stars. The restricted, analytically determined results of Hill's method by Szebehely and coworkers and the general, numerically integrated results of Laplace's method by Graziani and Black (1981) are compared for varying values of the mass parameter mu. For mu = 0 to 0.15, the closest orbit (lower limit of radius) an outer planet in a binary system can have and still remain stable is determined by Hill's stability criterion. For mu greater than 0.15, the critical radius is determined by Laplace's stability criterion. It appears that the Graziani-Black stability criterion describes the critical orbit within a few percent for all values of mu.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frayce, D.; Khayat, R.E.; Derdouri, A.
The dual reciprocity boundary element method (DRBEM) is implemented to solve three-dimensional transient heat conduction problems in the presence of arbitrary sources, typically as these problems arise in materials processing. The DRBEM has a major advantage over conventional BEM, since it avoids the computation of volume integrals. These integrals stem from transient, nonlinear, and/or source terms. Thus there is no need to discretize the inner domain, since only a number of internal points are needed for the computation. The validity of the method is assessed upon comparison with results from benchmark problems where analytical solutions exist. There is generally good agreement. Comparison against finite element results is also favorable. Calculations are carried out in order to assess the influence of the number and location of internal nodes. The influence of the ratio of the numbers of internal to boundary nodes is also examined.
Comparison of analysis and flight test data for a drone aircraft with active flutter suppression
NASA Technical Reports Server (NTRS)
Newsom, J. R.; Pototzky, A. S.
1981-01-01
This paper presents a comparison of analysis and flight test data for a drone aircraft equipped with an active flutter suppression system. Emphasis is placed on the comparison of modal dampings and frequencies as a function of Mach number. Results are presented for both symmetric and antisymmetric motion with flutter suppression off. Only symmetric results are presented for flutter suppression on. Frequency response functions of the vehicle are presented from both flight test data and analysis. The analysis correlation is improved by using an empirical aerodynamic correction factor which is proportional to the ratio of experimental to analytical steady-state lift curve slope. In addition to presenting the mathematical models and a brief description of existing analytical techniques, an alternative analytical technique for obtaining closed-loop results is presented.
NASA Astrophysics Data System (ADS)
Swearingen, Michelle E.
2004-04-01
An analytic model, developed in cylindrical coordinates, is described for the scattering of a spherical wave off a semi-infinite right cylinder placed normal to a ground surface. The motivation for the research is to have a model with which one can simulate scattering from a single tree and which can be used as a fundamental element in a model for estimating the attenuation in a forest comprised of multiple tree trunks. Comparisons are made to the plane wave case, the transparent cylinder case, and the rigid and soft ground cases as a method of theoretically verifying the model for the contemplated range of model parameters. Agreement is regarded as excellent for these benchmark cases. Model sensitivity to five parameters is also explored. An experiment was performed to study the scattering from a cylinder normal to a ground surface. The data from the experiment are analyzed with a transfer function method to yield frequency and impulse responses, and calculations based on the analytic model are compared to the experimental data. Thesis advisor: David C. Swanson
NASA Astrophysics Data System (ADS)
Wang, Xin; Gao, Jun; Fan, Zhiguo; Roberts, Nicholas W.
2016-06-01
We present a computationally inexpensive analytical model for simulating celestial polarization patterns in variable conditions. We combine both the singularity theory of Berry et al (2004 New J. Phys. 6 162) and the intensity model of Perez et al (1993 Sol. Energy 50 235-245) such that our single model describes three key sets of data: (1) the overhead distribution of the degree of polarization as well as the existence of neutral points in the sky; (2) the change in sky polarization as a function of the turbidity of the atmosphere; and (3) sky polarization patterns as a function of wavelength, calculated in this work from the ultra-violet to the near infra-red. To verify the performance of our model we generate accurate reference data using a numerical radiative transfer model and statistical comparisons between these two methods demonstrate no significant difference in almost all situations. The development of our analytical model provides a novel method for efficiently calculating the overhead skylight polarization pattern. This provides a new tool of particular relevance for our understanding of animals that use the celestial polarization pattern as a source of visual information.
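As a much simpler baseline than the combined Berry/Perez model described above, the single-scattering Rayleigh sky gives the degree of linear polarization purely as a function of the angular distance gamma from the sun: DoP(gamma) = DoP_max * sin^2(gamma) / (1 + cos^2(gamma)). The sketch below implements only that classical formula; the DoP_max value, which crudely stands in for turbidity and multiple-scattering losses, is an assumed illustrative number.

```python
# Single-scattering Rayleigh model of the skylight degree of polarization.
# Directions are unit 3-vectors; gamma is the sun-to-view angle.
import math

def rayleigh_dop(sun_dir, view_dir, dop_max=0.75):
    cg = sum(a * b for a, b in zip(sun_dir, view_dir))
    cg = max(-1.0, min(1.0, cg))           # guard against rounding
    sg2 = 1.0 - cg * cg                    # sin^2(gamma)
    return dop_max * sg2 / (1.0 + cg * cg)

if __name__ == "__main__":
    sun = (0.0, 0.0, 1.0)      # sun at the zenith
    horizon = (1.0, 0.0, 0.0)  # gamma = 90 degrees
    print(rayleigh_dop(sun, sun), rayleigh_dop(sun, horizon))
```

This baseline captures the maximum polarization band at 90 degrees from the sun, but unlike the authors' model it has no neutral points and no wavelength or turbidity dependence.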
Mechanics of the tapered interference fit in dental implants.
Bozkaya, Dinçer; Müftü, Sinan
2003-11-01
In evaluation of the long-term success of a dental implant, the reliability and the stability of the implant-abutment interface play a major role. Tapered interference fits provide a reliable connection method between the abutment and the implant. In this work, the mechanics of the tapered interference fits were analyzed using a closed-form formula and the finite element (FE) method. An analytical solution, which is used to predict the contact pressure in a straight interference, was modified to predict the contact pressure in the tapered implant-abutment interface. Elastic-plastic FE analysis was used to simulate the implant and abutment material behavior. The validity and the applicability of the analytical solution were investigated by comparisons with the FE model for a range of problem parameters. It was shown that the analytical solution could be used to determine the pull-out force and loosening-torque with 5-10% error. Detailed analysis of the stress distribution due to tapered interference fit, in a commercially available abutment-implant system was carried out. This analysis shows that plastic deformation in the implant limits the increase in the pull-out force that would have been otherwise predicted by higher interference values.
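The straight-interference starting point mentioned above is the classical Lamé thick-cylinder result: for a solid shaft of radius b pressed into a hub of outer radius c with radial interference delta, the contact pressure follows from the elastic radial displacements of shaft and hub. The sketch below implements that textbook straight-fit formula, not the paper's tapered modification; the titanium-alloy properties and dimensions are generic assumed values.

```python
# Classical Lame contact pressure for a straight interference fit:
# solid shaft (radius b) inside a hub (outer radius c), radial
# interference delta. SI units throughout.

def straight_fit_pressure(delta, b, c, E_hub, nu_hub, E_shaft, nu_shaft):
    """Contact pressure (Pa) for radial interference delta (m)."""
    hub_term = ((c * c + b * b) / (c * c - b * b) + nu_hub) / E_hub
    shaft_term = (1.0 - nu_shaft) / E_shaft
    return delta / (b * (hub_term + shaft_term))

if __name__ == "__main__":
    E, nu = 110e9, 0.33  # approximate Ti-6Al-4V values (assumed)
    p = straight_fit_pressure(5e-6, 2e-3, 4e-3, E, nu, E, nu)
    print(f"contact pressure: {p / 1e6:.0f} MPa")
```

Note the pressure is linear in the interference delta; it is exactly this linear elastic prediction that the paper's elastic-plastic FE analysis shows breaking down once the implant yields at higher interference values.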
NASA Astrophysics Data System (ADS)
Wang, Y. B.; Zhu, X. W.; Dai, H. H.
2016-08-01
Though widely used in modelling nano- and micro-structures, Eringen's differential model shows some inconsistencies, and a recent study has demonstrated its differences from the integral model, which then implies the necessity of using the latter model. In this paper, an analytical study is taken to analyze static bending of nonlocal Euler-Bernoulli beams using Eringen's two-phase local/nonlocal model. Firstly, a reduction method is proved rigorously, with which the integral equation in consideration can be reduced to a differential equation with mixed boundary value conditions. Then, the static bending problem is formulated and four types of boundary conditions with various loadings are considered. By solving the corresponding differential equations, exact solutions are obtained explicitly in all of the cases, especially for the paradoxical cantilever beam problem. Finally, asymptotic analysis of the exact solutions reveals clearly that, unlike the differential model, the integral model adopted herein has a consistent softening effect. Comparisons are also made with existing analytical and numerical results, which further shows the advantages of the analytical results obtained. Additionally, it seems that the once controversial nonlocal bar problem in the literature is well resolved by the reduction method.
Lloyd, Jeffrey T.; Clayton, John D.; Austin, Ryan A.; ...
2015-07-10
Background: The shock response of metallic single crystals can be captured using a micro-mechanical description of the thermoelastic-viscoplastic material response; however, using such a description within the context of traditional numerical methods may introduce physical artifacts. Advantages and disadvantages of complex material descriptions, in particular the viscoplastic response, must be framed within approximations introduced by numerical methods. Methods: Three methods of modeling the shock response of metallic single crystals are summarized: finite difference simulations, steady wave simulations, and algebraic solutions of the Rankine-Hugoniot jump conditions. For the former two numerical techniques, a dislocation density based framework describes the rate- and temperature-dependent shear strength on each slip system. For the latter analytical technique, a simple (two-parameter) rate- and temperature-independent linear hardening description is necessarily invoked to enable simultaneous solution of the governing equations. For all models, the same nonlinear thermoelastic energy potential incorporating elastic constants of up to order 3 is applied. Results: Solutions are compared for plate impact of highly symmetric orientations (all three methods) and low symmetry orientations (numerical methods only) of aluminum single crystals shocked to 5 GPa (weak shock regime) and 25 GPa (overdriven regime). Conclusions: For weak shocks, results of the two numerical methods are very similar, regardless of crystallographic orientation. For strong shocks, artificial viscosity affects the finite difference solution, and effects of transverse waves for the lower symmetry orientations not captured by the steady wave method become important. 
The analytical solution, which can only be applied to highly symmetric orientations, provides reasonable accuracy with regard to prediction of most variables in the final shocked state but, by construction, does not provide insight into the shock structure afforded by the numerical methods.
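The algebraic jump-condition approach can be sketched with the simplest closure: a linear Us-up Hugoniot, Us = c0 + s*up, combined with momentum conservation across the shock, P = rho0*Us*up. This is a generic hedged illustration, not the paper's crystal-specific model; the aluminum parameters below are nominal handbook-style values.

```python
# Solve the Rankine-Hugoniot jump conditions for a linear Us-up Hugoniot:
#   Us = c0 + s*up   and   P = rho0 * Us * up
# which gives a quadratic in the particle velocity up.
import math

RHO0, C0, S = 2700.0, 5350.0, 1.34  # kg/m^3, m/s, dimensionless (assumed)

def particle_velocity(p):
    # rho0*s*up^2 + rho0*c0*up - p = 0, physical (positive) root.
    a, b = RHO0 * S, RHO0 * C0
    return (-b + math.sqrt(b * b + 4.0 * a * p)) / (2.0 * a)

def shock_state(p):
    up = particle_velocity(p)
    us = C0 + S * up
    return up, us

if __name__ == "__main__":
    for p in (5e9, 25e9):  # the two shock strengths studied above
        up, us = shock_state(p)
        print(f"P = {p / 1e9:4.0f} GPa:  up = {up:7.1f} m/s,  Us = {us:7.1f} m/s")
```

Because the closure is algebraic, this yields only the final shocked state; as the abstract notes, the shock structure itself requires the numerical methods.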
NASA Technical Reports Server (NTRS)
Corrigan, J. C.; Cronkhite, J. D.; Dompka, R. V.; Perry, K. S.; Rogers, J. P.; Sadler, S. G.
1989-01-01
Under a research program designated Design Analysis Methods for VIBrationS (DAMVIBS), existing analytical methods are used for calculating coupled rotor-fuselage vibrations of the AH-1G helicopter for correlation with flight test data from an AH-1G Operational Load Survey (OLS) test program. The analytical representation of the fuselage structure is based on a NASTRAN finite element model (FEM), which has been developed, extensively documented, and correlated with ground vibration test. One procedure that was used for predicting coupled rotor-fuselage vibrations using the advanced Rotorcraft Flight Simulation Program C81 and NASTRAN is summarized. Detailed descriptions of the analytical formulation of rotor dynamics equations, fuselage dynamic equations, coupling between the rotor and fuselage, and solutions to the total system of equations in C81 are included. Analytical predictions of hub shears for main rotor harmonics 2p, 4p, and 6p generated by C81 are used in conjunction with 2p OLS measured control loads and a 2p lateral tail rotor gearbox force, representing downwash impingement on the vertical fin, to excite the NASTRAN model. NASTRAN is then used to correlate with measured OLS flight test vibrations. Blade load comparisons predicted by C81 showed good agreement. In general, the fuselage vibration correlations show good agreement between analysis and test in vibration response through 15 to 20 Hz.
Remane, Daniela; Grunwald, Soeren; Hoeke, Henrike; Mueller, Andrea; Roeder, Stefan; von Bergen, Martin; Wissenbach, Dirk K
2015-08-15
During the last decades, exposure sciences and epidemiological studies have attracted increasing attention as a way to unravel the mechanisms behind the development of chronic diseases. Accordingly, an existing HPLC-DAD method for determination of creatinine in urine samples was extended to seven analytes and validated. Creatinine, uric acid, homovanillic acid, niacinamide, hippuric acid, indole-3-acetic acid, and 2-methylhippuric acid were separated by gradient elution (formate buffer/methanol) using an Eclipse Plus C18 Rapid Resolution column (4.6mm×100mm). No interfering signals were detected in mobile phase. After injection of blank urine samples, signals for the endogenous compounds but no interferences were detected. All analytes were linear in the selected calibration range and an unweighted calibration model was chosen. Bias, intra-day and inter-day precision for all analytes were below 20% for quality control (QC) low and below 10% for QC medium and high. The limits of quantification in mobile phase were in line with reported reference values but had to be adjusted in urine for homovanillic acid (45mg/L), niacinamide (58.5mg/L), and indole-3-acetic acid (63mg/L). Comparison of creatinine data obtained by the existing method with those of the developed method showed differences from -120mg/L to +110mg/L, with a mean of differences of 29.0mg/L for 50 authentic urine samples. In the 50 authentic urine samples, uric acid, creatinine, hippuric acid, and 2-methylhippuric acid were detected in (nearly) all samples. However, homovanillic acid was detected in 40%, niacinamide in 4%, and indole-3-acetic acid was never detected within the selected samples.
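The reported method comparison (range of differences plus a mean of differences over paired samples) is the kind of summary usually presented as a Bland-Altman analysis: the mean of the paired differences plus limits of agreement at mean ± 1.96 standard deviations. The sketch below shows that calculation on invented paired creatinine values, not the study's data.

```python
# Bland-Altman-style summary of paired differences between two methods.
import math

def bland_altman(old, new):
    d = [b - a for a, b in zip(old, new)]
    n = len(d)
    mean_d = sum(d) / n
    sd = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))
    return mean_d, mean_d - 1.96 * sd, mean_d + 1.96 * sd

if __name__ == "__main__":
    # Hypothetical creatinine results (mg/L) from an old and a new method.
    old = [820, 1450, 640, 1980, 1120, 760]
    new = [840, 1460, 700, 1950, 1180, 770]
    mean_d, lo, hi = bland_altman(old, new)
    print(f"mean difference {mean_d:.1f} mg/L, "
          f"limits of agreement [{lo:.1f}, {hi:.1f}]")
```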
A Dynamic Calibration Method for Experimental and Analytical Hub Load Comparison
NASA Technical Reports Server (NTRS)
Kreshock, Andrew R.; Thornburgh, Robert P.; Wilbur, Matthew L.
2017-01-01
This paper presents the results from an ongoing effort to produce improved correlation between analytical hub force and moment predictions and those measured during wind-tunnel testing on the Aeroelastic Rotor Experimental System (ARES), a conventional rotor testbed commonly used at the Langley Transonic Dynamics Tunnel (TDT). A frequency-dependent transformation between loads at the rotor hub and outputs of the testbed balance is produced from frequency response functions measured during vibration testing of the system. The resulting transformation is used as a dynamic calibration of the balance to transform hub loads predicted by comprehensive analysis into predicted balance outputs. In addition to detailing the transformation process, this paper also presents a set of wind-tunnel test cases, with comparisons between the measured balance outputs and transformed predictions from the comprehensive analysis code CAMRAD II. The modal response of the testbed is discussed and compared to a detailed finite-element model. Results reveal that the modal response of the testbed exhibits a number of characteristics that make accurate dynamic balance predictions challenging, even with the use of the balance transformation.
Discovery and structural elucidation of the illegal azo dye Basic Red 46 in sumac spice.
Ruf, J; Walter, P; Kandler, H; Kaufmann, A
2012-01-01
An unknown red dye was discovered in a sumac spice sample during routine analysis for Sudan dyes. LC-DAD and LC-MS/MS did not reveal the identity of the red substance. Nevertheless, using LC-high-resolution MS and isotope ratio comparisons the structure was identified as Basic Red 46. The identity of the dye was further confirmed by comparison with a commercial hair-staining product and two textile dye formulations containing Basic Red 46. Analogous to the Sudan dyes, Basic Red 46 is an azo dye. However, some of the sample clean-up methodology utilised for the analysis of Sudan dyes in food prevents its successful detection. In contrast to the Sudan dyes, Basic Red 46 is a cation. Its cationic properties make it bind strongly to gel permeation columns and silica solid-phase extraction cartridges and prevent elution with standard eluents. This is the first report of Basic Red 46 in food. The structure elucidation of this compound as well as the disadvantages of analytical methods focusing on a narrow group of targeted analytes are discussed.
NASA Technical Reports Server (NTRS)
Gates, Thomas S.; Veazie, David R.; Brinson, L. Catherine
1996-01-01
Experimental and analytical methods were used to investigate the similarities and differences of the effects of physical aging on creep compliance of IM7/K3B composite loaded in tension and compression. Two matrix dominated loading modes, shear and transverse, were investigated for two load cases, tension and compression. The tests, run over a range of sub-glass transition temperatures, provided material constants, material master curves and aging related parameters. Comparing results from the short-term data indicated that although trends in the data with respect to aging time and aging temperature are similar, differences exist due to load direction and mode. The analytical model used for predicting long-term behavior using short-term data as input worked equally well for the tension- and compression-loaded cases. Comparison of the loading modes indicated that the predictive model provided more accurate long term predictions for the shear mode as compared to the transverse mode. Parametric studies showed the usefulness of the predictive model as a tool for investigating long-term performance and compliance acceleration due to temperature.
An Analytical Comparison of the Acoustic Analogy and Kirchhoff Formulation for Moving Surfaces
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.; Farassat, F.
1997-01-01
The Lighthill acoustic analogy, as embodied in the Ffowcs Williams-Hawkings (FW-H) equation, is compared with the Kirchhoff formulation for moving surfaces. A comparison of the two governing equations reveals that the main Kirchhoff advantage (namely nonlinear flow effects are included in the surface integration) is also available to the FW-H method if the integration surface used in the FW-H equation is not assumed impenetrable. The FW-H equation is analytically superior for aeroacoustics because it is based upon the conservation laws of fluid mechanics rather than the wave equation. This means that the FW-H equation is valid even if the integration surface is in the nonlinear region. This is demonstrated numerically in the paper. The Kirchhoff approach can lead to substantial errors if the integration surface is not positioned in the linear region. These errors may be hard to identify. Finally, new metrics based on the Sobolev norm are introduced which may be used to compare input data for both quadrupole noise calculations and Kirchhoff noise predictions.
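The Sobolev-norm metric mentioned at the end of this abstract can be illustrated with a toy computation. The weighting below is a generic discrete H1-type choice assumed for illustration; the paper's exact definition may differ. Unlike a plain L2 norm, it also penalises derivative error, which is what makes it useful for comparing acoustic input data.

```python
import numpy as np

def sobolev_norm(p, dt, weight=1.0):
    """Discrete H1-type Sobolev norm: L2 norm of the signal combined
    with a weighted L2 norm of its time derivative (illustrative form)."""
    dp = np.gradient(p, dt)
    return np.sqrt(np.sum(p**2) * dt + weight * np.sum(dp**2) * dt)

t = np.linspace(0.0, 1.0, 1000)
dt = t[1] - t[0]
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.05 * np.sin(2 * np.pi * 200 * t)  # small high-frequency error

# The plain L2 norm barely registers the contamination; the Sobolev
# norm, which also weights derivatives, penalises it heavily.
err = noisy - clean
l2_err = np.sqrt(np.sum(err**2) * dt)
h1_err = sobolev_norm(err, dt)
print(l2_err < h1_err)  # True
```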
Infinite product expansion of the Fokker-Planck equation with steady-state solution.
Martin, R J; Craster, R V; Kearney, M J
2015-07-08
We present an analytical technique for solving Fokker-Planck equations that have a steady-state solution by representing the solution as an infinite product rather than, as usual, an infinite sum. This method has many advantages: automatically ensuring positivity of the resulting approximation, and by design exactly matching both the short- and long-term behaviour. The efficacy of the technique is demonstrated via comparisons with computations of typical examples.
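As background to the abstract above: for a one-dimensional Fokker-Planck equation whose drift derives from a potential, the steady state has the closed form p_s(x) ∝ exp(-U(x)/D), which a well-designed approximation should match by construction. A quick numerical check of that textbook identity (this is the standard Ornstein-Uhlenbeck example, not the paper's infinite-product method):

```python
import numpy as np

# For dp/dt = d/dx(U'(x) p) + D d2p/dx2 the steady state is
# p_s(x) ∝ exp(-U(x)/D).  With the Ornstein-Uhlenbeck potential
# U(x) = x^2/2 this is a Gaussian with variance D.
D = 0.5
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
U = 0.5 * x**2

p = np.exp(-U / D)
p /= p.sum() * dx  # normalise to a probability density

exact = np.exp(-x**2 / (2 * D)) / np.sqrt(2 * np.pi * D)
print(np.max(np.abs(p - exact)))   # negligible discretisation error
print(np.sum(x**2 * p) * dx)       # variance, close to D = 0.5
```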
Eikonal solutions to optical model coupled-channel equations
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Khandelwal, Govind S.; Maung, Khin M.; Townsend, Lawrence W.; Wilson, John W.
1988-01-01
Methods of solution are presented for the Eikonal form of the nucleus-nucleus coupled-channel scattering amplitudes. Analytic solutions are obtained for the second-order optical potential for elastic scattering. A numerical comparison is made between the first and second order optical model solutions for elastic and inelastic scattering of H-1 and He-4 on C-12. The effects of bound-state excitations on total and reaction cross sections are also estimated.
A dynamical-systems approach for computing ice-affected streamflow
Holtschlag, David J.
1996-01-01
A dynamical-systems approach was developed and evaluated for computing ice-affected streamflow. The approach provides for dynamic simulation and parameter estimation of site-specific equations relating ice effects to routinely measured environmental variables. Comparison indicates that results from the dynamical-systems approach ranked higher than results from 11 analytical methods previously investigated on the basis of accuracy and feasibility criteria. Additional research will likely lead to further improvements in the approach.
NASA Technical Reports Server (NTRS)
Griswold, M.; Roskam, J.
1980-01-01
An analytical method is presented for predicting lateral-directional aerodynamic characteristics of light twin engine propeller-driven airplanes. This method is applied to the Advanced Technology Light Twin Engine airplane. The calculated characteristics are correlated against full-scale wind tunnel data. The method predicts the sideslip derivatives fairly well, although angle of attack variations are not well predicted. Spoiler performance was predicted somewhat high but was still reasonable. The rudder derivatives were not well predicted, in particular the effect of angle of attack. The predicted dynamic derivatives could not be correlated due to lack of experimental data.
Blade loss transient dynamic analysis of turbomachinery
NASA Technical Reports Server (NTRS)
Stallone, M. J.; Gallardo, V.; Storace, A. F.; Bach, L. J.; Black, G.; Gaffney, E. F.
1982-01-01
This paper reports on work completed to develop an analytical method for predicting the transient non-linear response of a complete aircraft engine system due to the loss of a fan blade, and to validate the analysis by comparing the results against actual blade loss test data. The solution, which is based on the component element method, accounts for rotor-to-casing rubs, high damping and rapid deceleration rates associated with the blade loss event. A comparison of test results and predicted response shows good agreement except for an initial overshoot spike not observed in test. The method is effective for analysis of large systems.
Methodological evaluation and comparison of five urinary albumin measurements.
Liu, Rui; Li, Gang; Cui, Xiao-Fan; Zhang, Dong-Ling; Yang, Qing-Hong; Mu, Xiao-Yan; Pan, Wen-Jie
2011-01-01
Microalbuminuria is an indicator of kidney damage and a risk factor for the progression of kidney disease, cardiovascular disease, and other conditions. Therefore, accurate and precise measurement of urinary albumin is critical. However, there are no reference measurement procedures or reference materials for urinary albumin. Nephelometry, turbidimetry, the colloidal gold method, radioimmunoassay, and chemiluminescence immunoassay were evaluated methodologically on the basis of imprecision, recovery rate, linearity, haemoglobin interference rate, and a verified reference interval. We then tested 40 urine samples from diabetic patients by each method and compared the results between assays. The results indicate that nephelometry is the method with the best analytical performance among the five methods, with an average intraassay coefficient of variation (CV) of 2.6%, an average interassay CV of 1.7%, a mean recovery of 99.6%, a linearity of R=1.00 from 2 to 250 mg/l, and an interference rate of <10% at haemoglobin concentrations of <1.82 g/l. The correlation (r) between assays ranged from 0.701 to 0.982, and the Bland-Altman plots indicated that each assay provided significantly different results from the others. Nephelometry is the clinical urinary albumin method with the best analytical performance in our study. © 2011 Wiley-Liss, Inc.
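The imprecision (CV) and recovery figures quoted in this abstract are standard method-evaluation quantities. A minimal sketch of how they are computed (the replicate values and spike amounts below are hypothetical, for illustration only):

```python
import statistics

def cv_percent(values):
    # Coefficient of variation: sample SD as a percentage of the mean
    return 100 * statistics.stdev(values) / statistics.mean(values)

def recovery_percent(base, measured_after_spike, spike):
    # Recovery: fraction of a known added amount found by the assay
    return 100 * (measured_after_spike - base) / spike

# Hypothetical replicate measurements of one urine sample (mg/L)
replicates = [30.1, 29.5, 30.8, 29.9, 30.4]
print(cv_percent(replicates))             # about 1.6%

# Hypothetical spike-recovery: 30 mg/L sample spiked with 20 mg/L,
# measured at 49.9 mg/L afterwards
print(recovery_percent(30.0, 49.9, 20.0))
```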
A long-term validation of the modernised DC-ARC-OES solid-sample method.
Flórián, K; Hassler, J; Förster, O
2001-12-01
The validation procedure based on the ISO 17025 standard was used to study and illustrate both the long-term stability of the calibration process of the DC-ARC solid-sample spectrometric method and the main validation criteria of the method. In calculating the validation characteristics that depend on linearity (calibration), the fulfilment of prerequisite criteria such as normality and homoscedasticity was also checked. To decide whether there are any trends in the time variation of the analytical signal, the Neumann trend test was also applied and evaluated. Finally, a comparison with similar validation data for the ETV-ICP-OES method was carried out.
Jindal, Kriti; Narayanam, Mallikarjun; Singh, Saranjit
2015-04-10
In the present study, a novel analytical strategy was employed to study the occurrence of 40 drug residues belonging to different medicinal classes (e.g., antibiotics, β blockers, NSAIDs, antidiabetics, proton pump inhibitors, H2 receptor antagonists, antihypertensives, and antihyperlipidemics) in ground water samples collected from villages adjoining S.A.S. Nagar, Punjab, India. The drug residues were extracted from the samples using solid-phase extraction, and LC-ESI-HRMS and LC-ESI-MS/MS were used for identification and quantitation of the analytes. Initially, qualifier and quantifier MRM transitions were classified for the 40 targeted drugs, followed by development of LC-MS methods for the separation of all the drugs, which were divided into three categories to curtail overlapping of peaks. Overall identification was done through matching of retention times and MRM transitions; matching of the intensity ratio of qualifier to quantifier transitions; comparison of base peak MS/MS profiles; and evaluation of isotopic abundances (wherever applicable). Final confirmation was carried out through comparison of accurate masses obtained from HRMS studies for both standard and targeted analytes in the samples. The application of the strategy allowed removal of false positives and helped in identification and quantitation of diclofenac in the ground water samples of four villages, and pitavastatin in a sample of one village. Copyright © 2015 Elsevier B.V. All rights reserved.
DNA Modification Study of Major Depressive Disorder: Beyond Locus-by-Locus Comparisons
Oh, Gabriel; Wang, Sun-Chong; Pal, Mrinal; Chen, Zheng Fei; Khare, Tarang; Tochigi, Mamoru; Ng, Catherine; Yang, Yeqing A.; Kwan, Andrew; Kaminsky, Zachary A.; Mill, Jonathan; Gunasinghe, Cerisse; Tackett, Jennifer L.; Gottesman, Irving I.; Willemsen, Gonneke; de Geus, Eco J.C.; Vink, Jacqueline M.; Slagboom, P. Eline; Wray, Naomi R.; Heath, Andrew C.; Montgomery, Grant W.; Turecki, Gustavo; Martin, Nicholas G.; Boomsma, Dorret I.; McGuffin, Peter; Kustra, Rafal; Petronis, Art
2014-01-01
Background: Major depressive disorder (MDD) exhibits numerous clinical and molecular features that are consistent with putative epigenetic misregulation. Despite growing interest in epigenetic studies of psychiatric diseases, the methodologies guiding such studies have not been well defined. Methods: We performed DNA modification analysis in white blood cells from monozygotic twins discordant for MDD, in brain prefrontal cortex, and in germline (sperm) samples from affected individuals and control subjects (total N = 304) using 8.1K CpG island microarrays and fine mapping. In addition to the traditional locus-by-locus comparisons, we explored the potential of new analytical approaches in epigenomic studies. Results: In the microarray experiment, we detected a number of nominally significant DNA modification differences in MDD and validated selected targets using bisulfite pyrosequencing. Some MDD epigenetic changes, however, overlapped across brain, blood, and sperm more often than expected by chance. We also demonstrated that stratification for disease severity and age may increase the statistical power of epimutation detection. Finally, a series of new analytical approaches, such as DNA modification networks and machine-learning algorithms using binary and quantitative depression phenotypes, provided additional insights into the epigenetic contributions to MDD. Conclusions: Mapping epigenetic differences in MDD (and other psychiatric diseases) is a complex task. However, combining traditional and innovative analytical strategies may lead to identification of disease-specific etiopathogenic epimutations. PMID:25108803
NASA Technical Reports Server (NTRS)
Siegel, R.; Sparrow, E. M.
1960-01-01
The purpose of this note is to examine in a more precise way how the Nusselt numbers for turbulent heat transfer in both the fully developed and thermal entrance regions of a circular tube are affected by two different wall boundary conditions. The comparisons are made for: (a) uniform wall temperature (UWT); and (b) uniform wall heat flux (UHF). Several papers which have been concerned with the turbulent thermal entrance region problem are given. Although these analyses have all utilized an eigenvalue formulation for the thermal entrance region there were differences in the choices of eddy diffusivity expressions, velocity distributions, and methods for carrying out the numerical solutions. These differences were also found in the fully developed analyses. Hence when making a comparison of the analytical results for uniform wall temperature and uniform wall heat flux, it was not known if differences in the Nusselt numbers could be wholly attributed to the difference in wall boundary conditions, since all the analytical results were not obtained in a consistent way. To have results which could be directly compared, computations were carried out for the uniform wall temperature case, using the same eddy diffusivity, velocity distribution, and digital computer program employed for uniform wall heat flux. In addition, the previous work was extended to a lower Reynolds number range so that comparisons could be made over a wide range of both Reynolds and Prandtl numbers.
Cooley, Richard L.
1992-01-01
MODFE, a modular finite-element model for simulating steady- or unsteady-state, areal or axisymmetric flow of ground water in a heterogeneous anisotropic aquifer is documented in a three-part series of reports. In this report, part 2, the finite-element equations are derived by minimizing a functional of the difference between the true and approximate hydraulic head, which produces equations that are equivalent to those obtained by either classical variational or Galerkin techniques. Spatial finite elements are triangular with linear basis functions, and temporal finite elements are one dimensional with linear basis functions. Physical processes that can be represented by the model include (1) confined flow, unconfined flow (using the Dupuit approximation), or a combination of both; (2) leakage through either rigid or elastic confining units; (3) specified recharge or discharge at points, along lines, or areally; (4) flow across specified-flow, specified-head, or head-dependent boundaries; (5) decrease of aquifer thickness to zero under extreme water-table decline and increase of aquifer thickness from zero as the water table rises; and (6) head-dependent fluxes from springs, drainage wells, leakage across riverbeds or confining units combined with aquifer dewatering, and evapotranspiration. The matrix equations produced by the finite-element method are solved by the direct symmetric-Doolittle method or the iterative modified incomplete-Cholesky conjugate-gradient method. The direct method can be efficient for small- to medium-sized problems (less than about 500 nodes), and the iterative method is generally more efficient for larger-sized problems. Comparison of finite-element solutions with analytical solutions for five example problems demonstrates that the finite-element model can yield accurate solutions to ground-water flow problems.
Two-condition within-participant statistical mediation analysis: A path-analytic framework.
Montoya, Amanda K; Hayes, Andrew F
2017-03-01
Researchers interested in testing mediation often use designs where participants are measured on a dependent variable Y and a mediator M in both of 2 different circumstances. The dominant approach to assessing mediation in such a design, proposed by Judd, Kenny, and McClelland (2001), relies on a series of hypothesis tests about components of the mediation model and is not based on an estimate of or formal inference about the indirect effect. In this article we recast Judd et al.'s approach in the path-analytic framework that is now commonly used in between-participant mediation analysis. By so doing, it is apparent how to estimate the indirect effect of a within-participant manipulation on some outcome through a mediator as the product of paths of influence. This path-analytic approach eliminates the need for discrete hypothesis tests about components of the model to support a claim of mediation, as Judd et al.'s method requires, because it relies only on an inference about the product of paths, the indirect effect. We generalize methods of inference for the indirect effect widely used in between-participant designs to this within-participant version of mediation analysis, including bootstrap confidence intervals and Monte Carlo confidence intervals. Using this path-analytic approach, we extend the method to models with multiple mediators operating in parallel and serially and discuss the comparison of indirect effects in these more complex models. We offer macros and code for SPSS, SAS, and Mplus that conduct these analyses. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
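The core idea, estimating the indirect effect of a within-participant manipulation as a product of paths and bootstrapping its confidence interval, can be sketched on simulated data. This is a deliberately simplified illustration, not Montoya and Hayes's full model (which also includes the mean-centered average of the two mediator measurements as a covariate); all data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: mediator M and outcome Y measured in two conditions
n = 80
m1 = rng.normal(0.0, 1.0, n)
m2 = m1 + 0.5 + rng.normal(0.0, 1.0, n)   # the manipulation shifts M
y1 = 0.4 * m1 + rng.normal(0.0, 1.0, n)
y2 = 0.4 * m2 + rng.normal(0.0, 1.0, n)

def indirect_effect(m1, m2, y1, y2):
    """a*b: a = mean condition difference in M; b = slope of the
    Y-difference regressed on the M-difference (simplified model)."""
    dm, dy = m2 - m1, y2 - y1
    a = dm.mean()
    X = np.column_stack([np.ones_like(dm), dm])
    b = np.linalg.lstsq(X, dy, rcond=None)[0][1]
    return a * b

# Percentile bootstrap: resample participants with replacement
boot = []
for _ in range(2000):
    i = rng.integers(0, n, n)
    boot.append(indirect_effect(m1[i], m2[i], y1[i], y2[i]))
lo, hi = np.percentile(boot, [2.5, 97.5])
point = indirect_effect(m1, m2, y1, y2)
print(f"indirect effect {point:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

Because participants (not individual observations) are resampled, the within-participant dependence between the two conditions is preserved in each bootstrap sample.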
Saleh, Aljona; Stephanson, Niclas Nikolai; Granelli, Ingrid; Villén, Tomas; Beck, Olof
2012-11-15
In this study a rapid liquid chromatography-time-of-flight mass spectrometry method was developed, validated and applied in order to evaluate the potential of this technique for routine urine drug testing. Approximately 800 authentic patient samples were analyzed for amphetamines (amphetamine and methamphetamine), opiates (morphine, morphine-3-glucuronide, morphine-6-glucuronide, codeine and codeine-6-glucuronide) and buprenorphines (buprenorphine and buprenorphine-glucuronide) using immunochemical screening assays and mass spectrometry confirmation methods for comparison. The chromatographic application utilized a rapid gradient with high flow and a reversed phase column with 1.8 μm particles. Total analysis time was 4 min. The mass spectrometer operated with an electrospray interface in positive mode with a resolving power of >10,000 at m/z 956. The applied reporting limits were 100 ng/mL for amphetamines and opiates, and 5 ng/mL for buprenorphines, with lower limits of quantification of 2.8-41 ng/mL. Calibration curves showed a linear response with coefficients of correlation of 0.97-0.99. The intra- and interday imprecision in quantification at the reporting limits was <10% for all analytes except the buprenorphines (<20%). Method validation data met performance criteria for a qualitative and quantitative method. The liquid chromatography-time-of-flight mass spectrometry method was found to be more selective than the immunochemical method, producing lower rates of false positives (0% for amphetamines and opiates; 3.2% for buprenorphines) and false negatives (1.8% for amphetamines; 0.6% for opiates; 0% for buprenorphines). The overall agreement between the two screening methods was between 94.2 and 97.4%. Comparison with the confirmation (LC-MS) results for all 9 individual analytes showed that most deviating results occurred in samples with low levels of analytes. False negatives were mainly related to failure of the detected peak to meet the mass accuracy criterion (±20 mDa). False positives were related to the presence of interfering peaks meeting the mass accuracy and retention time criteria and occurred mainly at low levels. It is concluded that liquid chromatography-time-of-flight mass spectrometry has potential both as a complement to and as a replacement for immunochemical screening assays. Copyright © 2012 Elsevier B.V. All rights reserved.
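The acceptance criteria described in this abstract (accurate mass within ±20 mDa, matching retention time, qualifier/quantifier ion ratio within tolerance) lend themselves to a simple decision function. A sketch follows; the reference values and tolerances other than the ±20 mDa mass window are hypothetical placeholders, not taken from the study.

```python
def passes_screen(peak, ref, mz_tol_mda=20.0, rt_tol_min=0.2, ratio_tol=0.3):
    """Hypothetical acceptance check mirroring typical LC-HRMS screening
    criteria: accurate mass within ±20 mDa, retention time within a
    tolerance, and qualifier/quantifier intensity ratio within ±30%
    of the reference ratio."""
    mass_ok = abs(peak["mz"] - ref["mz"]) * 1000 <= mz_tol_mda
    rt_ok = abs(peak["rt"] - ref["rt"]) <= rt_tol_min
    ratio = peak["qualifier"] / peak["quantifier"]
    ratio_ok = abs(ratio - ref["ion_ratio"]) <= ratio_tol * ref["ion_ratio"]
    return mass_ok and rt_ok and ratio_ok

# Illustrative reference and detected peak (all numbers made up)
ref = {"mz": 286.1438, "rt": 1.92, "ion_ratio": 0.45}
peak = {"mz": 286.1446, "rt": 1.95, "qualifier": 4300, "quantifier": 10000}
print(passes_screen(peak, ref))  # True
```

Tightening `mz_tol_mda` reduces false positives from interfering peaks at the cost of more false negatives, which is exactly the trade-off the abstract describes.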
Fan, Xinghua; Kubwabo, Cariton; Wu, Fang; Rasmussen, Pat E
2018-06-26
Background: Ingestion of house dust has been demonstrated to be an important exposure pathway to several contaminants in young children. These compounds include bisphenol A (BPA), alkylphenols (APs), and alkylphenol ethoxylates (APEOs). Analysis of these compounds in house dust is challenging because of the complex composition of the sample matrix. Objective: The objective was to develop a simple and sensitive method to measure BPA, APs, and APEOs in indoor house dust. Methods: An integrated method that involved solvent extraction using sonication, sample cleanup by solid-phase extraction, derivatization by 2,2,2-trifluoro-N-methyl-N-(trimethylsilyl)acetamide, and analysis by GC coupled with tandem MS was developed for the simultaneous determination of BPA, APs, and APEOs in NIST Standard Reference Material (SRM) 2585 (organic contaminants in house dust) and in settled house dust samples. Results: Target analytes included BPA, 4-tert-octylphenol (OP), OP monoethoxylate, OP diethoxylate, 4-n-nonylphenol (4nNP), 4nNP monoethoxylate (4nNP1EO), branched nonylphenol (NP), NP monoethoxylate, NP diethoxylate, NP triethoxylate, and NP tetraethoxylate. The method was sensitive, with method detection limits ranging from 0.05 to 5.1 μg/g, and average recoveries between 82 and 115%. All target analytes were detected in SRM 2585 and house dust except 4nNP and 4nNP1EO. Conclusions: The method is simple and fast, with high sensitivity and good reproducibility. It is applicable to the analysis of target analytes in similar matrixes, such as sediments, soil, and biosolids. Highlights: Values measured in SRM 2585 will be useful for future research in method development and method comparison.
NASA Astrophysics Data System (ADS)
Hemmateenejad, Bahram; Rezaei, Zahra; Khabnadideh, Soghra; Saffari, Maryam
2007-11-01
Carbamazepine (CBZ) undergoes enzymatic biotransformation through epoxidation to form its metabolite, carbamazepine-10,11-epoxide (CBZE). A simple chemometrics-assisted spectrophotometric method is proposed for simultaneous determination of CBZ and CBZE in plasma. A liquid extraction procedure was used to separate the analytes from plasma, and the UV absorbance spectra of the resulting solutions were subjected to partial least squares (PLS) regression. The optimum number of PLS latent variables was selected according to the PRESS values of leave-one-out cross-validation. An HPLC method was also employed for comparison. The respective mean recoveries for analysis of CBZ and CBZE in synthetic mixtures were 102.57 (±0.25)% and 103.00 (±0.09)% for PLS, and 99.40 (±0.15)% and 102.20 (±0.02)% for HPLC. The concentrations of CBZ and CBZE were also determined in five patients using the PLS and HPLC methods. The results showed that the data obtained by PLS were comparable with those obtained by the HPLC method.
Song, Yuelin; Song, Qingqing; Li, Jun; Zheng, Jiao; Li, Chun; Zhang, Yuan; Zhang, Lingling; Jiang, Yong; Tu, Pengfei
2016-07-08
Direct analysis is of great importance for understanding the real chemical profile of a given sample, notably biological materials, because chemical degradation as well as diverse errors and uncertainties can result from sophisticated protocols. In comparison with biofluids, direct analysis of solid biological samples using high performance liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) remains challenging. Herein, a new analytical platform was configured by online hyphenation of pressurized liquid extraction (PLE), turbulent flow chromatography (TFC), and LC-MS/MS. A facile but robust PLE module was constructed based on the phenomenon that noticeable back-pressure can be generated by rapid fluid passing through a narrow tube. A TFC column, which is advantageous for extracting low-molecular-weight analytes from a rushing fluid, was connected at the outlet of the PLE module to capture constituents of interest. An electronic 6-port/2-position valve was introduced between the TFC column and LC-MS/MS to divide each measurement into extraction and elution phases, whereas LC-MS/MS handled analyte separation and monitoring. As a proof of concept, simultaneous determination of 24 endogenous substances, including eighteen steroids, five eicosanoids, and one porphyrin, in feces was carried out. Method validation assays demonstrated the analytical platform to be qualified for the direct, simultaneous measurement of diverse endogenous analytes in fecal matrices. Application of this integrated platform to homolog-focused profiling of feces is discussed in a companion paper. Copyright © 2016 Elsevier B.V. All rights reserved.
Performance Comparison of SDN Solutions for Switching Dedicated Long-Haul Connections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S
2016-01-01
We consider scenarios with two sites connected over a dedicated, long-haul connection that must quickly fail over in response to degradations in host-to-host application performance. We present two methods for path fail-over using OpenFlow-enabled switches: (a) a light-weight method that utilizes host scripts to monitor the application performance and the dpctl API for switching, and (b) a generic method that uses two OpenDaylight (ODL) controllers and REST interfaces. The restoration dynamics of the application contain significant statistical variations due to the controllers, north interfaces and switches; in addition, the variety of vendor implementations further complicates the choice between different solutions. We present the impulse-response method to estimate the regressions of performance parameters, which enables a rigorous and objective comparison of different solutions. We describe testing results of the two methods, using TCP throughput and connection RTT as the main parameters, over a testbed consisting of HP and Cisco switches connected over long-haul connections emulated in hardware by ANUE devices. The combination of analytical and experimental results demonstrates that the dpctl method responds seconds faster than the ODL method on average, while both methods restore TCP throughput.
Morphological, spectral and chromatography analysis and forensic comparison of PET fibers.
Farah, Shady; Tsach, Tsadok; Bentolila, Alfonso; Domb, Abraham J
2014-06-01
Poly(ethylene terephthalate) (PET) fiber analysis and comparison by spectral and polymer molecular weight determination were investigated. Plain fibers of PET, a common textile fiber and plastic material, were chosen for this study. The fibers were analyzed for morphological (SEM and AFM), spectral (IR and NMR), thermal (DSC) and molecular weight (MS and GPC) differences. Molecular analysis of PET fibers by gel permeation chromatography (GPC) allowed the comparison of fibers that could not otherwise be distinguished with high confidence. Plain PET fibers were dissolved in hexafluoroisopropanol (HFIP) and analyzed by GPC using hexafluoroisopropanol:chloroform 2:98 v/v as eluent. 14 PET fiber samples, collected from various commercial producers, were analyzed for polymer molecular weight by GPC. Distinct differences in the molecular weight of the different fiber samples were found, which may have potential use in forensic fiber comparison. PET fibers with average molecular weights between about 20,000 and 70,000 g mol(-1) were characterized using fiber concentrations in HFIP as low as 1 μg mL(-1). This GPC analytical method can be applied to distinguish between PET fibers using as little as 1 μg of fiber. The method can be extended to forensic comparison of other synthetic fibers such as polyamides and acrylics. Copyright © 2014 Elsevier B.V. All rights reserved.
Bird, C B; Hoerner, R J; Restaino, L
2001-01-01
Four different food types along with environmental swabs were analyzed by the Reveal for E. coli O157:H7 test (Reveal) and the Bacteriological Analytical Manual (BAM) culture method for the presence of Escherichia coli O157:H7. Twenty-seven laboratories representing academia and private industry in the United States and Canada participated. Sample types were inoculated with E. coli O157:H7 at 2 different levels. Of the 1,095 samples and controls analyzed and confirmed, 459 were positive and 557 were negative by both methods. No statistical differences (p <0.05) were observed between the Reveal and BAM methods.
Research study on high energy radiation effect and environment solar cell degradation methods
NASA Technical Reports Server (NTRS)
Horne, W. E.; Wilkinson, M. C.
1974-01-01
The most detailed and comprehensively verified analytical model was used to evaluate the effects of simplifying assumptions on the accuracy of predictions made by the external damage coefficient method. It was found that the most serious discrepancies were present in heavily damaged cells, particularly proton damaged cells, in which a gradient in damage across the cell existed. In general, it was found that the current damage coefficient method tends to underestimate damage at high fluences. An exception to this rule was thick cover-slipped cells experiencing heavy degradation due to omnidirectional electrons. In such cases, the damage coefficient method overestimates the damage. Comparisons of degradation predictions made by the two methods and measured flight data confirmed the above findings.
NASA Technical Reports Server (NTRS)
Perkins, S. C., Jr.; Menhall, M. R.
1978-01-01
A correlation method to predict pressures induced on an infinite plate by a jet issuing from the plate into a subsonic free stream was developed. The complete method consists of an analytical method which models the blockage and entrainment properties of the jet and a correlation which accounts for the effects of separation. The method was developed for jet velocity ratios up to ten and for radial distances up to five diameters from the jet. Correlation curves and data comparisons are presented for jets issuing normally from a flat plate with velocity ratios one to twelve. Also, a list of references which deal with jets in a crossflow is presented.
Ultrafast Comparison of Personal Genomes via Precomputed Genome Fingerprints
Glusman, Gustavo; Mauldin, Denise E.; Hood, Leroy E.; Robinson, Max
2017-01-01
We present an ultrafast method for comparing personal genomes. We transform the standard genome representation (lists of variants relative to a reference) into “genome fingerprints” via locality sensitive hashing. The resulting genome fingerprints can be meaningfully compared even when the input data were obtained using different sequencing technologies, processed using different pipelines, represented in different data formats and relative to different reference versions. Furthermore, genome fingerprints are robust to up to 30% missing data. Because of their reduced size, computation on the genome fingerprints is fast and requires little memory. For example, we could compute all-against-all pairwise comparisons among the 2504 genomes in the 1000 Genomes data set in 67 s at high quality (21 μs per comparison, on a single processor), and achieved a lower quality approximation in just 11 s. Efficient computation enables scaling up a variety of important genome analyses, including quantifying relatedness, recognizing duplicative sequenced genomes in a set, population reconstruction, and many others. The original genome representation cannot be reconstructed from its fingerprint, effectively decoupling genome comparison from genome interpretation; the method thus has significant implications for privacy-preserving genome analytics. PMID:29018478
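The fingerprinting idea, hashing each variant into a fixed-size count vector so that genomes sharing variants yield correlated, non-invertible summaries, can be illustrated with a toy sketch. This is a simplification for intuition only, not the published algorithm; all variant strings below are made up.

```python
import hashlib

def fingerprint(variants, size=64):
    # Each variant increments one counter chosen by a hash of its
    # description, giving a fixed-size, non-invertible summary.
    fp = [0] * size
    for v in variants:
        fp[int(hashlib.md5(v.encode()).hexdigest(), 16) % size] += 1
    return fp

def correlation(a, b):
    # Pearson correlation between two fingerprints
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    da = [x - ma for x in a]
    db = [y - mb for y in b]
    dot = sum(x * y for x, y in zip(da, db))
    return dot / (sum(x * x for x in da) ** 0.5 * sum(y * y for y in db) ** 0.5)

g1 = [f"chr1:{i}:A>G" for i in range(0, 3000, 3)]      # 1000 variants
g2 = g1[:900] + [f"chr2:{i}:C>T" for i in range(100)]  # 90% shared with g1
g3 = [f"chr3:{i}:G>C" for i in range(1000)]            # unrelated genome

f1, f2, f3 = fingerprint(g1), fingerprint(g2), fingerprint(g3)
print(correlation(f1, f2) > correlation(f1, f3))  # True: related pair scores higher
```

Comparing two such vectors costs O(size) regardless of how many variants each genome contains, which is what makes all-against-all comparison of thousands of genomes feasible.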
TSOS and TSOS-FK hybrid methods for modelling the propagation of seismic waves
NASA Astrophysics Data System (ADS)
Ma, Jian; Yang, Dinghui; Tong, Ping; Ma, Xiao
2018-05-01
We develop a new time-space optimized symplectic (TSOS) method for numerically solving elastic wave equations in heterogeneous isotropic media. We use the phase-preserving symplectic partitioned Runge-Kutta method to evaluate the time derivatives and optimized explicit finite-difference (FD) schemes to discretize the space derivatives. We introduce the averaged medium scheme into the TSOS method to further increase its capability of dealing with heterogeneous media and match the boundary-modified scheme for implementing free-surface boundary conditions and the auxiliary differential equation complex frequency-shifted perfectly matched layer (ADE CFS-PML) non-reflecting boundaries with the TSOS method. A comparison of the TSOS method with analytical solutions and standard FD schemes indicates that the waveform generated by the TSOS method is more similar to the analytic solution and has a smaller error than other FD methods, which illustrates the efficiency and accuracy of the TSOS method. Subsequently, we focus on the calculation of synthetic seismograms for teleseismic P- or S-waves entering and propagating in the local heterogeneous region of interest. To improve the computational efficiency, we successfully combine the TSOS method with the frequency-wavenumber (FK) method and apply the ADE CFS-PML to absorb the scattered waves caused by the regional heterogeneity. The TSOS-FK hybrid method is benchmarked against semi-analytical solutions provided by the FK method for a 1-D layered model. Several numerical experiments, including a vertical cross-section of the Chinese capital area crustal model, illustrate that the TSOS-FK hybrid method works well for modelling waves propagating in complex heterogeneous media and remains stable for long-time computation. These numerical examples also show that the TSOS-FK method can tackle the converted and scattered waves of the teleseismic plane waves caused by local heterogeneity. 
Thus, the TSOS and TSOS-FK methods proposed in this study present an essential tool for the joint inversion of local, regional, and teleseismic waveform data.
An Analytic Hierarchy Process for School Quality and Inspection: Model Development and Application
ERIC Educational Resources Information Center
Al Qubaisi, Amal; Badri, Masood; Mohaidat, Jihad; Al Dhaheri, Hamad; Yang, Guang; Al Rashedi, Asma; Greer, Kenneth
2016-01-01
Purpose: The purpose of this paper is to develop an analytic hierarchy planning-based framework to establish criteria weights and to develop a school performance system commonly called school inspections. Design/methodology/approach: The analytic hierarchy process (AHP) model uses pairwise comparisons and a measurement scale to generate the…
Experimental Method for Characterizing Electrical Steel Sheets in the Normal Direction
Hihat, Nabil; Lecointe, Jean Philippe; Duchesne, Stephane; Napieralska, Ewa; Belgrand, Thierry
2010-01-01
This paper proposes an experimental method to characterise magnetic laminations in the direction normal to the sheet plane. The principle, which is based on a static excitation to avoid planar eddy currents, is explained and specific test benches are proposed. Measurements of the flux density are made with a sensor moving in and out of an air-gap. A simple analytical model is derived in order to determine the permeability in the normal direction. The experimental results for grain oriented steel sheets are presented and a comparison is provided with values obtained from literature. PMID:22163394
Long period perturbations of earth satellite orbits. [Von Zeipel method and zonal harmonics
NASA Technical Reports Server (NTRS)
Wang, K. C.
1979-01-01
All the equations involved in extending the PS phi solution to include the long periodic and second order secular effects of the zonal harmonics are presented. Topics covered include DS phi elements and relations for their canonical transformation into the PS phi elements; the solution algorithm based on the Von Zeipel method; and the elimination of long periodic terms and analytical integration of primed variables. The equations were entered into the ASOP program, checked out, and verified. Comparisons with numerical integrations show the long period theory to be accurate to within several meters after 800 revolutions.
Atmospheric Fluoroform (CHF3, HFC-23) at Cape Grim, Tasmania (1978-1995)
Oram, D. E. [University of East Anglia, Norwich, United Kingdom; Sturges, W. T. [University of East Anglia, Norwich, United Kingdom; Penkett, S. A. [University of East Anglia, Norwich, United Kingdom; McCulloch, A. [ICI Chemicals and Polymers, Ltd., Cheshire, United Kingdom; Fraser, P. J. [CRC for Southern Hemisphere Meteorology, Victoria, Australia
2000-10-01
The sampling and analytical methods are described more fully in Oram et al. (1998). In summary, air samples were drawn from the Cape Grim, Tasmania (41°S, 145°E) archive of samples collected from 1978 through 1995. Comparisons of CFC-11, CFC-12, CFC-113, CH3CCl3, and CH4 data between archive samples and corresponding in-situ samples for the same dates confirm that the archive samples are both representative and stable over time. Samples were analyzed by gas chromatography-mass spectrometry (GC-MS), using a KCl-passivated alumina PLOT column. Fluoroform was monitored on mass 69 (CF3+). The analytical precision (one standard deviation of the mean) for two or three replicate analyses was typically ± 1% of the mean measured value. The overall uncertainty of the observed data is ± 10%, taking into account uncertainties in the preparation of the primary standards, the purity of the fluoroform used to make the primary standards, as well as the analytical precision.
NASA Astrophysics Data System (ADS)
Qin, Ting; Liao, Congwei; Huang, Shengxiang; Yu, Tianbao; Deng, Lianwen
2018-01-01
An analytical drain current model based on the surface potential is proposed for amorphous indium gallium zinc oxide (a-InGaZnO) thin-film transistors (TFTs) with a synchronized symmetric dual-gate (DG) structure. Solving the electric field, surface potential (φS), and central potential (φ0) of the InGaZnO film using the Poisson equation with the Gaussian method and Lambert function is demonstrated in detail. The compact analytical model of current-voltage behavior, which consists of drift and diffusion components, is investigated by regional integration, and voltage-dependent effective mobility is taken into account. Comparison results demonstrate that the calculation results obtained using the derived models match well with the simulation results obtained using a technology computer-aided design (TCAD) tool. Furthermore, the proposed model is incorporated into SPICE simulations using Verilog-A to verify the feasibility of using DG InGaZnO TFTs for high-performance circuit designs.
Scattering from phase-separated vesicles. I. An analytical form factor for multiple static domains
Heberle, Frederick A.; Anghel, Vinicius N. P.; Katsaras, John
2015-08-18
This is the first in a series of studies considering elastic scattering from laterally heterogeneous lipid vesicles containing multiple domains. Unique among biophysical tools, small-angle neutron scattering can in principle give detailed information about the size, shape and spatial arrangement of domains. A general theory for scattering from laterally heterogeneous vesicles is presented, and the analytical form factor for static domains with arbitrary spatial configuration is derived, including a simplification for uniformly sized round domains. The validity of the model, including series truncation effects, is assessed by comparison with simulated data obtained from a Monte Carlo method. Several aspects of the analytical solution for scattering intensity are discussed in the context of small-angle neutron scattering data, including the effect of varying domain size and number, as well as solvent contrast. Finally, the analysis indicates that effects of domain formation are most pronounced when the vesicle's average scattering length density matches that of the surrounding solvent.
Shao, Wei; Mechefske, Chris K
2005-04-01
This paper describes an analytical model of finite cylindrical ducts with infinite flanges. This model is used to investigate the sound radiation characteristics of the gradient coil system of a magnetic resonance imaging (MRI) scanner. The sound field in the duct satisfies both the boundary conditions at the wall and at the open ends. The vibrating cylindrical wall of the duct is assumed to be the only sound source. Different acoustic conditions for the wall (rigid and absorptive) are used in the simulations. The wave reflection phenomenon at the open ends of the finite duct is described by general radiation impedance. The analytical model is validated by the comparison with its counterpart in a commercial code based on the boundary element method (BEM). The analytical model shows significant advantages over the BEM model with better numerical efficiency and a direct relation between the design parameters and the sound field inside the duct.
Hines, Erin P; Rayner, Jennifer L; Barbee, Randy; Moreland, Rae Ann; Valcour, Andre; Schmid, Judith E; Fenton, Suzanne E
2007-05-01
Breast milk is a primary source of nutrition that contains many endogenous compounds that may affect infant development. The goals of this study were to develop reliable assays for selected endogenous breast milk components and to compare levels of those in milk and serum collected from the same mother twice during lactation (2-7 weeks and 3-4 months). Reliable assays were developed for glucose, secretory IgA, interleukin-6, tumor necrosis factor-α, triglycerides, prolactin, and estradiol from participants in a US EPA study called Methods Advancement in Milk Analysis (MAMA). Fresh and frozen (-20 °C) milk samples were assayed to determine effects of storage on endogenous analytes. The source effect (serum vs milk) seen in all 7 analytes indicates that serum should not be used as a surrogate for milk in children's health studies. The authors propose to use these assays in studies to examine relationships between the levels of milk components and children's health.
NASA Astrophysics Data System (ADS)
Parrado, G.; Cañón, Y.; Peña, M.; Sierra, O.; Porras, A.; Alonso, D.; Herrera, D. C.; Orozco, J.
2016-07-01
The Neutron Activation Analysis (NAA) laboratory at the Colombian Geological Survey has developed a technique for multi-elemental analysis of soil and plant matrices, based on Instrumental Neutron Activation Analysis (INAA) using the comparator method. In order to evaluate the analytical capabilities of the technique, the laboratory has been participating in inter-comparison tests organized by Wepal (Wageningen Evaluating Programs for Analytical Laboratories). In this work, the experimental procedure and results for the multi-elemental analysis of four soil and four plant samples during participation in the first 2015 round of the Wepal proficiency test are presented. Only elements with radioactive isotopes with medium and long half-lives have been evaluated: 15 elements for soils (As, Ce, Co, Cr, Cs, Fe, K, La, Na, Rb, Sb, Sc, Th, U and Zn) and 7 elements for plants (Br, Co, Cr, Fe, K, Na and Zn). The performance assessment by Wepal based on Z-score distributions showed that most results achieved |Z-scores| ≤ 3.
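The Z-score assessment described above is straightforward to compute. The sketch below uses the conventional ISO 13528-style bands (|Z| ≤ 2 satisfactory, 2 < |Z| < 3 questionable, |Z| ≥ 3 unsatisfactory) with hypothetical laboratory data, not the Wepal round results.

```python
def z_scores(reported, assigned, sigma_p):
    """Proficiency-test Z-scores: (lab value - assigned value) / target sd."""
    return [(x - assigned) / sigma_p for x in reported]

def classify(z):
    """Conventional performance bands for |Z|."""
    a = abs(z)
    if a <= 2:
        return "satisfactory"
    if a < 3:
        return "questionable"
    return "unsatisfactory"

# Hypothetical As results (mg/kg) from five labs for one soil sample.
labs = [10.2, 9.8, 10.8, 12.9, 7.1]
zs = z_scores(labs, assigned=10.0, sigma_p=0.8)
print([classify(z) for z in zs])
```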
Dimensional transitions in thermodynamic properties of ideal Maxwell-Boltzmann gases
NASA Astrophysics Data System (ADS)
Aydin, Alhun; Sisman, Altug
2015-04-01
An ideal Maxwell-Boltzmann gas confined in various rectangular nanodomains is considered under quantum size effects. Thermodynamic quantities are calculated from their relations with the partition function, which consists of triple infinite summations over momentum states in each direction. To obtain analytical expressions, summations are converted to integrals for macrosystems by a continuum approximation, which fails at the nanoscale. To avoid both the numerical calculation of summations and the failure of their integral approximations at the nanoscale, a method which gives an analytical expression for the single-particle partition function (SPPF) is proposed. It is shown that a dimensional transition in momentum space occurs at a certain magnitude of confinement. It therefore becomes possible to represent the SPPF by lower-dimensional analytical expressions rather than by numerical calculation of summations. Considering rectangular domains with different aspect ratios, a comparison of the results of the derived expressions with those of the summation forms of the SPPF is made. It is shown that the analytical expressions for the SPPF give very precise results, with maximum relative errors of around 1%, 2% and 3% exactly at the transition point for single, double and triple transitions, respectively. Based on dimensional transitions, expressions for free energy, entropy, internal energy, chemical potential, heat capacity and pressure are given analytically, valid at any scale.
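The failure of the continuum approximation under confinement can be reproduced for the simplest case, a one-dimensional box. In the sketch below, `alpha` is a dimensionless confinement parameter (a notation assumed here, not the authors'): the integral approximation of the partition function is accurate for small `alpha` (macroscale) and breaks down as `alpha` approaches unity.

```python
import math

def sppf_sum(alpha):
    """Exact 1-D box partition function: z = sum_n exp(-alpha * n^2),
    with alpha acting as a dimensionless confinement parameter."""
    z, n = 0.0, 1
    while True:
        term = math.exp(-alpha * n * n)
        z += term
        if term < 1e-16:
            return z
        n += 1

def sppf_integral(alpha):
    """Continuum (macroscale) approximation: the sum replaced by an integral."""
    return 0.5 * math.sqrt(math.pi / alpha)

errs = {}
for alpha in (1e-4, 1e-2, 1.0):   # weak -> strong confinement
    z_s, z_i = sppf_sum(alpha), sppf_integral(alpha)
    errs[alpha] = abs(z_i - z_s) / z_s
    print(alpha, round(errs[alpha], 4))
```

The relative error grows from well under 1% at weak confinement to order unity at strong confinement, which is the regime where the paper's lower-dimensional analytical expressions take over.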
High precision analytical description of the allowed β spectrum shape
NASA Astrophysics Data System (ADS)
Hayen, Leendert; Severijns, Nathal; Bodek, Kazimierz; Rozpedzik, Dagmara; Mougeot, Xavier
2018-01-01
A fully analytical description of the allowed β spectrum shape is given in view of ongoing and planned measurements. Its study forms an invaluable tool in the search for physics beyond the standard electroweak model and in the determination of the weak magnetism recoil term. Contributions stemming from finite size corrections, mass effects, and radiative corrections are reviewed. Particular focus is placed on atomic and chemical effects, where the existing description is extended and provided analytically. The effects of QCD-induced recoil terms are discussed, and cross-checks were performed for different theoretical formalisms. Special attention was given to a comparison of the treatment of nuclear structure effects in different formalisms. Corrections were derived for both Fermi and Gamow-Teller transitions, and methods of analytical evaluation are thoroughly discussed. In its integrated form, the calculated f values were in agreement with the most precise numerical results within the aimed-for precision. The need for an accurate evaluation of weak magnetism contributions was stressed, and the possible significance of the oft-neglected induced pseudoscalar interaction was noted. Together with improved atomic corrections, an analytical description was presented of the allowed β spectrum shape accurate to a few parts in 10⁻⁴ down to 1 keV for low to medium Z nuclei, thereby extending the work of previous authors by nearly an order of magnitude.
Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards
Smith, Justin D.
2013-01-01
This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have precluded widespread implementation and acceptance of the SCED as a viable complementary methodology to the predominant group design. This article includes a description of the research design, measurement, and analysis domains distinctive to the SCED; a discussion of the results within the framework of contemporary standards and guidelines in the field; and a presentation of updated benchmarks for key characteristics (e.g., baseline sampling, method of analysis), and overall, it provides researchers and reviewers with a resource for conducting and evaluating SCED research. The results of the systematic review of 409 studies suggest that recently published SCED research is largely in accordance with contemporary criteria for experimental quality. Analytic method emerged as an area of discord. Comparison of the findings of this review with historical estimates of the use of statistical analysis indicates an upward trend, but visual analysis remains the most common analytic method and also garners the most support amongst those entities providing SCED standards. Although consensus exists along key dimensions of single-case research design and researchers appear to be practicing within these parameters, there remains a need for further evaluation of assessment and sampling techniques and data analytic methods. PMID:22845874
ERIC Educational Resources Information Center
Hough, Susan L.; Hall, Bruce W.
The meta-analytic techniques of G. V. Glass (1976) and J. E. Hunter and F. L. Schmidt (1977) were compared through their application to three meta-analytic studies from education literature. The following hypotheses were explored: (1) the overall mean effect size would be larger in a Hunter-Schmidt meta-analysis (HSMA) than in a Glass…
Cai, Pei-Shan; Li, Dan; Chen, Jing; Xiong, Chao-Mei; Ruan, Jin-Lan
2015-04-15
Two thin-film microextraction (TFME) methods, octadecylsilane (ODS)-polyacrylonitrile (PAN)-TFME and polar enhanced phase (PEP)-PAN-TFME, have been proposed for the analysis of bisphenol-A, diethylstilbestrol and 17β-estradiol in aqueous tea extract and environmental water samples, followed by high performance liquid chromatography-ultraviolet detection. Both thin-films were prepared by spraying. The influencing factors, including pH, extraction time, desorption solvent, desorption volume, desorption time, ionic strength and reusability, were investigated. Under the optimal conditions, the two TFME methods are similar in terms of the analytical performance evaluated by the standard addition method. The limits of detection for the three estrogens in environmental water and aqueous tea extract matrix ranged from 1.3 to 1.6 and 2.8 to 7.1 ng mL⁻¹ by the two TFME methods, respectively. Both approaches were applied to the analysis of the analytes in real aqueous tea extract and environmental water samples, with satisfactory recoveries ranging from 87.3% to 109.4% for the spiked samples. Copyright © 2014 Elsevier Ltd. All rights reserved.
A mean curvature model for capillary flows in asymmetric containers and conduits
NASA Astrophysics Data System (ADS)
Chen, Yongkang; Tavan, Noël; Weislogel, Mark M.
2012-08-01
Capillarity-driven flows resulting from a critical geometric wetting criterion are observed to yield significant shifts of the bulk fluid from one side of the container to the other during "zero gravity" experiments. For wetting fluids, such bulk shift flows consist of advancing and receding menisci sometimes separated by secondary capillary flows such as rivulet-like flows along gaps. Here we study the mean curvature of an advancing meniscus in hopes of approximating a critical boundary condition for fluid dynamics solutions. It is found that the bulk shift flows behave as if the bulk menisci are either "connected" or "disconnected." For the connected case, an analytic method is developed to calculate the mean curvature of the advancing meniscus in an asymptotic sense. In contrast, for the disconnected case the method to calculate the mean curvature of the advancing and receding menisci uses a well-established procedure. Both disconnected and connected bulk shifts can occur as the first tier flow of more complex compound capillary flows. Preliminary comparisons between the analytic method and the results of drop tower experiments are encouraging.
On the critical forcing amplitude of forced nonlinear oscillators
NASA Astrophysics Data System (ADS)
Febbo, Mariano; Ji, Jinchen C.
2013-12-01
The steady-state response of forced single degree-of-freedom weakly nonlinear oscillators under primary resonance conditions can exhibit saddle-node bifurcations, jump and hysteresis phenomena, if the amplitude of the excitation exceeds a certain value. This critical value of excitation amplitude or critical forcing amplitude plays an important role in determining the occurrence of saddle-node bifurcations in the frequency-response curve. This work develops an alternative method to determine the critical forcing amplitude for single degree-of-freedom nonlinear oscillators. Based on Lagrange multipliers approach, the proposed method considers the calculation of the critical forcing amplitude as an optimization problem with constraints that are imposed by the existence of locations of vertical tangency. In comparison with the Gröbner basis method, the proposed approach is more straightforward and thus easy to apply for finding the critical forcing amplitude both analytically and numerically. Three examples are given to confirm the validity of the theoretical predictions. The first two present the analytical form for the critical forcing amplitude and the third one is an example of a numerically computed solution.
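A brute-force numerical check of the same quantity, not the paper's Lagrange-multiplier method: for the standard multiple-scales frequency-response relation of a Duffing oscillator, the response becomes multivalued (vertical tangencies appear) once the forcing exceeds the critical amplitude, which can then be bracketed by bisection. Parameter values below are hypothetical.

```python
import numpy as np

# Standard multiple-scales frequency-response relation for the primary
# resonance of a Duffing oscillator:
#   [mu^2 + (sigma - k*a^2)^2] * a^2 = (F / (2*w))^2,   k = 3*alpha/(8*w)
# a cubic in u = a^2.  Saddle-node bifurcations (jumps) appear once the
# forcing amplitude F exceeds a critical value.
mu, w, alpha = 0.1, 1.0, 0.4           # damping, frequency, nonlinearity
k = 3.0 * alpha / (8.0 * w)

def multivalued(F):
    """True if some detuning sigma admits three positive amplitude roots."""
    for sigma in np.linspace(0.0, 3.0, 601):
        coeffs = [k * k, -2.0 * sigma * k, sigma**2 + mu**2, -(F / (2 * w))**2]
        roots = np.roots(coeffs)
        pos = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
        if len(pos) == 3:
            return True
    return False

# Bracket the critical forcing amplitude by bisection between a
# single-valued and a multivalued case.
lo, hi = 0.05, 1.0
for _ in range(20):
    mid = 0.5 * (lo + hi)
    if multivalued(mid):
        hi = mid
    else:
        lo = mid
print(round(hi, 3))                    # critical F for these parameters
```

The analytical approaches discussed in the abstract recover this value directly from the vertical-tangency conditions rather than by sweeping the response curve.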
DOE Office of Scientific and Technical Information (OSTI.GOV)
Le Hardy, D.; Favennec, Y., E-mail: yann.favennec@univ-nantes.fr; Rousseau, B.
The contribution of this paper lies in the development of numerical algorithms for the mathematical treatment of specular reflection on borders when dealing with the numerical solution of radiative transfer problems. The radiative transfer equation being integro-differential, the discrete ordinates method allows one to write down a set of semi-discrete equations in which weights are to be calculated. The calculation of these weights is well known to be based on either a quadrature or on angular discretization, making the use of such a method straightforward for the state equation. Also, the diffuse contribution of reflection on borders is usually well taken into account. However, the calculation of accurate partition ratio coefficients is much trickier for the specular condition applied on arbitrary geometrical borders. This paper presents algorithms that calculate analytically the partition ratio coefficients needed in numerical treatments. The developed algorithms, combined with a decentered finite element scheme, are validated with the help of comparisons with analytical solutions before being applied on complex geometries.
A new concept of pencil beam dose calculation for 40-200 keV photons using analytical dose kernels.
Bartzsch, Stefan; Oelfke, Uwe
2013-11-01
The advent of widespread kV-cone beam computer tomography in image guided radiation therapy and special therapeutic applications of keV photons, e.g., in microbeam radiation therapy (MRT), require accurate and fast dose calculations for photon beams with energies between 40 and 200 keV. Multiple photon scattering originating from Compton scattering and the strong dependence of the photoelectric cross section on the atomic number of the interacting tissue render these dose calculations far more challenging than those established for corresponding MeV beams. This is why the analytical models of kV photon dose calculation developed so far fail to provide the required accuracy, and one has to rely on time-consuming Monte Carlo simulation techniques. In this paper, the authors introduce a novel analytical approach for kV photon dose calculations with an accuracy almost comparable to that of Monte Carlo simulations. First, analytical point dose and pencil beam kernels are derived for homogeneous media and compared to Monte Carlo simulations performed with the Geant4 toolkit. The dose contributions are systematically separated into contributions from the relevant orders of multiple photon scattering. Moreover, approximate scaling laws for the extension of the algorithm to inhomogeneous media are derived. The analytically derived dose kernels in water showed excellent agreement with the Monte Carlo method: calculated values deviate less than 5% from Monte Carlo derived dose values for doses above 1% of the maximum dose. The analytical structure of the kernels allows adaptation to arbitrary materials and photon spectra in the given energy range of 40-200 keV. The presented analytical methods can be employed in a fast treatment planning system for MRT. In convolution-based algorithms, dose calculation times can be reduced to a few minutes.
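The closing sentence refers to convolution-based dose engines; a generic sketch of that idea follows, with a normalized Gaussian pencil kernel standing in for the paper's analytical multiple-scatter kernels (the kernel shape, grid, and field size are assumptions made here).

```python
import numpy as np

# Toy convolution dose engine: 2-D dose = field fluence convolved with a
# pencil-beam kernel.  A normalized Gaussian kernel stands in for the
# paper's analytical multiple-scatter kernels (an assumption).
n, sigma = 129, 3.0                    # grid size, kernel width (pixels)
ax = np.arange(n) - n // 2
X, Y = np.meshgrid(ax, ax)
kernel = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
kernel /= kernel.sum()                 # conserve deposited energy

fluence = np.zeros((n, n))
fluence[n//2 - 20:n//2 + 20, n//2 - 20:n//2 + 20] = 1.0   # 40x40 open field

# FFT-based convolution; wrap-around is negligible for this geometry.
dose = np.real(np.fft.ifft2(np.fft.fft2(fluence) *
                            np.fft.fft2(np.fft.ifftshift(kernel))))

profile = dose[n // 2]                 # central cross-profile
print(round(profile[n // 2], 3), round(abs(profile[5]), 3))
```

The central dose approaches the full fluence while the dose far outside the field falls to zero; the speed of such FFT convolutions is what makes the few-minute planning times quoted above plausible.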
Goode, D.J.; Konikow, Leonard F.
1989-01-01
The U.S. Geological Survey computer model of two-dimensional solute transport and dispersion in ground water (Konikow and Bredehoeft, 1978) has been modified to incorporate the following types of chemical reactions: (1) first-order irreversible rate-reaction, such as radioactive decay; (2) reversible equilibrium-controlled sorption with linear, Freundlich, or Langmuir isotherms; and (3) reversible equilibrium-controlled ion exchange for monovalent or divalent ions. Numerical procedures are developed to incorporate these processes in the general solution scheme that uses method-of-characteristics with particle tracking for advection and finite-difference methods for dispersion. The first type of reaction is accounted for by an exponential decay term applied directly to the particle concentration. The second and third types of reactions are incorporated through a retardation factor, which is a function of concentration for nonlinear cases. The model is evaluated and verified by comparison with analytical solutions for linear sorption and decay, and by comparison with other numerical solutions for nonlinear sorption and ion exchange.
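The two reaction mechanisms described above, exponential decay applied directly to particle concentrations and retarded advection through a retardation factor R, can be sketched in a minimal particle-tracking loop and checked against the analytical expectations. All parameter values below are hypothetical, and dispersion is omitted.

```python
import math
import random

# Minimal particle-tracking sketch of the two reaction mechanisms:
# linear equilibrium sorption enters as a retardation factor R slowing
# advection, and first-order decay multiplies each particle's
# concentration by an exponential factor (parameter values hypothetical).
rho_b, theta, Kd = 1.6, 0.35, 0.5      # bulk density, porosity, distribution coeff.
R = 1.0 + (rho_b / theta) * Kd         # retardation factor, linear isotherm
v, lam = 1.0, 0.05                     # pore velocity, first-order decay constant
dt, steps = 0.1, 200

random.seed(1)
particles = [{"x": random.uniform(-0.5, 0.5), "c": 1.0} for _ in range(500)]
for _ in range(steps):
    for p in particles:
        p["x"] += v * dt / R           # retarded advection
        p["c"] *= math.exp(-lam * dt)  # first-order decay

t = steps * dt
mean_x = sum(p["x"] for p in particles) / len(particles)
# Compare with the analytical expectations: centroid at v*t/R, mass exp(-lam*t).
print(round(mean_x, 2), round(v * t / R, 2))
print(round(particles[0]["c"], 4), round(math.exp(-lam * t), 4))
```

This mirrors the verification strategy of the abstract: the numerical scheme is checked against analytical solutions for the linear sorption and decay cases.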
Investigation of the validity of radiosity for sound-field prediction in cubic rooms
NASA Astrophysics Data System (ADS)
Nosal, Eva-Marie; Hodgson, Murray; Ashdown, Ian
2004-12-01
This paper explores acoustical (or time-dependent) radiosity using predictions made in four cubic enclosures. The methods and algorithms used are those presented in a previous paper by the same authors [Nosal, Hodgson, and Ashdown, J. Acoust. Soc. Am. 116(2), 970-980 (2004)]. First, the algorithm, methods, and conditions for convergence are investigated by comparison of numerous predictions for the four cubic enclosures. Here, variables and parameters used in the predictions are varied to explore the effect of absorption distribution, the necessary conditions for convergence of the numerical solution to the analytical solution, form-factor prediction methods, and the computational requirements. The predictions are also used to investigate the effect of absorption distribution on sound fields in cubic enclosures with diffusely reflecting boundaries. Acoustical radiosity is then compared to predictions made in the four enclosures by a ray-tracing model that can account for diffuse reflection. Comparisons are made of echograms, room-acoustical parameters, and discretized echograms.
Fang, Ching; Chung, Yu-Lin; Liu, Ju-Tsung; Lin, Cheng-Huang
2002-02-18
Because of the increasing use of 3,4-methylenedioxymethamphetamine (3,4-MDMA), a rapid and sensitive analytical technique is required for its detection and determination. Using nonaqueous capillary electrophoresis/fluorescence spectroscopy (NACE/FS) detection, it is possible to determine this drug at the 0.5 ppm level without any pre-treatment in less than 5 min. After liquid-liquid extraction, the sample can be concentrated and a detection limit for 3,4-MDMA in urine of 50 ppb (S/N = 3) can be achieved. The precision of the method was evaluated by measuring the repeatability and intermediate precision of the migration time and the corrected peak height by comparison with a 3,4-MDMA-D5 internal standard. With the conventional GC/MS method, it is necessary to derivatize the 3,4-MDMA before injection, and the GC run time is in excess of 20 min. Therefore, NACE/FS represents a good complementary method to GC/MS for use in forensic analysis.
Measured effects of coolant injection on the performance of a film cooled turbine
NASA Technical Reports Server (NTRS)
Mcdonel, J. D.; Eiswerth, J. E.
1977-01-01
Tests have been conducted on a 20-inch diameter single-stage air-cooled turbine designed to evaluate the effects of film cooling air on turbine aerodynamic performance. The present paper reports the results of five test configurations, including two different cooling designs and three combinations of cooled and solid airfoils. A comparison is made of the experimental results with a previously published analytical method of evaluating coolant injection effects on turbine performance.
Nonperturbative methods in HZE ion transport
NASA Technical Reports Server (NTRS)
Wilson, John W.; Badavi, Francis F.; Costen, Robert C.; Shinn, Judy L.
1993-01-01
A nonperturbative analytic solution of the high charge and energy (HZE) Green's function is used to implement a computer code for laboratory ion beam transport. The code is established to operate on the Langley Research Center nuclear fragmentation model used in engineering applications. Computational procedures are established to generate linear energy transfer (LET) distributions for a specified ion beam and target for comparison with experimental measurements. The code is highly efficient and compares well with the perturbation approximations.
Klassen, Tara L.; von Rüden, Eva-Lotta; Drabek, Janice; Noebels, Jeffrey L.; Goldman, Alica M.
2013-01-01
Genetic testing and research have increased the demand for high-quality DNA that has traditionally been obtained by venipuncture. However, venous blood collection may prove difficult in special populations and when large-scale specimen collection or exchange is prerequisite for international collaborative investigations. Guthrie/FTA card–based blood spots, buccal scrapes, and finger nail clippings are DNA-containing specimens that are uniquely accessible and thus attractive as alternative tissue sources (ATS). The literature details a variety of protocols for extraction of nucleic acids from a singular ATS type, but their utility has not been systematically analyzed in comparison with conventional sources such as venous blood. Additionally, the efficacy of each protocol is often equated with the overall nucleic acid yield but not with the analytical performance of the DNA during mutation detection. Together with a critical in-depth literature review of published extraction methods, we developed and evaluated an all-inclusive approach for serial, systematic, and direct comparison of DNA utility from multiple biological samples. Our results point to the often underappreciated value of these alternative tissue sources and highlight ways to maximize the ATS-derived DNA for optimal quantity, quality, and utility as a function of extraction method. Our comparative analysis clarifies the value of ATS in genomic analysis projects for population-based screening, diagnostics, molecular autopsy, medico-legal investigations, or multi-organ surveys of suspected mosaicisms. PMID:22796560
Dai, Sheng-Yun; Xu, Bing; Shi, Xin-Yuan; Xu, Xiang; Sun, Ying-Qiang; Qiao, Yan-Jiang
2017-03-01
This study aimed to propose a continual improvement strategy based on quality by design (QbD). An ultra high performance liquid chromatography (UPLC) method was developed to accomplish the method transfer from HPLC to UPLC for Panax notoginseng saponins (PNS), as an example of QbD-based continual improvement. Plackett-Burman screening and Box-Behnken optimization designs were employed to further understand the relationship between the critical method parameters (CMPs) and critical method attributes (CMAs), and a Bayesian design space was then built. The separation degree of the critical peaks (ginsenoside Rg₁ and ginsenoside Re) was over 2.0 and the analysis time was less than 17 min for a method chosen from the design space with 20% initial acetonitrile concentration, 10 min of isocratic time and a gradient slope of 6%·min⁻¹. Finally, the optimal method was validated by accuracy profile. Based on the same analytical target profile (ATP), a comparison of HPLC and UPLC covering the chromatographic method, CMA identification, the CMP-CMA model and the system suitability test (SST) indicated that the UPLC method shortened the analysis time, improved the critical separation and satisfied the requirements of the SST. In all, the HPLC method could be replaced by UPLC for the quantitative analysis of PNS. Copyright© by the Chinese Pharmaceutical Association.
KEY COMPARISON: Final report on CCQM-K69 key comparison: Testosterone glucuronide in human urine
NASA Astrophysics Data System (ADS)
Liu, Fong-Ha; Mackay, Lindsey; Murby, John
2010-01-01
The CCQM-K69 key comparison of testosterone glucuronide in human urine was organized under the auspices of the CCQM Organic Analysis Working Group (OAWG). The National Measurement Institute Australia (NMIA) acted as the coordinating laboratory for the comparison. The samples distributed for the key comparison were prepared at NMIA with funding from the World Anti-Doping Agency (WADA). WADA granted approval for this material to be used for the intercomparison provided the distribution and handling of the material were strictly controlled. Three national metrology institutes (NMIs)/designated institutes (DIs) developed reference methods and submitted data for the key comparison, along with two other laboratories who participated in the parallel pilot study. Considering the complexity of the matrix involved, the submitted results displayed a good selection of analytical methods and sample workup procedures. The comparability of measurement results was successfully demonstrated by the participating NMIs. Only the key comparison data were used to estimate the key comparison reference value (KCRV), using the arithmetic mean approach. The reported expanded uncertainties for results ranged from 3.7% to 6.7% at the 95% level of confidence and all results agreed within the expanded uncertainty of the KCRV. The main text of this report appears in Appendix B of the BIPM key comparison database, kcdb.bipm.org. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
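The arithmetic-mean KCRV approach mentioned above can be sketched as follows. All numbers here are hypothetical placeholders, not the actual CCQM-K69 participant results:

```python
import math

# Hypothetical participant results (illustrative values, arbitrary units);
# the real comparison used the reported NMI data, not these numbers.
results = [5.05, 5.12, 4.98]
n = len(results)

# Arithmetic-mean KCRV and the standard uncertainty of the mean.
kcrv = sum(results) / n
s = math.sqrt(sum((x - kcrv) ** 2 for x in results) / (n - 1))
u_kcrv = s / math.sqrt(n)
U_kcrv = 2 * u_kcrv            # expanded uncertainty, k = 2 (~95 % confidence)

# Degrees of equivalence: each participant's deviation from the KCRV.
doe = [x - kcrv for x in results]
```

Agreement "within the expanded uncertainty of the KCRV" then amounts to checking each degree of equivalence against its expanded uncertainty.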
Interdisciplinary research on patient-provider communication: a cross-method comparison.
Chou, Wen-Ying Sylvia; Han, Paul; Pilsner, Alison; Coa, Kisha; Greenberg, Larrie; Blatt, Benjamin
2011-01-01
Patient-provider communication, a key aspect of healthcare delivery, has been assessed through multiple methods for purposes of research, education, and quality control. Common techniques include satisfaction ratings and quantitatively- and qualitatively-oriented direct observations. Identifying the strengths and weaknesses of different approaches is critically important in determining the appropriate assessment method for a specific research or practical goal. Analyzing ten videotaped simulated encounters between medical students and Standardized Patients (SPs), this study compared three existing assessment methods through the same data set. Methods included: (1) dichotomized SP ratings on students' communication skills; (2) Roter Interaction Analysis System (RIAS) analysis; and (3) inductive discourse analysis informed by sociolinguistic theories. The large dichotomous contrast between good and poor ratings in (1) was not evidenced in any of the other methods. Following a discussion of strengths and weaknesses of each approach, we pilot-tested a combined assessment done by coders blinded to results of (1)-(3). This type of integrative approach has the potential of adding a quantifiable dimension to qualitative, discourse-based observations. Subjecting the same data set to separate analytic methods provides an excellent opportunity for methodological comparisons with the goal of informing future assessment of clinical encounters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swarin, S.J.; Loo, J.F.; Chladek, E.
1992-01-01
Analytical methods for determining individual aldehyde, ketone, and alcohol emissions from gasoline-, methanol-, and variable-fueled vehicles are described. These methods were used in the Auto/Oil Air Quality Improvement Research Program to provide emission data for comparison of individual reformulated fuels, individual vehicles, and for air modeling studies. The emission samples are collected in impingers which contain either 2,4-dinitrophenylhydrazine solution for the aldehydes and ketones or deionized water for the alcohols. Subsequent analyses by liquid chromatography for the aldehydes and ketones and gas chromatography for the alcohols utilized auto injectors and computerized data systems which permit high sample throughput with minimal operator intervention. The quality control procedures developed and interlaboratory comparisons conducted as part of the program are also described. (Copyright (c) 1992 Society of Automotive Engineers, Inc.)
Wiklund, Kristin; Olivera, Gustavo H; Brahme, Anders; Lind, Bengt K
2008-07-01
To speed up dose calculation, an analytical pencil-beam method has been developed to calculate the mean radial dose distributions due to secondary electrons that are set in motion by light ions in water. For comparison, radial dose profiles calculated using a Monte Carlo technique have also been determined. An accurate comparison of the resulting radial dose profiles of the Bragg peak for (1)H(+), (4)He(2+) and (6)Li(3+) ions has been performed. The double differential cross sections for secondary electron production were calculated using the continuous distorted wave-eikonal initial state method (CDW-EIS). For the secondary electrons that are generated, the radial dose distribution for the analytical case is based on the generalized Gaussian pencil-beam method and the central axis depth-dose distributions are calculated using the Monte Carlo code PENELOPE. In the Monte Carlo case, the PENELOPE code was used to calculate the whole radial dose profile based on CDW data. The present pencil-beam and Monte Carlo calculations agree well at all radii. A radial dose profile that is shallower at small radii and steeper at large radii than the conventional 1/r(2) is clearly seen with both the Monte Carlo and pencil-beam methods. As expected, since the projectile velocities are the same, the dose profiles of Bragg-peak ions of 0.5 MeV (1)H(+), 2 MeV (4)He(2+) and 3 MeV (6)Li(3+) are almost the same, with about 30% more delta electrons in the sub keV range from (4)He(2+)and (6)Li(3+) compared to (1)H(+). A similar behavior is also seen for 1 MeV (1)H(+), 4 MeV (4)He(2+) and 6 MeV (6)Li(3+), all classically expected to have the same secondary electron cross sections. The results are promising and indicate a fast and accurate way of calculating the mean radial dose profile.
Beigi, Manije; Afarande, Fatemeh; Ghiasi, Hosein
2016-01-01
The aim of this study was to compare two bunkers, one designed using only protocol recommendations and the other using Monte Carlo (MC) data derived for an 18 MV Varian 2100C linac accelerator. High-energy radiation therapy is associated with fast and thermal photoneutrons, and adequate shielding against contaminant neutrons is recommended by the latest IAEA and NCRP protocols. The IAEA Safety Report No. 47 and NCRP Report No. 151 were used for the bunker design calculations, and an MC-based data set was derived in parallel; the two resulting bunker designs were then compared. For the door, the thicknesses obtained from the MC simulation and from the Wu-McGinley analytical method were close in both BPE and lead. For the primary barrier, MC simulation gave 440.11 mm of ordinary concrete, with a total concrete thickness of 1709 mm required; the recommended analytical methods yielded 1762 mm using the 445 mm TVL recommended for concrete. For the secondary barrier, a thickness of 752.05 mm was obtained. Our results show that the MC simulation and the protocol recommendations agree well in the contamination dose calculations. However, the differences between the analytical and MC methods reveal that relying on a single method for bunker design may lead to under- or overestimation in dose and shielding calculations.
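The protocol-based barrier calculation referred to above follows the tenth-value-layer (TVL) approach of NCRP Report No. 151. A minimal sketch, with all workload and TVL numbers being illustrative assumptions rather than the study's values:

```python
import math

def barrier_thickness_mm(P, d, W, U, T, tvl1_mm, tvle_mm):
    """Required barrier thickness via the NCRP Report No. 151 TVL approach.

    P:  shielding design goal beyond the barrier (Sv/week)
    d:  distance from the target to the point of interest (m)
    W:  workload (Gy/week); U: use factor; T: occupancy factor
    tvl1_mm, tvle_mm: first and equilibrium tenth-value layers (mm)
    """
    B = P * d**2 / (W * U * T)      # required transmission factor
    n = -math.log10(B)              # number of TVLs needed
    return tvl1_mm + (n - 1.0) * tvle_mm

# Illustrative primary-barrier case in ordinary concrete for an 18 MV beam
# (parameter values assumed, not taken from the paper).
t = barrier_thickness_mm(P=1e-4, d=6.0, W=500.0, U=0.25, T=1.0,
                         tvl1_mm=450.0, tvle_mm=430.0)
```

The thickness grows logarithmically with workload, which is why modest changes in assumptions can shift the required concrete by tens of millimetres.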
NASA Technical Reports Server (NTRS)
Valle, Gerard D.; Selig, Molly; Litteken, Doug; Oliveras, Ovidio
2012-01-01
This paper documents the integration of a large hatch penetration into an inflatable module, and compares analytical load predictions with measured results obtained from strain measurement. Strain was measured using photogrammetry and strain gages mounted to selected clevises that interface with the structural webbings. Bench testing showed good correlation between strain measured with an extensometer and photogrammetric measurement, especially after the fabric transitioned through the low-load/high-strain region of the curve. Test results for the full-scale torus showed mixed results in the lower-load, and thus lower-strain, regions. Overall, strain, and thus load, measured by strain gages and photogrammetry tracked fairly well with analytical predictions. Methods and areas of improvement are discussed.
Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming
2014-01-01
An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with a three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method effectively avoids the friction torque and additional inertial moment present in conventional approaches. A curved-surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space; comparison with the conventional modeling method shows that it improves model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, so that the rotor orientation can be computed from the measured results and the analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation to the measured magnetic flux density. The experimental results show that the proposed method measures the rotor orientation precisely, and that the measurement accuracy is improved by the novel 3D magnet array. The results could be used for real-time motion control of PM spherical actuators. PMID:25342000
Nascimento, Paloma Andrade Martins; Barsanelli, Paulo Lopes; Rebellato, Ana Paula; Pallone, Juliana Azevedo Lima; Colnago, Luiz Alberto; Pereira, Fabíola Manhas Verbi
2017-03-01
This study shows the use of time-domain (TD)-NMR transverse relaxation (T2) data and chemometrics in the nondestructive determination of fat content for powdered food samples such as commercial dried milk products. Most proposed NMR spectroscopy methods for measuring fat content correlate free induction decay or echo intensities with the sample's mass. The need for the sample's mass limits the analytical frequency of NMR determination, because weighing the samples is an additional step in this procedure. Therefore, the method proposed here is based on a multivariate model of T2 decay, measured with Carr-Purcell-Meiboom-Gill pulse sequence and reference values of fat content. The TD-NMR spectroscopy method shows high correlation (r = 0.95) with the lipid content, determined by the standard extraction method of Bligh and Dyer. For comparison, fat content determination was also performed using a multivariate model with near-IR (NIR) spectroscopy, which is also a nondestructive method. The advantages of the proposed TD-NMR method are that it (1) minimizes toxic residue generation, (2) performs measurements with high analytical frequency (a few seconds per analysis), and (3) does not require sample preparation (such as pelleting, needed for NIR spectroscopy analyses) or weighing the samples.
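The idea of calibrating a multivariate model on T2 decay data can be sketched with a synthetic example. Everything here is an assumption for illustration: the two-pool biexponential decay, the T2 values, and the use of ordinary least squares in place of the chemometric (e.g. PLS) model the authors would use on real, noisy CPMG data:

```python
import numpy as np

# Synthetic CPMG-style decays: a two-pool biexponential whose fast fraction
# plays the role of "fat content". T2 values are illustrative assumptions.
t = np.linspace(0.002, 0.4, 20)            # echo times (s)
t2_fat, t2_water = 0.08, 0.25              # assumed relaxation times (s)

def decay(fat_frac):
    return fat_frac * np.exp(-t / t2_fat) + (1 - fat_frac) * np.exp(-t / t2_water)

train_f = np.array([0.05, 0.10, 0.20, 0.30, 0.40])   # reference fat fractions
X = np.vstack([decay(f) for f in train_f])           # calibration decays
X1 = np.hstack([X, np.ones((len(train_f), 1))])      # add intercept column

# The synthetic decay is exactly linear in fat_frac, so least squares
# recovers it; real TD-NMR data would call for PLS and noise handling.
coef, *_ = np.linalg.lstsq(X1, train_f, rcond=None)

test_decay = np.append(decay(0.25), 1.0)
predicted = float(test_decay @ coef)
```

The key point mirrored from the abstract: the model maps the decay curve directly to fat content, with no weighing step in the prediction pipeline.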
Duodu, Godfred Odame; Goonetilleke, Ashantha; Allen, Charlotte; Ayoko, Godwin A
2015-10-22
A wet-milling protocol was employed to produce pressed powder tablets with excellent cohesion and homogeneity suitable for laser ablation (LA) analysis of volatile and refractory elements in sediment. The influence of sample preparation on analytical performance was also investigated, including sample homogeneity, accuracy and limit of detection. Milling in a volatile solvent for 40 min ensured that the sample was well mixed and gave reasonable recovery of both volatile (Hg) and refractory (Zr) elements. With the exception of Cr (-52%) and Nb (+26%), major, minor and trace elements in STSD-1 and MESS-3 could be analysed within ±20% of the certified values. The method was tested against total digestion with HF by analysing 10 different sediment samples. The laser method recovers significantly higher amounts of analytes such as Ag, Cd and Sn than the total digestion method, making it a more robust method for elements across the periodic table. LA-ICP-MS also eliminates the interferences from chemical reagents as well as the health and safety risks associated with digestion processes. Therefore, it can be considered an enhanced method for the analysis of heterogeneous matrices such as river sediments. Copyright © 2015 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dasgupta, Aritra; Burrows, Susannah M.; Han, Kyungsik
Scientists working in a particular domain often adhere to conventional data analysis and presentation methods, and this leads to familiarity with these methods over time. But does high familiarity always lead to better analytical judgment? This question is especially relevant when visualizations are used in scientific tasks, as there can be discrepancies between visualization best practices and domain conventions. However, there is little empirical evidence of the relationships between scientists' subjective impressions about familiar and unfamiliar visualizations and objective measures of their effect on scientific judgment. To address this gap and to study these factors, we focus on the climate science domain, specifically on visualizations used for comparison of model performance. We present a comprehensive user study with 47 climate scientists where we explored the following factors: i) relationships between scientists' familiarity, their perceived levels of comfort, confidence, accuracy, and objective measures of accuracy, and ii) relationships among domain experience, visualization familiarity, and post-study preference.
Comparison of Gluten Extraction Protocols Assessed by LC-MS/MS Analysis.
Fallahbaghery, Azadeh; Zou, Wei; Byrne, Keren; Howitt, Crispin A; Colgrave, Michelle L
2017-04-05
The efficiency of gluten extraction is of critical importance to the results derived from any analytical method for gluten detection and quantitation, whether it employs reagent-based technology (antibodies) or analytical instrumentation (mass spectrometry). If the target proteins are not efficiently extracted, the end result will be an under-estimation in the gluten content posing a health risk to people affected by conditions such as celiac disease (CD) and nonceliac gluten sensitivity (NCGS). Five different extraction protocols were investigated using LC-MRM-MS for their ability to efficiently and reproducibly extract gluten. The rapid and simple "IPA/DTT" protocol and related "two-step" protocol were enriched for gluten proteins, 55/86% (trypsin/chymotrypsin) and 41/68% of all protein identifications, respectively, with both methods showing high reproducibility (CV < 15%). When using multistep protocols, it was critical to examine all fractions, as coextraction of proteins occurred across fractions, with significant levels of proteins existing in unexpected fractions and not all proteins within a particular gluten class behaving the same.
An experimental and analytical investigation on the response of GR/EP composite I-frames
NASA Technical Reports Server (NTRS)
Moas, E., Jr.; Boitnott, R. L.; Griffin, O. H., Jr.
1991-01-01
Six-foot diameter, semicircular graphite/epoxy specimens representative of generic aircraft frames were loaded quasi-statically to determine their load response and failure mechanisms for large deflections that occur in an airplane crash. These frame-skin specimens consisted of a cylindrical skin section cocured with a semicircular I-frame. Various frame laminate stacking sequences and geometries were evaluated by statically loading the specimen until multiple failures occurred. Two analytical methods were compared for modeling the frame-skin specimens: a two-dimensional branched-shell finite element analysis and a one-dimensional, closed-form, curved beam solution derived using an energy method. Excellent correlation was obtained between experimental results and the finite element predictions of the linear response of the frames prior to the initial failure. The beam solution was used for rapid parameter and design studies, and was found to be stiff in comparison with the finite element analysis. The specimens were found to be useful for evaluating composite frame designs.
Mess, Aylin; Enthaler, Bernd; Fischer, Markus; Rapp, Claudius; Pruns, Julia K; Vietzke, Jens-Peter
2013-01-15
Identification of endogenous skin surface compounds is an intriguing challenge in comparative skin investigations. Notably, this short communication is focused on the analysis of small molecules, e.g. natural moisturizing factor (NMF) components and lipids, using a novel sampling method with DIP-it samplers for non-invasive examination of the human skin surface. As a result, extraction of analytes directly from the skin surface by use of various solvents can be replaced with the mentioned procedure. Screening of measurable compounds is achieved by direct analysis in real time mass spectrometry (DART-MS) without further sample preparation. Results are supplemented by dissolving analytes from the DIP-it samplers by use of different solvents, and subsequent matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) measurements. An interesting comparison of the mentioned MS techniques for determination of skin surface compounds in the mass range of 50-1000 Da is presented. Copyright © 2012 Elsevier B.V. All rights reserved.
McClements, Jake; McClements, David Julian
2016-06-10
There has been a rapid increase in the fabrication of various kinds of edible nanoparticles for oral delivery of bioactive agents, such as those constructed from proteins, carbohydrates, lipids, and/or minerals. It is currently difficult to compare the relative advantages and disadvantages of different kinds of nanoparticle-based delivery systems because researchers use different analytical instruments and protocols to characterize them. In this paper, we briefly review the various analytical methods available for characterizing the properties of edible nanoparticles, such as composition, morphology, size, charge, physical state, and stability. This information is then used to propose a number of standardized protocols for characterizing nanoparticle properties, for evaluating their stability to environmental stresses, and for predicting their biological fate. Implementation of these protocols would facilitate comparison of the performance of nanoparticles under standardized conditions, which would facilitate the rational selection of nanoparticle-based delivery systems for different applications in the food, health care, and pharmaceutical industries.
Back to Normal! Gaussianizing posterior distributions for cosmological probes
NASA Astrophysics Data System (ADS)
Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.
2014-05-01
We present a method to map multivariate non-Gaussian posterior probability densities into Gaussian ones via nonlinear Box-Cox transformations, and generalizations thereof. This is analogous to the search for normal parameters in the CMB, but can in principle be applied to any probability density that is continuous and unimodal. The search for the optimally Gaussianizing transformation amongst the Box-Cox family is performed via a maximum likelihood formalism. We can judge the quality of the found transformation a posteriori: qualitatively via statistical tests of Gaussianity, and more illustratively by how well it reproduces the credible regions. The method permits an analytical reconstruction of the posterior from a sample, e.g. a Markov chain, and simplifies the subsequent joint analysis with other experiments. Furthermore, it permits the characterization of a non-Gaussian posterior in a compact and efficient way. The expression for the non-Gaussian posterior can be employed to find analytic formulae for the Bayesian evidence, and consequently be used for model comparison.
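The Box-Cox search described above can be sketched numerically. This is a minimal grid-search version of the maximum likelihood formalism, applied to a toy skewed sample rather than a real posterior chain; the transform family and profile log-likelihood are standard, but the sample and grid are assumptions:

```python
import numpy as np

def boxcox(x, lam):
    """One-parameter Box-Cox transform; x must be positive."""
    return np.log(x) if abs(lam) < 1e-8 else (x**lam - 1.0) / lam

def boxcox_mle(x, grid=np.linspace(-2, 2, 801)):
    """Pick the Gaussianizing lambda by maximizing the profile log-likelihood."""
    n, logsum = len(x), np.sum(np.log(x))
    def llf(lam):
        y = boxcox(x, lam)
        return -0.5 * n * np.log(np.var(y)) + (lam - 1.0) * logsum
    return max(grid, key=llf)

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.7, size=4000)  # skewed toy "posterior" sample
lam = boxcox_mle(x)                                # should land near 0 here
y = boxcox(x, lam)                                 # approximately Gaussian
```

For a lognormal sample the optimal lambda is the log transform (lambda near 0), so the recovered value doubles as a sanity check on the likelihood search.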
Development and application of a gradient method for solving differential games
NASA Technical Reports Server (NTRS)
Roberts, D. A.; Montgomery, R. C.
1971-01-01
A technique for solving n-dimensional games is developed and applied to two pursuit-evasion games. The first is a two-dimensional game similar to the homicidal chauffeur but modified to resemble an airplane-helicopter engagement. The second is a five-dimensional game of two airplanes at constant altitude and with thrust and turning controls. The performance function to be optimized by the pursuer and evader was the distance between the evader and a given target point in front of the pursuer. The analytic solution to the first game reveals that both unique and nonunique solutions exist. A comparison between the gradient results and the analytic solution shows a dependence on the nominal controls in regions where nonunique solutions exist. In the unique solution region, the results from the two methods agree closely. The results for the five-dimensional two-airplane game are also shown to be dependent on the nominal controls selected and indicate that initial conditions are in a region of nonunique solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ying, E-mail: liu.ying.48r@st.kyoto-u.ac.jp; Imashuku, Susumu; Sasaki, Nobuharu
In this study, a portable total reflection x-ray fluorescence (TXRF) spectrometer was used to analyze unknown laboratory hazards that precipitated on exterior surfaces of cooling pipes and fume hood pipes in chemical laboratories. To examine the accuracy of TXRF analysis for the determination of elemental composition, the analytical results were compared with those of wavelength-dispersive x-ray fluorescence spectrometry, scanning electron microscopy with energy-dispersive x-ray spectrometry, energy-dispersive x-ray fluorescence spectrometry, inductively coupled plasma atomic emission spectrometry, x-ray diffraction spectrometry (XRD), and x-ray photoelectron spectroscopy (XPS). Detailed comparison of the data confirmed that the TXRF method by itself was not sufficient to determine all the elements (Z > 11) contained in the samples. In addition, the results suggest that XRD should be combined with XPS in order to accurately determine compound composition. This study demonstrates that at least two analytical methods should be used in order to analyze the composition of unknown real samples.
NASA Astrophysics Data System (ADS)
Mohamed, Nurul Huda; Ahmat, Norhayati; Mohamed, Nurul Akmal; Razmi, Syazwani Che; Mohamed, Nurul Farihan
2017-05-01
This research is a case study to identify the best criteria that a person should have as the leader of the Malaysia School Youth Cadet Corps (Kadet Remaja Sekolah (KRS)) at SMK Ahmad Boestamam, Sitiawan, in order to select the most appropriate person for the position. The approach used in this study is the Analytical Hierarchy Process (AHP), which includes pairwise comparisons of both the criteria and the candidates. There are four criteria, namely charisma, interpersonal communication, personality and physical. Four candidates (1, 2, 3 and 4) were considered in this study. Purposive sampling and questionnaires were used as instruments to obtain the data, which were then analyzed using the AHP method. The final output indicates that Candidate 1 has the highest score, followed by Candidate 2, Candidate 4 and Candidate 3. This shows that the method is very helpful in multi-criteria decision making when several options are available.
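The AHP pairwise-comparison step can be sketched as follows. The judgment matrix here is invented for illustration (it is not the study's survey data); the priority vector is computed with the geometric-mean row method, one of the standard AHP prioritization techniques:

```python
import numpy as np

# Saaty-scale pairwise comparison matrix for the four criteria
# (charisma, interpersonal communication, personality, physical);
# the judgments below are hypothetical, not the questionnaire results.
A = np.array([[1.0, 2.0, 3.0, 5.0],
              [1/2, 1.0, 2.0, 3.0],
              [1/3, 1/2, 1.0, 2.0],
              [1/5, 1/3, 1/2, 1.0]])

# Priority vector via the geometric-mean (row) method.
g = A.prod(axis=1) ** (1.0 / A.shape[0])
w = g / g.sum()

# Consistency check: lambda_max, consistency index CI, and consistency
# ratio CR using Saaty's random index RI = 0.90 for n = 4.
lam_max = float(np.mean((A @ w) / w))
n = A.shape[0]
CI = (lam_max - n) / (n - 1)
CR = CI / 0.90          # CR < 0.1 is conventionally acceptable
```

The same machinery is then repeated for each candidate matrix, and candidate scores are the criteria-weighted sums of the candidate priorities.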
Calculation of ground vibration spectra from heavy military vehicles
NASA Astrophysics Data System (ADS)
Krylov, V. V.; Pickup, S.; McNuff, J.
2010-07-01
The demand for reliable autonomous systems capable of detecting and identifying heavy military vehicles is an important issue for UN peacekeeping forces in the current delicate political climate. A promising method of detection and identification uses information extracted from the ground vibration spectra generated by heavy military vehicles, often termed their seismic signatures. This paper presents the results of a theoretical investigation of ground vibration spectra generated by heavy military vehicles, such as tanks and armoured personnel carriers. A simple quarter-car model is considered to identify the resulting dynamic forces applied by a vehicle to the ground. The obtained analytical expressions for the vehicle dynamic forces are then used to calculate the generated ground vibrations, predominantly Rayleigh surface waves, using the Green's function method. A comparison of the obtained theoretical results with published experimental data shows that analytical techniques based on the simplified quarter-car vehicle model are capable of producing ground vibration spectra of heavy military vehicles that reproduce the basic properties of experimental spectra.
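A quarter-car model of the kind mentioned above can be sketched as a two-degree-of-freedom eigenproblem. The masses and stiffnesses below are rough assumptions for a heavy tracked vehicle, chosen only to illustrate the method, not taken from the paper:

```python
import numpy as np

# Undamped two-DOF quarter-car: sprung (hull) mass ms over an unsprung
# (wheel) mass mu, with suspension stiffness ks and tyre stiffness kt.
ms, mu = 5000.0, 500.0        # kg, per suspension station (assumed)
ks, kt = 2.0e5, 2.0e6         # N/m (assumed)

M = np.diag([ms, mu])
K = np.array([[ks, -ks],
              [-ks, ks + kt]])

# Natural frequencies from the generalized eigenproblem K v = w^2 M v.
eigvals = np.linalg.eigvals(np.linalg.inv(M) @ K)
freqs_hz = np.sort(np.sqrt(eigvals.real)) / (2 * np.pi)
# Low mode ~ body bounce (around 1 Hz), high mode ~ wheel hop (around 10 Hz);
# these resonances shape the dynamic tyre force and hence the seismic signature.
```

The dynamic forces at these resonances are what the Green's function method then propagates into Rayleigh-wave ground vibration spectra.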
Nilles, M.A.; Gordon, J.D.; Schroder, L.J.; Paulin, C.E.
1995-01-01
The U.S. Geological Survey used four programs in 1991 to provide external quality assurance for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN). An intersite-comparison program was used to evaluate onsite pH and specific-conductance determinations. The effects of routine sample handling, processing, and shipping of wet-deposition samples on analyte determinations and an estimated precision of analyte values and concentrations were evaluated in the blind-audit program. Differences between analytical results and an estimate of the analytical precision of four laboratories routinely measuring wet deposition were determined by an interlaboratory-comparison program. Overall precision estimates for the precipitation-monitoring system were determined for selected sites by a collocated-sampler program. Results of the intersite-comparison program indicated that 93 and 86 percent of the site operators met the NADP/NTN accuracy goal for pH determinations during the two intersite-comparison studies completed during 1991. The results also indicated that 96 and 97 percent of the site operators met the NADP/NTN accuracy goal for specific-conductance determinations during the two 1991 studies. The effects of routine sample handling, processing, and shipping, determined in the blind-audit program indicated significant positive bias (α = 0.01) for calcium, magnesium, sodium, potassium, chloride, nitrate, and sulfate. Significant negative bias (α = 0.01) was determined for hydrogen ion and specific conductance. Only ammonium determinations were not biased. A Kruskal-Wallis test indicated that there were no significant (α = 0.01) differences in analytical results from the four laboratories participating in the interlaboratory-comparison program.
Results from the collocated-sampler program indicated the median relative error for cation concentration and deposition exceeded eight percent at most sites, whereas the median relative error for sample volume, sulfate, and nitrate concentration at all sites was less than four percent. The median relative error for hydrogen ion concentration and deposition ranged from 4.6 to 18.3 percent at the four sites and as indicated in previous years of the study, was inversely proportional to the acidity of the precipitation at a given site. Overall, collocated-sampling error typically was five times that of laboratory error estimates for most analytes.
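The Kruskal-Wallis comparison across the four laboratories can be sketched as follows. The lab measurements below are invented stand-ins for the interlaboratory data, and the H statistic is implemented directly (without the optional tie correction of the denominator):

```python
import numpy as np

def average_ranks(x):
    """Ranks 1..N with tied values assigned their average rank."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(x, kind="mergesort")
    sx = x[order]
    pos = np.arange(1, len(x) + 1, dtype=float)
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and sx[j + 1] == sx[i]:
            j += 1
        pos[i:j + 1] = 0.5 * ((i + 1) + (j + 1))   # average rank of the tie run
        i = j + 1
    ranks = np.empty(len(x))
    ranks[order] = pos
    return ranks

def kruskal_h(groups):
    """Kruskal-Wallis H statistic for a list of samples."""
    pooled = np.concatenate(groups)
    ranks = average_ranks(pooled)
    N = len(pooled)
    h, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]
        h += len(g) * r.mean() ** 2
        start += len(g)
    return 12.0 / (N * (N + 1)) * h - 3.0 * (N + 1)

# Four labs with similar hypothetical results: H stays below the
# chi-square critical value for df = 3 at alpha = 0.05 (7.815).
labs = [np.array([4.1, 4.3, 4.2]), np.array([4.2, 4.4, 4.1]),
        np.array([4.3, 4.2, 4.2]), np.array([4.1, 4.3, 4.4])]
H = kruskal_h(labs)
```

A small H, as in the abstract's finding, means the rank distributions of the four laboratories are statistically indistinguishable at the chosen significance level.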
Williams, Claire; Lewsey, James D.; Mackay, Daniel F.; Briggs, Andrew H.
2016-01-01
Modeling of clinical-effectiveness in a cost-effectiveness analysis typically involves some form of partitioned survival or Markov decision-analytic modeling. The health states progression-free, progression and death and the transitions between them are frequently of interest. With partitioned survival, progression is not modeled directly as a state; instead, time in that state is derived from the difference in area between the overall survival and the progression-free survival curves. With Markov decision-analytic modeling, a priori assumptions are often made with regard to the transitions rather than using the individual patient data directly to model them. This article compares a multi-state modeling survival regression approach to these two common methods. As a case study, we use a trial comparing rituximab in combination with fludarabine and cyclophosphamide v. fludarabine and cyclophosphamide alone for the first-line treatment of chronic lymphocytic leukemia. We calculated mean Life Years and QALYs that involved extrapolation of survival outcomes in the trial. We adapted an existing multi-state modeling approach to incorporate parametric distributions for transition hazards, to allow extrapolation. The comparison showed that, due to the different assumptions used in the different approaches, a discrepancy in results was evident. The partitioned survival and Markov decision-analytic modeling deemed the treatment cost-effective with ICERs of just over £16,000 and £13,000, respectively. However, the results with the multi-state modeling were less conclusive, with an ICER of just over £29,000. This work has illustrated that it is imperative to check whether assumptions are realistic, as different model choices can influence clinical and cost-effectiveness results. PMID:27698003
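The cost-effectiveness metric driving the comparison above is the incremental cost-effectiveness ratio (ICER). The costs and QALYs below are hypothetical placeholders, chosen only to show how a change in the extrapolated QALY gain moves the ICER between the reported ballparks; they are not the trial's results:

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Same incremental cost, different extrapolated QALY gains (hypothetical):
# a smaller survival benefit under one model structure inflates the ICER.
icer_partitioned = icer(24000, 16000, 5.000, 4.5)   # 8000 / 0.500 = 16000
icer_multistate = icer(24000, 16000, 4.775, 4.5)    # 8000 / 0.275 ~ 29091
```

This is why the abstract stresses checking model assumptions: the incremental cost may be stable while the denominator, and hence the cost-effectiveness verdict, is model-dependent.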
Liu, Xiang; Makeyev, Oleksandr; Besio, Walter
2011-01-01
We have simulated a four-layer concentric spherical head model. We calculated the spline and tripolar Laplacian estimates and compared them to the analytical Laplacian on the spherical surface. In the simulations we used five different dipole groups and two electrode configurations. The comparison shows that the tripolar Laplacian has a higher correlation coefficient with the analytical Laplacian in both electrode configurations tested (19 electrodes at standard 10/20 locations, and 64 electrodes).
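The comparison metric used above is a correlation coefficient between a Laplacian estimate and the analytical Laplacian at the electrode sites. The sketch below is a hypothetical stand-in (synthetic values at 19 sites, not the authors' head-model simulation) showing how such a correlation is computed.

```python
import numpy as np

# Stand-in "analytical" Laplacian values at 19 electrode sites and a
# noisy estimate of them; both arrays are illustrative, not simulated EEG.
rng = np.random.default_rng(0)
analytical = np.sin(np.linspace(0.0, np.pi, 19))
estimate = analytical + rng.normal(0.0, 0.02, size=19)  # small estimation error

# Pearson correlation coefficient between estimate and analytical values,
# the figure of merit used to rank the spline and tripolar estimates.
r = np.corrcoef(analytical, estimate)[0, 1]
```

A higher r for one estimator than another, across dipole groups and electrode counts, is the sense in which the tripolar Laplacian outperforms the spline Laplacian in the paper.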
2012-01-01
Background Saccharide materials have been used for centuries as binding media, to paint, write and illuminate manuscripts and to apply metallic leaf decorations. Although the technical literature often reports on the use of plant gums as binders, several other saccharide materials can be encountered in paint samples, not only as major binders, but also as additives. In the literature, there are a variety of analytical procedures that utilize GC-MS to characterize saccharide materials in paint samples; however, the chromatographic profiles are often extremely different, making it impossible to compare them and reliably identify the paint binder. Results This paper presents a comparison between two different analytical procedures based on GC-MS for the analysis of saccharide materials in works of art. The research presented here evaluates the influence of the analytical procedure used, and how it impacts the sugar profiles obtained from the analysis of paint samples that contain saccharide materials. The procedures have been developed, optimised and systematically used to characterise plant gums at the Getty Conservation Institute in Los Angeles, USA (GCI) and the Department of Chemistry and Industrial Chemistry of the University of Pisa, Italy (DCCI). The main steps of the analytical procedures and their optimisation are discussed. Conclusions The results presented highlight that the two methods give comparable sugar profiles, whether the samples analysed are simple raw materials, pigmented and unpigmented paint replicas, or paint samples collected from centuries-old polychrome art objects. A common database of sugar profiles of reference materials commonly found in paint samples was thus compiled. The database also presents data from those materials that contain only a minor saccharide fraction.
This database highlights how many sources of saccharides can be found in a paint sample, representing an important step forward in the problem of identifying polysaccharide binders in paint samples. PMID:23050842
Lluveras-Tenorio, Anna; Mazurek, Joy; Restivo, Annalaura; Colombini, Maria Perla; Bonaduce, Ilaria
2012-10-10
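One way to make the "comparable sugar profiles" claim above concrete is to treat each profile as a vector of relative monosaccharide percentages and score two profiles with a cosine similarity. The sketch below uses invented numbers (loosely gum-arabic-like), not measured GCI/DCCI data.

```python
import numpy as np

# Hypothetical relative sugar compositions (%) of the same material analysed
# by two different GC-MS procedures; the values are illustrative only.
sugars = ["arabinose", "rhamnose", "fucose", "xylose", "mannose", "galactose", "glucose"]
profile_a = np.array([38.0, 2.5, 1.0, 3.0, 1.5, 46.0, 8.0])
profile_b = np.array([36.5, 3.0, 0.8, 3.5, 1.2, 47.5, 7.5])

# Normalise each profile to 100 % and compute the cosine similarity;
# values near 1 indicate the two procedures yield comparable profiles.
a = profile_a / profile_a.sum() * 100.0
b = profile_b / profile_b.sum() * 100.0
similarity = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

A reference database of such profiles then allows an unknown paint sample to be matched against the closest known saccharide source.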