Sample records for comparison method results

  1. Comparison results on preconditioned SOR-type iterative method for Z-matrices linear systems

    NASA Astrophysics Data System (ADS)

    Wang, Xue-Zhong; Huang, Ting-Zhu; Fu, Ying-Ding

    2007-09-01

    In this paper, we present some comparison theorems on preconditioned iterative methods for solving Z-matrix linear systems. The comparison results show that the rate of convergence of the Gauss-Seidel-type method is faster than that of the SOR-type iterative method.

  2. Sport fishing: a comparison of three indirect methods for estimating benefits.

    Treesearch

    Darrell L. Hueth; Elizabeth J. Strong; Roger D. Fight

    1988-01-01

    Three market-based methods for estimating values of sport fishing were compared by using a common data base. The three approaches were the travel-cost method, the hedonic travel-cost method, and the household-production method. A theoretical comparison of the resulting values showed that the results were not fully comparable in several ways. The comparison of empirical...

  3. Mode-Stirred Method Implementation for HIRF Susceptibility Testing and Results Comparison with Anechoic Method

    NASA Technical Reports Server (NTRS)

    Nguyen, Truong X.; Ely, Jay J.; Koppen, Sandra V.

    2001-01-01

    This paper describes the implementation of the mode-stirred method for susceptibility testing according to the current DO-160D standard. Test results for an Engine Data Processor obtained with the implemented procedure are presented and compared with standard anechoic test results. The comparison shows experimentally that the susceptibility thresholds found with the mode-stirred method are consistently higher than those found with the anechoic method. This is consistent with the recent statistical analysis by NIST finding that the current calibration procedure overstates field strength by a fixed amount. Once the test results are adjusted for this value, agreement with the anechoic results is excellent. The results also show that the test method has excellent chamber-to-chamber repeatability. Several areas for improvement to the current procedure are also identified and implemented.

  4. Graphic representation of data resulting from measurement comparison trials in cataract and refractive surgery.

    PubMed

    Krummenauer, Frank; Storkebaum, Kristin; Dick, H Burkhard

    2003-01-01

    The evaluation of new diagnostic measurement devices calls for intraindividual comparison with an established standard method. However, reports in journal articles often fail to incorporate the intraindividual design into the graphic representation. This article illustrates the drawbacks of, and the possible erroneous conclusions caused by, this misleading practice using recent method comparison data from axial length measurement in 220 consecutive patients by both applanation ultrasound and partial coherence interferometry. Graphic representation of such method comparison data should be based on boxplots of the intraindividual differences or on Bland-Altman plots. Otherwise, severe deviations between the measurement devices can be erroneously ignored, and false-positive conclusions about the concordance of the instruments can result. Graphic representation of method comparison data should therefore incorporate the underlying intraindividual study design.
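    A Bland-Altman analysis of the kind recommended above can be sketched in a few lines. The paired axial-length readings below are hypothetical, and the 1.96-SD limits of agreement assume approximately normally distributed differences:

```python
from statistics import mean, stdev

def bland_altman_limits(method_a, method_b):
    """Bias and 95% limits of agreement for paired measurements."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired axial-length readings (mm) from two devices:
ultrasound     = [23.10, 23.45, 22.80, 24.05, 23.60]
interferometry = [23.25, 23.50, 22.95, 24.20, 23.70]
bias, lower, upper = bland_altman_limits(ultrasound, interferometry)
```

    Plotting each pair's difference against its mean, with horizontal lines at the bias and the two limits, yields the Bland-Altman plot; a boxplot of the intraindividual differences is the other representation the article recommends.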

  5. Comparative study between EDXRF and ASTM E572 methods using two-way ANOVA

    NASA Astrophysics Data System (ADS)

    Krummenauer, A.; Veit, H. M.; Zoppas-Ferreira, J.

    2018-03-01

    Comparison with a reference method is one of the requirements for the validation of non-standard methods. This comparison was made using a designed experiment analysed with two-way ANOVA: the results obtained using the EDXRF method, to be validated, were compared with the results obtained using the ASTM E572-13 standard test method. Fisher's tests (F-tests) were used for the comparative study of the elements molybdenum, niobium, copper, nickel, manganese, chromium and vanadium. For every element, the F-test indicates that the null hypothesis (H0) is not rejected; consequently, there is no significant difference between the compared methods. According to this study, the EDXRF method therefore satisfies this method-comparison requirement for validation.
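    The two-way layout behind such an F-test can be sketched without any statistics library; rows are elements, columns the two methods, and the mass-fraction values below are hypothetical:

```python
def method_effect_f(table):
    """F statistic for the column (method) effect in a two-way
    layout without replication (rows = elements, cols = methods)."""
    r, c = len(table), len(table[0])
    grand = sum(map(sum, table)) / (r * c)
    row_means = [sum(row) / c for row in table]
    col_means = [sum(table[i][j] for i in range(r)) / r for j in range(c)]
    ss_cols = r * sum((m - grand) ** 2 for m in col_means)
    ss_rows = c * sum((m - grand) ** 2 for m in row_means)
    ss_total = sum((x - grand) ** 2 for row in table for x in row)
    ms_cols = ss_cols / (c - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((r - 1) * (c - 1))
    return ms_cols / ms_err

# Hypothetical mass fractions for three elements by two methods:
f_stat = method_effect_f([[1.0, 1.1], [2.0, 2.3], [3.0, 2.9]])  # ~0.75
```

    The statistic is referred to the F distribution with c-1 and (r-1)(c-1) degrees of freedom; an F below the critical value, as in the study, means H0 (no method difference) is not rejected.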

  6. An analysis of methods for the selection of trees from wild stands

    Treesearch

    F. Thomas Ledig

    1976-01-01

    The commonly applied comparison-tree method of selection is analyzed as a form of within-family selection. If environmental variation among comparison- and select-tree groups, c², is a relatively small proportion (17 percent or less with 5 comparison trees) of the total variation, comparison-tree selection will result in less...

  7. COMPARISON OF METHODS FOR MEASURING CONCENTRATIONS OF SEMIVOLATILE PARTICULATE MATTER

    EPA Science Inventory

    The paper gives results of a comparison of methods for measuring concentrations of semivolatile particulate matter (PM) from indoor-environment, small, combustion sources. Particle concentration measurements were compared for methods using filters and a small electrostatic precip...

  8. Standardization of Tc-99 by two methods and participation at the CCRI(II)-K2. Tc-99 comparison.

    PubMed

    Sahagia, M; Antohe, A; Ioan, R; Luca, A; Ivan, C

    2014-05-01

    The work accomplished within the participation in the 2012 key comparison of Tc-99 is presented. The solution was standardized for the first time at IFIN-HH by two methods: LSC-TDCR and 4π(PC)β-γ efficiency tracing. The methods are described and the results are compared. For the LSC-TDCR method, the program TDCR07c, written and provided by P. Cassette, was used for processing the measurement data. The results are 2.1% higher than those obtained with the TDCR06b program; the higher value, calculated with the software TDCR07c, was used for reporting the final result in the comparison. The tracer used for the 4π(PC)β-γ efficiency tracing method was a standard ⁶⁰Co solution. The sources were prepared from the mixed ⁶⁰Co + ⁹⁹Tc solution, and a general extrapolation curve of the type N_β(⁹⁹Tc)/M(⁹⁹Tc) = f[1 − ε(⁶⁰Co)] was drawn. This value was not used for the final result of the comparison. The difference between the values of activity concentration obtained by the two methods was within the limit of the combined standard uncertainty of the difference of these two results. © 2013 Published by Elsevier Ltd.

  9. A Comparison of Two Methods Used for Ranking Task Exposure Levels Using Simulated Multi-Task Data

    DTIC Science & Technology

    1999-12-17

    University of Oklahoma Health Sciences Center, Graduate College: A Comparison of Two Methods Used for Ranking Task Exposure Levels Using Simulated Multi-Task Data. Costantino, Oklahoma City, Oklahoma, 1999. Contents: Methods and Materials; Results; Discussion and Conclusion; List of References; Appendices A and B.

  10. A comparison of digital zero-crossing and charge-comparison methods for neutron/γ-ray discrimination with liquid scintillation detectors

    NASA Astrophysics Data System (ADS)

    Nakhostin, M.

    2015-10-01

    In this paper, we have compared the performances of the digital zero-crossing and charge-comparison methods for n/γ discrimination with liquid scintillation detectors at low light outputs. The measurements were performed with a 2″×2″ cylindrical liquid scintillation detector of type BC501A whose outputs were sampled by means of a fast waveform digitizer with 10-bit resolution, 4 GS/s sampling rate and one volt input range. Different light output ranges were measured by operating the photomultiplier tube at different voltages and a new recursive algorithm was developed to implement the digital zero-crossing method. The results of our study demonstrate the superior performance of the digital zero-crossing method at low light outputs when a large dynamic range is measured. However, when the input range of the digitizer is used to measure a narrow range of light outputs, the charge-comparison method slightly outperforms the zero-crossing method. The results are discussed in regard to the effects of the quantization noise and the noise filtration performance of the zero-crossing filter.
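    In digital form, the charge-comparison method reduces to a tail-to-total integral ratio computed per sampled pulse. The sample values and tail window below are invented for illustration, not taken from the BC501A measurements:

```python
def charge_comparison_psd(pulse, tail_start):
    """Tail-to-total charge ratio of a sampled scintillation pulse.
    Neutron (proton-recoil) events carry more delayed light, so
    their ratio is larger than that of gamma-ray events."""
    return sum(pulse[tail_start:]) / sum(pulse)

gamma_like   = [100, 50, 10, 5, 2]    # fast decay
neutron_like = [100, 50, 25, 15, 10]  # slower tail
assert charge_comparison_psd(neutron_like, 2) > charge_comparison_psd(gamma_like, 2)
```

    A threshold on this ratio separates the two populations; the zero-crossing method instead shapes each pulse into a bipolar signal and times its zero crossing, which is what makes it more robust to quantization noise at low light outputs.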

  11. Differential Item Functioning Detection Across Two Methods of Defining Group Comparisons

    PubMed Central

    Sari, Halil Ibrahim

    2014-01-01

    This study compares two methods of defining groups for the detection of differential item functioning (DIF): (a) pairwise comparisons and (b) composite group comparisons. We aim to emphasize and empirically support the notion that the choice of pairwise versus composite group definitions in DIF is a reflection of how one defines fairness in DIF studies. In this study, a simulation was conducted based on data from a 60-item ACT Mathematics test (ACT; Hanson & Béguin). The unsigned area measure method (Raju) was used as the DIF detection method. An application to operational data was also completed in the study, as well as a comparison of observed Type I error rates and false discovery rates across the two methods of defining groups. Results indicate that the amount of flagged DIF, and the interpretations about DIF, differed across the two methods in all conditions, and that there may be some benefits to using composite group approaches. The results are discussed in connection to differing definitions of fairness. Recommendations for practice are made. PMID:29795837

  12. High-power baseline and motoring test results for the GPU-3 Stirling engine

    NASA Technical Reports Server (NTRS)

    Thieme, L. G.

    1981-01-01

    Test results are given for the full power range of the engine with both helium and hydrogen working fluids. Comparisons are made to previous testing using an alternator and resistance load bank to absorb the engine output. Indicated power results are presented as determined by several methods. Motoring tests were run to aid in determining engine mechanical losses. Comparisons are made between the results of motoring and energy-balance methods for finding mechanical losses.

  13. An Evaluation of Attitude-Independent Magnetometer-Bias Determination Methods

    NASA Technical Reports Server (NTRS)

    Hashmall, J. A.; Deutschmann, Julie

    1996-01-01

    Although several algorithms now exist for determining three-axis magnetometer (TAM) biases without the use of attitude data, there are few studies on the effectiveness of these methods, especially in comparison with attitude dependent methods. This paper presents the results of a comparison of three attitude independent methods and an attitude dependent method for computing TAM biases. The comparisons are based on in-flight data from the Extreme Ultraviolet Explorer (EUVE), the Upper Atmosphere Research Satellite (UARS), and the Compton Gamma Ray Observatory (GRO). The effectiveness of an algorithm is measured by the accuracy of attitudes computed using biases determined with that algorithm. The attitude accuracies are determined by comparison with known, extremely accurate, star-tracker-based attitudes. In addition, the effect of knowledge of calibration parameters other than the biases on the effectiveness of all bias determination methods is examined.

  14. Statistics attack on `quantum private comparison with a malicious third party' and its improvement

    NASA Astrophysics Data System (ADS)

    Gu, Jun; Ho, Chih-Yung; Hwang, Tzonelih

    2018-02-01

    Recently, Sun et al. (Quantum Inf Process 14:2125-2133, 2015) proposed a quantum private comparison protocol allowing two participants to compare the equality of their secrets via a malicious third party (TP). They designed an interesting trap comparison method to prevent the TP from knowing the final comparison result. However, this study shows that the malicious TP can use a statistics attack to reveal the comparison result. A simple modification is hence proposed to solve this problem.

  15. A novel method for pair-matching using three-dimensional digital models of bone: mesh-to-mesh value comparison.

    PubMed

    Karell, Mara A; Langstaff, Helen K; Halazonetis, Demetrios J; Minghetti, Caterina; Frelat, Mélanie; Kranioti, Elena F

    2016-09-01

    The commingling of human remains often hinders forensic/physical anthropologists during the identification process, as there are limited methods to accurately sort these remains. This study investigates a new method for pair-matching, a common individualization technique, which uses digital three-dimensional models of bone: mesh-to-mesh value comparison (MVC). The MVC method digitally compares the entire three-dimensional geometry of two bones at once to produce a single value to indicate their similarity. Two different versions of this method, one manual and the other automated, were created and then tested for how well they accurately pair-matched humeri. Each version was assessed using sensitivity and specificity. The manual mesh-to-mesh value comparison method was 100 % sensitive and 100 % specific. The automated mesh-to-mesh value comparison method was 95 % sensitive and 60 % specific. Our results indicate that the mesh-to-mesh value comparison method overall is a powerful new tool for accurately pair-matching commingled skeletal elements, although the automated version still needs improvement.
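    The sensitivity and specificity figures quoted above follow from the standard confusion-matrix definitions; the counts in this sketch are invented for illustration and are not the study's data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = fraction of true pairs matched; specificity =
    fraction of non-pairs correctly rejected (pair-matching counts)."""
    return tp / (tp + fn), tn / (tn + fp)

# e.g. 19 of 20 true pairs matched, 6 of 10 non-pairs rejected:
sens, spec = sensitivity_specificity(tp=19, fn=1, tn=6, fp=4)  # 0.95, 0.60
```

    The automated MVC version's 95%/60% profile corresponds to exactly this trade-off: most true pairs are found, but many non-pairs are wrongly accepted as matches.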

  16. Enzyme sequence similarity improves the reaction alignment method for cross-species pathway comparison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ovacik, Meric A.; Androulakis, Ioannis P., E-mail: yannis@rci.rutgers.edu; Biomedical Engineering Department, Rutgers University, Piscataway, NJ 08854

    2013-09-15

    Pathway-based information has become an important source of information both for establishing evolutionary relationships and for understanding the mode of action of a chemical or pharmaceutical among species. Cross-species comparison of pathways can address two broad questions: comparison in order to inform evolutionary relationships, and extrapolation of species differences for a number of applications including drug and toxicity testing. Cross-species comparison of metabolic pathways is complex, as there are multiple features of a pathway that can be modeled and compared. Among the various methods that have been proposed, reaction alignment has emerged as the most successful at predicting phylogenetic relationships based on NCBI taxonomy. We propose an improvement of the reaction alignment method by accounting for enzyme sequence similarity in addition to reaction alignment. Using nine species, including human and some model organisms and test species, we evaluate the standard and improved comparison methods by analyzing the conservation of the glycolysis and citrate cycle pathways. In addition, we demonstrate how organism comparison can be conducted by accounting for the cumulative information retrieved from nine pathways in central metabolism, as well as in a more complete study involving 36 pathways common to all nine species. Our results indicate that reaction alignment with enzyme sequence similarity results in a more accurate representation of pathway-specific cross-species similarities and differences based on NCBI taxonomy.

  17. Comparison of discrete ordinate and Monte Carlo simulations of polarized radiative transfer in two coupled slabs with different refractive indices.

    PubMed

    Cohen, D; Stamnes, S; Tanikawa, T; Sommersten, E R; Stamnes, J J; Lotsberg, J K; Stamnes, K

    2013-04-22

    A comparison is presented of two different methods for polarized radiative transfer in coupled media consisting of two adjacent slabs with different refractive indices, each slab being a stratified medium with no change in optical properties except in the direction of stratification. One of the methods is based on solving the integro-differential radiative transfer equation for the two coupled slabs using the discrete ordinate approximation. The other method is based on probabilistic and statistical concepts and simulates the propagation of polarized light using the Monte Carlo approach. The emphasis is on non-Rayleigh scattering for particles in the Mie regime. Comparisons with benchmark results available for a slab with constant refractive index show that both methods reproduce these benchmark results when the refractive index is set to be the same in the two slabs. Computed results for test cases with coupling (different refractive indices in the two slabs) show that the two methods produce essentially identical results for identical input in terms of absorption and scattering coefficients and scattering phase matrices.

  18. Comparison of potential method in analytic hierarchy process for multi-attribute of catering service companies

    NASA Astrophysics Data System (ADS)

    Mamat, Siti Salwana; Ahmad, Tahir; Awang, Siti Rahmah

    2017-08-01

    Analytic Hierarchy Process (AHP) is a method used in structuring, measuring and synthesizing criteria, in particular for ranking multiple criteria in decision-making problems. The Potential Method, on the other hand, is a ranking procedure that utilizes a preference graph G(V, A): two nodes are adjacent if they are compared in a pairwise comparison, and the assigned arc is oriented towards the more preferred node. In this paper the Potential Method is used to solve a catering service selection problem, and its result is compared with that of Extent Analysis. The Potential Method is found to produce the same ranking as Extent Analysis in AHP.
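    A minimal sketch of potential-style ranking under a strong simplifying assumption: the preference graph is complete and encoded as an antisymmetric matrix, in which case the least-squares potentials reduce to row averages. This illustrates the idea, not the paper's exact formulation:

```python
def potentials(w):
    """Potentials for a complete antisymmetric preference matrix:
    w[i][j] > 0 means alternative i is preferred to j by that margin,
    and w[j][i] = -w[i][j]. Higher potential = more preferred."""
    n = len(w)
    return [sum(row) / n for row in w]

# Three hypothetical caterers; 0 beats 1, which beats 2:
p = potentials([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]])  # [1.0, 0.0, -1.0]
ranking = sorted(range(len(p)), key=lambda i: -p[i])  # [0, 1, 2]
```

    Sorting alternatives by descending potential yields the ranking that is compared against the Extent Analysis result.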

  19. Remote air pollution measurement

    NASA Technical Reports Server (NTRS)

    Byer, R. L.

    1975-01-01

    This paper presents a discussion and comparison of the Raman method, the resonance and fluorescence backscatter method, long path absorption methods and the differential absorption method for remote air pollution measurement. A comparison of the above remote detection methods shows that the absorption methods offer the most sensitivity at the least required transmitted energy. Topographical absorption provides the advantage of a single ended measurement, and differential absorption offers the additional advantage of a fully depth resolved absorption measurement. Recent experimental results confirming the range and sensitivity of the methods are presented.

  20. Results of the CCRI(II)-S12.H-3 supplementary comparison: Comparison of methods for the calculation of the activity and standard uncertainty of a tritiated-water source measured using the LSC-TDCR method.

    PubMed

    Cassette, Philippe; Altzitzoglou, Timotheos; Antohe, Andrei; Rossi, Mario; Arinc, Arzu; Capogni, Marco; Galea, Raphael; Gudelis, Arunas; Kossert, Karsten; Lee, K B; Liang, Juncheng; Nedjadi, Youcef; Oropesa Verdecia, Pilar; Shilnikova, Tanya; van Wyngaardt, Winifred; Ziemek, Tomasz; Zimmerman, Brian

    2018-04-01

    A comparison of calculations of the activity of a ³H₂O liquid scintillation source using the same experimental data set collected at the LNE-LNHB with a triple-to-double coincidence ratio (TDCR) counter was completed. A total of 17 laboratories calculated the activity and standard uncertainty of the LS source using the files with experimental data provided by the LNE-LNHB. The results, as well as relevant information on the computation techniques, are presented and analysed in this paper. All results are compatible, even if there is a significant dispersion between the reported uncertainties. An output of this comparison is the estimation of the dispersion of TDCR measurement results when measurement conditions are well defined. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. VDA, a Method of Choosing a Better Algorithm with Fewer Validations

    PubMed Central

    Kluger, Yuval

    2011-01-01

    The multitude of bioinformatics algorithms designed for performing a particular computational task presents end-users with the problem of selecting the most appropriate computational tool for analyzing their biological data. The choice of the best available method is often based on expensive experimental validation of the results. We propose an approach to design validation sets for method comparison and performance assessment that are effective in terms of cost and discrimination power. Validation Discriminant Analysis (VDA) is a method for designing a minimal validation dataset to allow reliable comparisons between the performances of different algorithms. Implementation of our VDA approach achieves this reduction by selecting predictions that maximize the minimum Hamming distance between algorithmic predictions in the validation set. We show that VDA can be used to correctly rank algorithms according to their performances. These results are further supported by simulations and by realistic algorithmic comparisons in silico. VDA is a novel, cost-efficient method for minimizing the number of validation experiments necessary for reliable performance estimation and fair comparison between algorithms. Our VDA software is available at http://sourceforge.net/projects/klugerlab/files/VDA/ PMID:22046256
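    The max-min Hamming criterion at the heart of VDA can be approximated greedily. Everything below — the toy predictions and the greedy strategy itself — is an illustrative assumption, not the published implementation:

```python
from itertools import combinations

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def select_validation_set(preds, k):
    """Greedily pick k items so the minimum pairwise Hamming distance
    between the algorithms' predictions on those items is large."""
    algs = list(preds.values())
    chosen, remaining = [], list(range(len(algs[0])))
    for _ in range(k):
        best, best_score = None, -1
        for i in remaining:
            idx = chosen + [i]
            score = min(hamming([a[j] for j in idx], [b[j] for j in idx])
                        for a, b in combinations(algs, 2))
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
        remaining.remove(best)
    return sorted(chosen)

preds = {"alg1": [0, 0, 1, 1], "alg2": [0, 1, 1, 0], "alg3": [1, 1, 0, 0]}
validation_items = select_validation_set(preds, 2)  # [0, 1]
```

    The selected items are the ones whose experimental validation best discriminates between the algorithms, which is why a small validation set can still rank them reliably.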

  2. Multielement trace determination in SiC powders: assessment of interlaboratory comparisons aimed at the validation and standardization of analytical procedures with direct solid sampling based on ETV ICP OES and DC arc OES.

    PubMed

    Matschat, Ralf; Hassler, Jürgen; Traub, Heike; Dette, Angelika

    2005-12-01

    The members of the committee NMP 264 "Chemical analysis of non-oxidic raw and basic materials" of the German Standards Institute (DIN) have organized two interlaboratory comparisons for multielement determination of trace elements in silicon carbide (SiC) powders via direct solid sampling methods. One of the interlaboratory comparisons was based on the application of inductively coupled plasma optical emission spectrometry with electrothermal vaporization (ETV ICP OES), and the other on the application of optical emission spectrometry with direct current arc (DC arc OES). The interlaboratory comparisons were organized and performed in the framework of the development of two standards related to "the determination of mass fractions of metallic impurities in powders and grain sizes of ceramic raw and basic materials" by both methods. SiC powders were used as typical examples of this category of material. The aim of the interlaboratory comparisons was to determine the repeatability and reproducibility of both analytical methods to be standardized. This was an important contribution to the practical applicability of both draft standards. Eight laboratories participated in the interlaboratory comparison with ETV ICP OES and nine in the interlaboratory comparison with DC arc OES. Ten analytes were investigated by ETV ICP OES and eleven by DC arc OES. Six different SiC powders were used for the calibration. The mass fractions of their relevant trace elements were determined after wet chemical digestion. All participants followed the analytical requirements described in the draft standards. In the calculation process, three of the calibration materials were used successively as analytical samples. This was managed in the following manner: the material that had just been used as the analytical sample was excluded from the calibration, so the five other materials were used to establish the calibration plot. 
The results from the interlaboratory comparisons were summarized and used to determine the repeatability and the reproducibility (expressed as standard deviations) of both methods. The calculation was carried out according to the related standard. The results are specified and discussed in this paper, as are the optimized analytical conditions determined and used by the authors. For both methods, the repeatability relative standard deviations were <25%, usually ~10%, and the reproducibility relative standard deviations were <35%, usually ~15%. These results were regarded as satisfactory for both methods, which are intended for rapid analysis of materials for which decomposition is difficult and time-consuming. Also described are some results from an interlaboratory comparison used to certify one of the materials that had previously been used for validation in both interlaboratory comparisons. Thirty laboratories (from eight countries) participated in this interlaboratory comparison for certification. As examples, accepted results are shown from laboratories that used ETV ICP OES or DC arc OES and had performed calibrations using solutions or oxides, respectively. The certified mass fractions of the certified reference materials were also compared with the mass fractions determined in the interlaboratory comparisons performed within the framework of method standardization. Good agreement was found for most of the analytes.

  3. Cellular automatons applied to gas dynamic problems

    NASA Technical Reports Server (NTRS)

    Long, Lyle N.; Coopersmith, Robert M.; Mclachlan, B. G.

    1987-01-01

    This paper compares the results of a relatively new computational fluid dynamics method, cellular automatons, with experimental data and analytical results. This technique has been shown to qualitatively predict fluidlike behavior; however, there have been few published comparisons with experiment or other theories. Comparisons are made for a one-dimensional supersonic piston problem, Stokes first problem, and the flow past a normal flat plate. These comparisons are used to assess the ability of the method to accurately model fluid dynamic behavior and to point out its limitations. Reasonable results were obtained for all three test cases, but the fundamental limitations of cellular automatons are numerous. It may be misleading, at this time, to say that cellular automatons are a computationally efficient technique. Other methods, based on continuum or kinetic theory, would also be very efficient if as little of the physics were included.

  4. Price of gasoline: forecasting comparisons. [Box-Jenkins, econometric, and regression methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bopp, A.E.; Neri, J.A.

    Gasoline prices are simulated using three popular forecasting methodologies: a Box-Jenkins-type method, an econometric method, and a regression method. One-period-ahead and 18-period-ahead comparisons are made. For the one-period-ahead comparison, a Box-Jenkins-type time-series model simulated best, although all do well. For the 18-period simulation, however, the econometric and regression methods perform substantially better than the Box-Jenkins formulation. A rationale for and implications of these results are discussed. 11 references.

  5. Comparisons of several aerodynamic methods for application to dynamic loads analyses

    NASA Technical Reports Server (NTRS)

    Kroll, R. I.; Miller, R. D.

    1976-01-01

    The results of a study are presented in which the applicability at subsonic speeds of several aerodynamic methods for predicting dynamic gust loads on aircraft, including active control systems, was examined and compared. These aerodynamic methods varied from steady state to an advanced unsteady aerodynamic formulation. Brief descriptions of the structural and aerodynamic representations and of the motion and load equations are presented. Comparisons of numerical results achieved using the various aerodynamic methods are shown in detail. From these results, aerodynamic representations for dynamic gust analyses are identified. It was concluded that several aerodynamic methods are satisfactory for dynamic gust analyses of configurations having either controls fixed or active control systems that primarily affect the low frequency rigid body aircraft response.

  6. Comparison Of Methods Used In Cartography For The Skeletonisation Of Areal Objects

    NASA Astrophysics Data System (ADS)

    Szombara, Stanisław

    2015-12-01

    The article presents a method for comparing skeletonisation methods for areal objects. The skeleton of an areal object, being its linear representation, is used, among other purposes, in cartographic visualisation. The method allows any skeletonisation methods to be compared in terms of both the deviations of the distance differences between the skeleton of the object and its border, and the distortions of skeletonisation. In the article, five methods were compared: Voronoi diagrams, densified Voronoi diagrams, constrained Delaunay triangulation, Straight Skeleton, and Medial Axis (Transform). The results of the comparison are presented for several example areal objects. The comparison showed that for all the analysed objects the Medial Axis (Transform) gives the smallest distortion and deviation values, which allows us to recommend it.

  7. Some path-following techniques for solution of nonlinear equations and comparison with parametric differentiation

    NASA Technical Reports Server (NTRS)

    Barger, R. L.; Walters, R. W.

    1986-01-01

    Some path-following techniques are described and compared with other methods. The use of multipurpose techniques, applicable at more than one stage of the path-following computation, results in a system that is relatively simple to understand, program, and use. Comparison of path-following methods with the method of parametric differentiation reveals definite advantages for the path-following methods. The fact that parametric differentiation has found a broader range of applications indicates that path-following methods have been underutilized.

  8. ClusCo: clustering and comparison of protein models.

    PubMed

    Jamroz, Michal; Kolinski, Andrzej

    2013-02-22

    The development, optimization and validation of protein modeling methods require efficient tools for structural comparison. Frequently, a large number of models need to be compared with the target native structure. The main reason for the development of the Clusco software was to create a high-throughput tool for all-versus-all comparison, because calculating the similarity matrix is one of the bottlenecks in the protein modeling pipeline. Clusco is fast and easy-to-use software for high-throughput comparison of protein models with different similarity measures (cRMSD, dRMSD, GDT_TS, TM-Score, MaxSub, Contact Map Overlap) and for clustering of the comparison results with standard methods: K-means clustering or hierarchical agglomerative clustering. The application was highly optimized and written in C/C++, including code for parallel execution on CPU and GPU, which resulted in a significant speedup over similar clustering and scoring programs.
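    Of the measures listed, dRMSD is the simplest to state: it compares all intramolecular pair distances and therefore needs no superposition. A minimal Python sketch (not Clusco's optimized C/C++ implementation):

```python
from itertools import combinations
from math import dist, sqrt

def drmsd(conf_a, conf_b):
    """Distance RMSD between two conformations of the same chain:
    RMS difference over all intramolecular pair distances."""
    pairs = list(combinations(range(len(conf_a)), 2))
    s = sum((dist(conf_a[i], conf_a[j]) - dist(conf_b[i], conf_b[j])) ** 2
            for i, j in pairs)
    return sqrt(s / len(pairs))

a = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
b = [(1, 2, 3), (2, 2, 3), (1, 3, 3)]   # a rigidly translated: dRMSD = 0
```

    Because dRMSD is invariant to rigid motion, an all-versus-all dRMSD matrix is cheap to fill and can feed hierarchical or K-means clustering, as in Clusco's pipeline.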

  9. A Comparison of Interactional Aerodynamics Methods for a Helicopter in Low Speed Flight

    NASA Technical Reports Server (NTRS)

    Berry, John D.; Letnikov, Victor; Bavykina, Irena; Chaffin, Mark S.

    1998-01-01

    Recent advances in computing subsonic flow have been applied to helicopter configurations with various degrees of success. This paper is a comparison of two specific methods applied to a particularly challenging regime of helicopter flight, very low speeds, where the interaction of the rotor wake and the fuselage are most significant. Comparisons are made between different methods of predicting the interactional aerodynamics associated with a simple generic helicopter configuration. These comparisons are made using fuselage pressure data from a Mach-scaled powered model helicopter with a rotor diameter of approximately 3 meters. The data shown are for an advance ratio of 0.05 with a thrust coefficient of 0.0066. The results of this comparison show that in this type of complex flow both analytical techniques have regions where they are more accurate in matching the experimental data.

  10. Comparison of electron transport calculations in warm dense matter using the Ziman formula

    DOE PAGES

    Burrill, D. J.; Feinblum, D. V.; Charest, M. R. J.; ...

    2016-02-10

    The Ziman formulation of electrical conductivity is tested in warm and hot dense matter using the pseudo-atom molecular dynamics method. Several implementation options that have been widely used in the literature are systematically tested through comparison to accurate, but expensive, Kohn–Sham density functional theory molecular dynamics (KS-DFT-MD) calculations. The comparison is made for several elements and mixtures over a wide range of temperatures and densities, and reveals a preferred method that generally gives very good agreement with the KS-DFT-MD results at a fraction of the computational cost.

  11. Data processing for GPS common view time comparison between remote clocks

    NASA Astrophysics Data System (ADS)

    Li, Bian

    2004-12-01

    The GPS common-view (CV) method will play an important role in the JATC (joint atomic time of China) system, which is being rebuilt. The selection of common-view data and the methods for filtering random noise from the observed data are introduced. The methods for correcting ionospheric delay and geometric delay in GPS CV comparison are expounded. Calculation results are presented for CV comparison data between NTSC (National Time Service Center, Chinese Academy of Sciences) and CRL (Communications Research Laboratory, since renamed the National Institute of Information and Communications Technology).
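
    The common-view principle behind the comparison can be stated in one line of arithmetic: both stations observe the same satellite clock at the same epoch, so differencing their readings cancels the satellite clock term, (clkA − clkSat) − (clkB − clkSat) = clkA − clkB. A minimal sketch, with made-up track values:

```python
# Hypothetical common-view readings: station A and station B each measure
# the offset of their local clock against the SAME satellite at the same
# epochs, so the satellite clock (and much of the common path error)
# cancels in the difference.

def common_view_difference(a_minus_sat, b_minus_sat):
    """Return clock A - clock B for paired common-view readings (seconds)."""
    return [a - b for a, b in zip(a_minus_sat, b_minus_sat)]

# three illustrative track readings, in seconds (values are made up)
a = [120.4e-9, 121.1e-9, 119.8e-9]   # clock A - satellite
b = [ 95.2e-9,  95.9e-9,  94.1e-9]   # clock B - satellite
diffs = common_view_difference(a, b)
mean_offset = sum(diffs) / len(diffs)
print(f"A - B = {mean_offset * 1e9:.2f} ns")
```

    In practice the remaining error budget is dominated by the ionospheric and geometric delay corrections the abstract discusses, which are applied to each station's readings before differencing.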

  12. Improved Estimation of Cardiac Function Parameters Using a Combination of Independent Automated Segmentation Results in Cardiovascular Magnetic Resonance Imaging.

    PubMed

    Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe; Frouin, Frederique; Garreau, Mireille

    2015-01-01

    This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert.
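
    STAPLE itself is an expectation-maximization algorithm that weights each input segmentation by its estimated sensitivity and specificity; as a deliberately simplified, hypothetical stand-in, the sketch below combines binary left-ventricle masks by per-pixel majority vote, which captures the core idea that a consensus of independent methods suppresses individual outliers.

```python
import numpy as np

def majority_vote(masks):
    """Combine a list of equal-shape binary masks (0/1 arrays) per pixel.
    A simplified stand-in for STAPLE: every rater gets equal weight."""
    stack = np.stack(masks).astype(float)
    return (stack.mean(axis=0) >= 0.5).astype(np.uint8)

# three toy 4x4 "segmentations" that disagree on one pixel
m1 = np.zeros((4, 4), np.uint8)
m1[1:3, 1:3] = 1                      # a 2x2 "ventricle"
m2 = m1.copy()
m3 = m1.copy()
m3[0, 0] = 1                          # one rater's outlier pixel
combined = majority_vote([m1, m2, m3])
print(combined[0, 0], combined[1, 1])  # outlier voted out; consensus kept
```

    STAPLE improves on this by learning per-rater reliability from the data, which is why combinations of automated methods can approach expert accuracy.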

  13. Improved Estimation of Cardiac Function Parameters Using a Combination of Independent Automated Segmentation Results in Cardiovascular Magnetic Resonance Imaging

    PubMed Central

    Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe

    2015-01-01

    This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert. PMID:26287691

  14. A Stellar Dynamical Black Hole Mass for the Reverberation Mapped AGN NGC 5273

    NASA Astrophysics Data System (ADS)

    Batiste, Merida; Bentz, Misty C.; Valluri, Monica; Onken, Christopher A.

    2018-01-01

    We present preliminary results from stellar dynamical modeling of the mass of the central super-massive black hole (MBH) in the active galaxy NGC 5273. NGC 5273 is one of the few AGN with a secure MBH measurement from reverberation-mapping that is also nearby enough to measure MBH with stellar dynamical modeling. Dynamical modeling and reverberation-mapping are the two most heavily favored methods of direct MBH determination in the literature; however, the specific limitations of each method mean that there are very few galaxies for which both can be used. To date only two such galaxies, NGC 3227 and NGC 4151, have MBH determinations from both methods. Given this small sample size, it is not yet clear that the two methods give consistent results. Moreover, given the inherent uncertainties and potential systematic biases in each method, it is likewise unclear whether one method should be preferred over the other. This study is part of an ongoing project to increase the sample of galaxies with secure MBH measurements from both methods, so that a direct comparison may be made. NGC 5273 provides a particularly valuable comparison because it is free of kinematic substructure (e.g. the presence of a bar, as is the case for NGC 4151) which can complicate and potentially bias results from stellar dynamical modeling. I will discuss our current results as well as the advantages and limitations of each method, and the potential sources of systematic bias that may affect comparison between results.

  15. SURVEYS OF FALLOUT SHELTER--A COMPARISON BETWEEN AERIAL PHOTOGRAPHIC AND DOCUMENTARY METHODS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleinecke, D.C.

    1960-02-01

    In 1959 a large part of Contra Costa County, California, was surveyed for fallout shelter areas. This survey was based on an examination of the tax assessor's records of existing buildings. A portion of this area was also surveyed independently by a method based on aerial photography. A statistical comparison of the results of these two surveys indicates that the aerial photographic method was more efficient than the documentary method in locating potential shelter space in buildings of heavy construction. This result, however, is probably not operationally significant. There is reason to believe that a combination of these two survey methods could be devised which would be operationally preferable to either method. (auth)

  16. Realization of the medium and high vacuum primary standard in CENAM, Mexico

    NASA Astrophysics Data System (ADS)

    Torres-Guzman, J. C.; Santander, L. A.; Jousten, K.

    2005-12-01

    A medium and high vacuum primary standard, based on the static expansion method, has been set up at Centro Nacional de Metrología (CENAM), Mexico. This system has four volumes and covers a measuring range of 1 × 10⁻⁵ Pa to 1 × 10³ Pa of absolute pressure. As part of its realization, a characterization was performed, which included volume calibrations, several tests and a bilateral key comparison. To determine the expansion ratios, two methods were applied: the gravimetric method and the method with a linearized spinning rotor gauge. The outgassing ratios for the whole system were also determined. A comparison was performed with Physikalisch-Technische Bundesanstalt (comparison SIM-Euromet.M.P-BK3). By means of this comparison, a link has been achieved with the Euromet comparison (Euromet.M.P-K1.b). As a result, it is concluded that the value obtained at CENAM is equivalent to the Euromet reference value, and therefore the design, construction and operation of CENAM's SEE-1 vacuum primary standard were successful.
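
    The static expansion principle the standard relies on is simple ideal-gas arithmetic: gas at a directly measurable pressure p₀ in a small volume v expands into a large evacuated volume V, giving p₁ = p₀·v/(v + V), and chaining expansions reaches very low pressures. A minimal sketch with illustrative volumes (not CENAM's actual expansion ratios):

```python
def expand(p, small, large):
    """Ideal-gas, isothermal pressure after one static expansion (Pa)."""
    return p * small / (small + large)

p = 1.0e3                      # start at 1 kPa, directly measurable
for _ in range(2):             # two successive 1:1000 expansions
    p = expand(p, 0.1, 99.9)   # ratio 0.1 L / (0.1 L + 99.9 L) = 1/1000
print(f"{p:.1e} Pa")           # 1 kPa x (1e-3)^2 = 1e-3 Pa
```

    The accuracy of the realized pressure therefore rests on how well the volume (expansion) ratios are known, which is why the abstract describes determining them by two independent methods.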

  17. Validation of the analytical methods in the LWR code BOXER for gadolinium-loaded fuel pins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paratte, J.M.; Arkuszewski, J.J.; Kamboj, B.K.

    1990-01-01

    Due to the very high absorption occurring in gadolinium-loaded fuel pins, calculations of lattices with such pins present are a demanding test of the analysis methods in light water reactor (LWR) cell and assembly codes. Considerable effort has, therefore, been devoted to the validation of code methods for gadolinia fuel. The goal of the work reported in this paper is to check the analysis methods in the LWR cell/assembly code BOXER and its associated cross-section processing code ETOBOX, by comparison of BOXER results with those from a very accurate Monte Carlo calculation for a gadolinium benchmark problem. Initial results of such a comparison have been previously reported. However, the Monte Carlo calculations, done with the MCNP code, were performed at Los Alamos National Laboratory using ENDF/B-V data, while the BOXER calculations were performed at the Paul Scherrer Institute using JEF-1 nuclear data. This difference in the basic nuclear data used for the two calculations, caused by the restricted nature of these evaluated data files, led to associated uncertainties in a comparison of the results for methods validation. In the joint investigations at the Georgia Institute of Technology and PSI, such uncertainty in this comparison was eliminated by using ENDF/B-V data for BOXER calculations at Georgia Tech.

  18. Turboexpander calculations using a generalized equation of state correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, M.S.; Starling, K.E.

    1975-01-01

    A generalized method for predicting the thermodynamic properties of natural gas fluids has been developed and tested. The results of several comparisons between thermodynamic property values predicted by the method and experimental data are presented. Comparisons of predicted and experimental vapor-liquid equilibrium are presented. These comparisons indicate that the generalized correlation can be used to predict many thermodynamic properties of natural gas and LNG. Turboexpander calculations are presented to show the utility of the generalized correlation for process design calculations.

  19. Agreement of Tracing and Direct Viewing Techniques for Cervical Vertebral Maturation Assessment.

    PubMed

    Wiwatworakul, Opas; Manosudprasit, Montian; Pisek, Poonsak; Chatrchaiwiwatana, Supaporn; Wangsrimongkol, Tasanee

    2015-08-01

    This study aimed to evaluate agreement among three methods for cervical vertebral maturation (CVM) assessment: direct viewing, tracing only, and tracing with digitized points. Two examiners received training and tests of reliability with each CVM method before evaluation of agreement among methods. The subjects were 96 lateral cephalometric radiographs of female cleft patients (films of eight subjects for each age from seven to 18 years). The examiners interpreted the CVM stages of the subjects with a four-week interval between uses of each method. The ranges of weighted kappa values for paired comparisons among the three methods were: 0.96-0.98 for direct viewing versus tracing only; 0.93-0.94 for direct viewing versus tracing with digitized points; and 0.96-0.97 for tracing only versus tracing with digitized points. The intraclass correlation coefficient (ICC) value among the three methods was 0.95. These results indicated very good agreement among methods. Direct viewing is therefore suitable for CVM assessment, without the extra time needed for tracing, and the three methods might be used interchangeably.
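
    For ordinal ratings such as CVM stages, the weighted kappa penalizes disagreements in proportion to how far apart the stages are. A minimal sketch of the linearly weighted kappa, using made-up ratings rather than the study's data:

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat):
    """Linearly weighted Cohen's kappa for two ordinal rating sequences."""
    obs = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        obs[a, b] += 1
    obs /= obs.sum()                                   # observed proportions
    idx = np.arange(n_cat)
    w = np.abs(np.subtract.outer(idx, idx))            # linear disagreement weights
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))   # chance-expected proportions
    return 1.0 - (w * obs).sum() / (w * exp).sum()

a = [0, 1, 2, 3, 4, 5, 2, 3]        # method 1 stages (hypothetical)
b = [0, 1, 2, 3, 4, 5, 2, 4]        # method 2 differs by one adjacent stage
print(round(weighted_kappa(a, b, 6), 3))
```

    A single adjacent-stage disagreement out of eight films already yields a kappa above 0.9, which gives a feel for the near-perfect 0.93-0.98 range reported above.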

  20. KEY COMPARISON: Final report on CCQM-K69 key comparison: Testosterone glucuronide in human urine

    NASA Astrophysics Data System (ADS)

    Liu, Fong-Ha; Mackay, Lindsey; Murby, John

    2010-01-01

    The CCQM-K69 key comparison of testosterone glucuronide in human urine was organized under the auspices of the CCQM Organic Analysis Working Group (OAWG). The National Measurement Institute Australia (NMIA) acted as the coordinating laboratory for the comparison. The samples distributed for the key comparison were prepared at NMIA with funding from the World Anti-Doping Agency (WADA). WADA granted approval for this material to be used for the intercomparison provided the distribution and handling of the material were strictly controlled. Three national metrology institutes (NMIs)/designated institutes (DIs) developed reference methods and submitted data for the key comparison, along with two other laboratories who participated in the parallel pilot study. A good selection of analytical methods and sample workup procedures was displayed in the results submitted, considering the complexities of the matrix involved. The comparability of measurement results was successfully demonstrated by the participating NMIs. Only the key comparison data were used to estimate the key comparison reference value (KCRV), using the arithmetic mean approach. The reported expanded uncertainties for results ranged from 3.7% to 6.7% at the 95% level of confidence, and all results agreed within the expanded uncertainty of the KCRV. The main text of this report appears in Appendix B of the BIPM key comparison database, kcdb.bipm.org. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
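
    The arithmetic-mean KCRV approach mentioned above reduces to simple statistics: the reference value is the unweighted mean of the key comparison results, and each participant's degree of equivalence is its deviation from that mean. A sketch with purely illustrative numbers (not the CCQM-K69 results):

```python
import math

# Hypothetical participant results and standard uncertainties (units arbitrary)
results = {"NMI-A": 10.2, "NMI-B": 10.5, "NMI-C": 10.1}
u_std   = {"NMI-A": 0.20, "NMI-B": 0.25, "NMI-C": 0.18}

kcrv = sum(results.values()) / len(results)          # unweighted arithmetic mean
# standard uncertainty of the mean of independent results
u_kcrv = math.sqrt(sum(u * u for u in u_std.values())) / len(results)
doe = {lab: x - kcrv for lab, x in results.items()}  # degrees of equivalence
print(f"KCRV = {kcrv:.2f}, u(KCRV) = {u_kcrv:.3f}")
```

    Agreement "within the expanded uncertainty of the KCRV" then means each |dᵢ| is smaller than the corresponding expanded (k = 2, approximately 95%) uncertainty of the difference.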

  1. The differential path phase comparison method for determining pressure derivatives of elastic constants of solids

    NASA Astrophysics Data System (ADS)

    Peselnick, L.

    1982-08-01

    An ultrasonic method is presented which combines features of the differential path and the phase comparison methods. The proposed differential path phase comparison method, referred to as the 'hybrid' method for brevity, eliminates errors resulting from phase changes in the bond between the sample and buffer rod. Define r(P) and R(P) as the squares of the normalized cancellation frequencies of the sample waves for shear and compressional waves, respectively, and define N as the number of wavelengths in twice the sample length. The pressure derivatives r'(P) and R'(P) for samples of Alcoa 2024-T4 aluminum were obtained by using both the phase comparison and the hybrid methods. The values of the pressure derivatives obtained by using the phase comparison method show variations by as much as 40% for small values of N (N < 50). The pressure derivatives as determined from the hybrid method are reproducible to within ±2%, independent of N. The values of the pressure derivatives determined by the phase comparison method for large N are the same as those determined by the hybrid method. Advantages of the hybrid method are (1) no pressure-dependent phase shift at the buffer-sample interface, (2) elimination of deviatoric stress in the sample portion of the sample assembly with application of hydrostatic pressure, and (3) operation at lower ultrasonic frequencies (for comparable sample lengths), which eliminates detrimental high-frequency ultrasonic problems. A reduction of the uncertainties of the pressure derivatives of single crystals and of low-porosity polycrystals permits extrapolation of such experimental data to deeper mantle depths.

  2. A comparison of automated crater detection methods

    NASA Astrophysics Data System (ADS)

    Bandeira, L.; Barreira, C.; Pina, P.; Saraiva, J.

    2008-09-01

    This work presents early results of a comparison between some common methodologies for automated crater detection. The three procedures considered were applied to images of the surface of Mars, thus illustrating some pros and cons of their use. We aim to establish the clear advantages in using this type of method in the study of planetary surfaces.

  3. Resting-state fMRI data reflects default network activity rather than null data: A defense of commonly employed methods to correct for multiple comparisons.

    PubMed

    Slotnick, Scott D

    2017-07-01

    Analysis of functional magnetic resonance imaging (fMRI) data typically involves over one hundred thousand independent statistical tests; therefore, it is necessary to correct for multiple comparisons to control familywise error. In a recent paper, Eklund, Nichols, and Knutsson used resting-state fMRI data to evaluate commonly employed methods to correct for multiple comparisons and reported unacceptable rates of familywise error. Eklund et al.'s analysis was based on the assumption that resting-state fMRI data reflect null data; however, their 'null data' actually reflected default network activity that inflated familywise error. As such, Eklund et al.'s results provide no basis to question the validity of the thousands of published fMRI studies that have corrected for multiple comparisons or the commonly employed methods to correct for multiple comparisons.

  4. GENOME-WIDE COMPARATIVE ANALYSIS OF PHYLOGENETIC TREES: THE PROKARYOTIC FOREST OF LIFE

    PubMed Central

    Puigbò, Pere; Wolf, Yuri I.; Koonin, Eugene V.

    2013-01-01

    Genome-wide comparison of phylogenetic trees is becoming an increasingly common approach in evolutionary genomics, and a variety of approaches for such comparison have been developed. In this article we present several methods for comparative analysis of large numbers of phylogenetic trees. To compare phylogenetic trees taking into account the bootstrap support for each internal branch, the Boot-Split Distance (BSD) method is introduced as an extension of the previously developed Split Distance (SD) method for tree comparison. The BSD method implements the straightforward idea that comparison of phylogenetic trees can be made more robust by treating tree splits differentially depending on the bootstrap support. Approaches are also introduced for detecting tree-like and net-like evolutionary trends in the phylogenetic Forest of Life (FOL), i.e., the entirety of the phylogenetic trees for conserved genes of prokaryotes. The principal method employed for this purpose includes mapping quartets of species onto trees to calculate the support of each quartet topology and so to quantify the tree and net contributions to the distances between species. We describe the application of these methods to the analysis of the FOL and the results obtained with them. These results support the concept of the Tree of Life (TOL) as a central evolutionary trend in the FOL as opposed to the traditional view of the TOL as a ‘species tree’. PMID:22399455

  5. Genome-wide comparative analysis of phylogenetic trees: the prokaryotic forest of life.

    PubMed

    Puigbò, Pere; Wolf, Yuri I; Koonin, Eugene V

    2012-01-01

    Genome-wide comparison of phylogenetic trees is becoming an increasingly common approach in evolutionary genomics, and a variety of approaches for such comparison have been developed. In this article, we present several methods for comparative analysis of large numbers of phylogenetic trees. To compare phylogenetic trees taking into account the bootstrap support for each internal branch, the Boot-Split Distance (BSD) method is introduced as an extension of the previously developed Split Distance method for tree comparison. The BSD method implements the straightforward idea that comparison of phylogenetic trees can be made more robust by treating tree splits differentially depending on the bootstrap support. Approaches are also introduced for detecting tree-like and net-like evolutionary trends in the phylogenetic Forest of Life (FOL), i.e., the entirety of the phylogenetic trees for conserved genes of prokaryotes. The principal method employed for this purpose includes mapping quartets of species onto trees to calculate the support of each quartet topology and so to quantify the tree and net contributions to the distances between species. We describe the application of these methods to analyze the FOL and the results obtained with these methods. These results support the concept of the Tree of Life (TOL) as a central evolutionary trend in the FOL as opposed to the traditional view of the TOL as a "species tree."
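
    The core idea of a bootstrap-weighted split distance can be sketched compactly. The formula below is one plausible illustration of the idea, not the authors' exact BSD definition: each split (stored as a frozenset of the taxa on one side) maps to its bootstrap support, and mismatches between two trees are penalized in proportion to that support.

```python
def boot_split_distance(t1, t2):
    """Support-weighted symmetric difference of two split sets, in [0, 1].
    t1, t2: dicts mapping frozenset-of-taxa splits -> bootstrap support.
    Illustrative weighting, not the published BSD formula."""
    all_splits = set(t1) | set(t2)
    num = sum(abs(t1.get(s, 0.0) - t2.get(s, 0.0)) for s in all_splits)
    den = sum(max(t1.get(s, 0.0), t2.get(s, 0.0)) for s in all_splits)
    return num / den if den else 0.0

# two toy 5-taxon trees: one strong shared split, one weakly supported conflict
t1 = {frozenset("AB"): 0.95, frozenset("ABC"): 0.60}
t2 = {frozenset("AB"): 0.90, frozenset("ABD"): 0.40}
print(round(boot_split_distance(t1, t2), 3))
```

    Note how the conflict between the ABC and ABD splits contributes fully, while the shared AB split contributes only the small difference in its support, which is the robustness gain over the unweighted split distance.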

  6. Treatment of transverse patellar fractures: a comparison between metallic and non-metallic implants.

    PubMed

    Heusinkveld, Maarten H G; den Hamer, Anniek; Traa, Willeke A; Oomen, Pim J A; Maffulli, Nicola

    2013-01-01

    Several methods of transverse patellar fixation have been described. This study compares the clinical outcome and the occurrence of complications of various fixation methods. The databases PubMed, Web of Science, Science Direct, Google Scholar and Google were searched. A direct comparison between fixation techniques using mixed or non-metallic implants and metallic K-wire and tension band fixation shows no significant difference in clinical outcome between the two groups. Additionally, studies reporting novel operation techniques show good clinical results. Studies describing the treatment of patients using non-metallic or mixed implants are fewer than those using metallic fixation. A wide variety of clinical scoring systems was used for assessing the results of treatment, which makes direct comparison difficult. More data on fracture treatment using non-metallic or mixed implants are needed to achieve a more balanced comparison.

  7. Comparison of ozone determinations by ultraviolet photometry and gas-phase titration

    NASA Technical Reports Server (NTRS)

    Demore, W. B.; Patapoff, M.

    1976-01-01

    A comparison of ozone determinations based on ultraviolet absorption photometry and gas-phase titration (GPT) shows good agreement between the two methods. Together with other results, these findings indicate that three candidate reference methods for ozone, UV photometry, IR photometry, and GPT are in substantial agreement. However, the GPT method is not recommended for routine use by air pollution agencies for calibration of ozone monitors because of susceptibility to experimental error.

  8. Error simulation of paired-comparison-based scaling methods

    NASA Astrophysics Data System (ADS)

    Cui, Chengwu

    2000-12-01

    Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods, and without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors on scaled values derived from paired-comparison-based scaling methods are simulated with randomly introduced choice errors that follow the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented in the form of the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling-error estimation and measurement design. The simulation shows that paired-comparison-based scaling methods can have large errors on the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors on actually scaled values of color image prints as measured by the method of paired comparison.
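
    The kind of Monte Carlo the paper describes can be sketched with Thurstone Case V scaling: true scale values generate choice probabilities, binomial sampling perturbs the observed proportions, and the recovered scale values are compared against the truth. Stimulus values and sample size below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
true = np.array([0.0, 0.4, 0.9, 1.5])   # true scale values for 4 stimuli
n_obs = 50                               # judgments per stimulus pair

p_true = norm.cdf(true[:, None] - true[None, :])   # P(i preferred to j), Case V
counts = rng.binomial(n_obs, p_true)               # binomially sampled choices
p_hat = np.clip(counts / n_obs, 0.01, 0.99)        # guard against infinite z
z = norm.ppf(p_hat)                                # z_ij estimates true_i - true_j
est = z.mean(axis=1)                               # Case V least-squares solution
est -= est[0]                                      # anchor first stimulus at 0
print(np.round(est - true, 2))                     # per-stimulus scale error
```

    Rerunning this over many random seeds and tabulating the standard deviation of `est - true` for different `n_obs` and numbers of stimuli reproduces the kind of error surface the paper fits with an empirical equation.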

  9. Characterization and comparison of emissions from rudimentary waste disposal technologies

    EPA Science Inventory

    Results from a 2011 simulation of burn pit emissions and air curtain incinerator emissions, recent developments in methods for open-air sampling, a comparison of waste energy technologies, and current SERDP programs in this area.

  10. Operating Reserves and Wind Power Integration: An International Comparison; Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milligan, M.; Donohoo, P.; Lew, D.

    2010-10-01

    This paper provides a high-level international comparison of methods and key results from both operating practice and integration analysis, based on informal collaboration under International Energy Agency Task 25: Large-scale Wind Integration.

  11. Standard methods for open hole tension testing of textile composites

    NASA Technical Reports Server (NTRS)

    Portanova, M. A.; Masters, J. E.

    1995-01-01

    Sizing effects have been investigated by comparing the open hole failure strengths of each of four different braided architectures as a function of specimen thickness, hole diameter, and the ratio of specimen width to hole diameter. The data used to make these comparisons were primarily generated by Boeing. Direct comparisons of Boeing's results were made with experiments conducted at West Virginia University whenever possible. Indirect comparisons were made with test results for other 2-D braids and 3-D weaves tested by Boeing and Lockheed. In general, failure strength was found to decrease with increasing plate thickness, increase with decreasing hole size, and decrease with decreasing width-to-diameter ratio. The interpretation of the sensitivity to each of these geometrical parameters was complicated by scatter in the test data. For open hole tension testing of textile composites, the use of standard testing practices employed by industry, such as ASTM D5766 - Standard Test Method for Open Hole Tensile Strength of Polymer Matrix Composite Laminates, should provide adequate results for material comparison studies.

  12. Real-Time Parameter Estimation Method Applied to a MIMO Process and its Comparison with an Offline Identification Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaplanoglu, Erkan; Safak, Koray K.; Varol, H. Selcuk

    2009-01-12

    An experiment-based method is proposed for parameter estimation of a class of linear multivariable systems. The method was applied to a pressure-level control process. Experimental time domain input/output data were utilized in a gray-box modeling approach. Prior knowledge of the form of the system transfer function matrix elements is assumed. Continuous-time system transfer function matrix parameters were estimated in real time by the least-squares method. Simulation results of the experimentally determined system transfer function matrix compare very well with the experimental results. For comparison, and as an alternative to the proposed real-time estimation method, we also implemented an offline identification method using artificial neural networks and obtained fairly good results. The proposed methods can be implemented conveniently on a desktop PC equipped with a data acquisition board for parameter estimation of moderately complex linear multivariable systems.
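
    The least-squares step can be illustrated on a single input/output channel. This is a hypothetical batch sketch, not the paper's MIMO real-time implementation: a discrete-time first-order model y[k+1] = a·y[k] + b·u[k] is fitted from simulated noisy data, with the "true" (a, b) invented to generate the measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
a_true, b_true = 0.9, 0.5                    # made-up plant parameters
u = rng.uniform(-1, 1, 200)                  # persistently exciting input
y = np.zeros(201)
for k in range(200):                         # simulate the plant + sensor noise
    y[k + 1] = a_true * y[k] + b_true * u[k] + rng.normal(0, 0.01)

phi = np.column_stack([y[:-1], u])           # regressor matrix [y[k], u[k]]
theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)
print(np.round(theta, 3))                    # estimates close to (a_true, b_true)
```

    A real-time variant would update `theta` recursively (recursive least squares) as each new sample arrives, rather than solving the batch problem, which is what makes the method suitable for a PC with a data acquisition board.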

  13. Comparison study of two procedures for the determination of emamectin benzoate in medicated fish feed.

    PubMed

    Farer, Leslie J; Hayes, John M

    2005-01-01

    A new method has been developed for the determination of emamectin benzoate in fish feed. The method uses a wet extraction, cleanup by solid-phase extraction, and quantitation and separation by liquid chromatography (LC). In this paper, we compare the performance of this method with that of a previously reported LC assay for the determination of emamectin benzoate in fish feed. Although similar to the previous method, the new procedure uses a different sample pretreatment, wet extraction, and quantitation method. The performance of the new method was compared with that of the previously reported method by analyses of 22 medicated feed samples from various commercial sources. A comparison of the results presented here reveals slightly lower assay values obtained with the new method. Although a paired sample t-test indicates the difference in results is significant, this difference is within the method precision of either procedure.
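
    The paired-sample t-test used above to compare the two assay procedures on the same feed samples can be sketched directly. The assay values below are made up for illustration, not the study's 22-sample data:

```python
from scipy import stats

# Hypothetical emamectin benzoate assay values (same samples, two methods)
old = [10.1, 9.8, 10.4, 10.0, 9.9, 10.2, 10.3, 9.7]   # previous LC method
new = [ 9.9, 9.7, 10.2, 9.9, 9.8, 10.0, 10.2, 9.6]    # new method, slightly lower

t_stat, p_value = stats.ttest_rel(old, new)            # paired-sample t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

    A small, consistent per-sample offset produces a significant paired t-test even when the offset is within each method's precision, which is exactly the situation the abstract describes.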

  14. A method for improving reliability and relevance of LCA reviews: the case of life-cycle greenhouse gas emissions of tap and bottled water.

    PubMed

    Fantin, Valentina; Scalbi, Simona; Ottaviano, Giuseppe; Masoni, Paolo

    2014-04-01

    The purpose of this study is to propose a method for harmonising Life Cycle Assessment (LCA) literature studies on the same product, or on different products fulfilling the same function, for a reliable and meaningful comparison of their life-cycle environmental impacts. The method is divided into six main steps which aim to rationalize and quicken the effort needed to carry out the comparison: 1) a clear definition of the goal and scope of the review; 2) critical review of the references; 3) identification of significant parameters that have to be harmonised; 4) harmonisation of the parameters; 5) statistical analysis to support the comparison; 6) results and discussion. This approach was then applied to the comparative analysis of the published LCA studies on tap and bottled water production, focussing on Global Warming Potential (GWP) results, with the aim of identifying the environmentally preferable alternative. A statistical analysis with Wilcoxon's test confirmed that the difference between harmonised GWP values of tap and bottled water was significant. The comparison of the harmonised mean GWP results showed that tap water always has the best environmental performance, even in the case of high energy-consuming technologies for drinking water treatment. The strength of the method is that it enables both a deep analysis of the LCA literature and more consistent comparisons across the published LCAs. For these reasons, it can be a valuable tool which provides useful information for both practitioners and decision makers. Finally, its application to the case study allowed both a description of system variability and an evaluation of the importance of several key parameters for tap and bottled water production. The comparative review of LCA studies, with the inclusion of a statistical decision test, can validate and strengthen the final statements of the comparison. Copyright © 2014 Elsevier B.V. All rights reserved.
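
    Step 5, the statistical support for the comparison, can be sketched with a Wilcoxon rank test on two independent sets of harmonised GWP values. The numbers below are illustrative placeholders (hypothetical kg CO₂-eq per functional unit), not the review's harmonised data, and the two-sample rank-sum variant is used since the tap and bottled studies are independent groups:

```python
from scipy import stats

tap     = [0.2, 0.3, 0.25, 0.4, 0.35, 0.3, 0.28]   # hypothetical tap-water GWPs
bottled = [150, 220, 180, 260, 200, 190, 240]       # hypothetical bottled-water GWPs

# one-sided test: is tap water's GWP distribution lower than bottled water's?
stat, p = stats.mannwhitneyu(tap, bottled, alternative="less")
print(f"U = {stat}, p = {p:.4f}")
```

    A rank-based test is a sensible choice here because harmonised literature values are few, skewed, and on very different scales, so normality assumptions behind a t-test would be hard to defend.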

  15. A Comparison of Cut Scores Using Multiple Standard Setting Methods.

    ERIC Educational Resources Information Center

    Impara, James C.; Plake, Barbara S.

    This paper reports the results of using several alternative methods of setting cut scores. The methods used were: (1) a variation of the Angoff method (1971); (2) a variation of the borderline group method; and (3) an advanced impact method (G. Dillon, 1996). The results discussed are from studies undertaken to set the cut scores for fourth grade…

  16. Comparison of aerodynamic coefficients obtained from theoretical calculations, wind tunnel tests, and flight test data reduction for the Alpha Jet aircraft

    NASA Technical Reports Server (NTRS)

    Guiot, R.; Wunnenberg, H.

    1980-01-01

    The methods by which aerodynamic coefficients are determined are discussed. These include calculations, wind tunnel experiments, and in-flight experiments for various prototypes of the Alpha Jet. A comparison of the results obtained shows good correlation between predictions and in-flight test results.

  17. Comparison of heavy-ion- and electron-beam upset data for GaAs SRAMs. Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flesner, L.D.; Zuleeg, R.; Kolasinski, W.A.

    1992-07-16

    We report the results of experiments designed to evaluate the extent to which focused electron-beam pulses simulate energetic ion upset phenomena in GaAs memory circuits fabricated by the McDonnell Douglas Astronautics Company. The results of two experimental methods were compared: irradiation by heavy-ion particle beams, and upset mapping using focused electron pulses. Linear energy transfer (LET) thresholds and upset cross sections are derived from the data for both methods. A comparison of the results shows good agreement, indicating that for these circuits electron-beam pulse mapping is a viable simulation technique.

  18. An evaluation of objective rating methods for full-body finite element model comparison to PMHS tests.

    PubMed

    Vavalle, Nicholas A; Jelen, Benjamin C; Moreno, Daniel P; Stitzel, Joel D; Gayzik, F Scott

    2013-01-01

    Objective evaluation methods of time history signals are used to quantify how well simulated human body responses match experimental data. As the use of simulations grows in the field of biomechanics, there is a need to establish standard approaches for comparisons. There are 2 aims of this study. The first is to apply 3 objective evaluation methods found in the literature to a set of data from a human body finite element model. The second is to compare the results of each method, examining how they are correlated to each other and the relative strengths and weaknesses of the algorithms. In this study, the methods proposed by Sprague and Geers (magnitude and phase error, SGM and SGP), Rhule et al. (cumulative standard deviation, CSD), and Gehre et al. (CORrelation and Analysis, or CORA: size, phase, shape, corridor) were compared. A 40 kph frontal sled test presented by Shaw et al. was simulated using the Global Human Body Models Consortium midsized male full-body finite element model (v. 3.5). Mean and standard deviation experimental data (n = 5) from Shaw et al. were used as the benchmark. Simulated data were output from the model at the appropriate anatomical locations for kinematic comparison. Force data were output at the seat belts, seat pan, knee, and foot restraints. Objective comparisons from 53 time history data channels were compared to the experimental results. To compare the different methods, all objective comparison metrics were cross-plotted and linear regressions were calculated. The following ratings were found to be statistically significantly correlated (P < .01): SGM and CORA size, R² = 0.73; SGP and CORA shape, R² = 0.82; and CSD and CORA's corridor factor, R² = 0.59. The relative strengths of the correlated ratings were then investigated. For example, though correlated to CORA size, SGM carries a sign to indicate whether the simulated response is greater than or less than the benchmark signal. 
A further analysis of the advantages and drawbacks of each method is discussed. The results demonstrate that a single metric is insufficient to provide a complete assessment of how well the simulated results match the experiments. The CORA method provided the most comprehensive evaluation of the signal. Regardless of the method selected, one primary recommendation of this work is that for any comparison the results should be reported with separate assessments of a signal's match to experimental variance, magnitude, phase, and shape. Planned future work includes implementing any forthcoming International Organization for Standardization standards for objective evaluations. Supplemental materials are available in the publisher's online edition of Traffic Injury Prevention.
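
    The Sprague & Geers magnitude and phase errors referenced above have simple closed forms built from inner products of the two signals. The sketch below is a minimal discrete implementation of those two measures plus their combined score; the signal data are illustrative, not from the study.

```python
import math

def sprague_geers(measured, computed):
    """Sprague & Geers magnitude (M) and phase (P) errors, discrete form.

    M carries a sign: positive means the computed signal over-predicts
    the measured magnitude. P is normalised to [0, 1] by pi, and the
    combined score is the Euclidean norm of the two.
    """
    pmm = sum(m * m for m in measured)
    pcc = sum(c * c for c in computed)
    pmc = sum(m * c for m, c in zip(measured, computed))
    magnitude = math.sqrt(pcc / pmm) - 1.0
    cos_arg = max(-1.0, min(1.0, pmc / math.sqrt(pmm * pcc)))
    phase = math.acos(cos_arg) / math.pi
    combined = math.sqrt(magnitude ** 2 + phase ** 2)
    return magnitude, phase, combined

measured = [0.0, 1.0, 2.0, 1.0, 0.0]
scaled = [2 * m for m in measured]      # same shape, double the magnitude
m_err, p_err, c_err = sprague_geers(measured, scaled)
# m_err = 1.0 (100% over-prediction), p_err = 0.0 (identical phase)
```

    This illustrates the sign property discussed above: a pure amplitude scaling moves only M, leaving P at zero.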

  19. Comparison of Quantitative Antifungal Testing Methods for Textile Fabrics.

    PubMed

    Imoto, Yasuo; Seino, Satoshi; Nakagawa, Takashi; Yamamoto, Takao A

    2017-01-01

     Quantitative antifungal testing methods for textile fabrics under growth-supportive conditions were studied. Fungal growth activities on unfinished textile fabrics and textile fabrics modified with Ag nanoparticles were investigated using the colony counting method and the luminescence method. Morphological changes of the fungi during incubation were investigated by microscopic observation. Comparison of the results indicated that the fungal growth activity values obtained with the colony counting method depended on the morphological state of the fungi on textile fabrics, whereas those obtained with the luminescence method did not. Our findings indicated that unique characteristics of each testing method must be taken into account for the proper evaluation of antifungal activity.

  20. Using spectral methods to obtain particle size information from optical data: applications to measurements from CARES 2010

    NASA Astrophysics Data System (ADS)

    Atkinson, Dean B.; Pekour, Mikhail; Chand, Duli; Radney, James G.; Kolesar, Katheryn R.; Zhang, Qi; Setyan, Ari; O'Neill, Norman T.; Cappa, Christopher D.

    2018-04-01

    Multi-wavelength in situ aerosol extinction, absorption and scattering measurements made at two ground sites during the 2010 Carbonaceous Aerosols and Radiative Effects Study (CARES) are analyzed using a spectral deconvolution method that allows extraction of particle-size-related information, including the fraction of extinction produced by the fine-mode particles and the effective radius of the fine mode. The spectral deconvolution method is typically applied to analysis of remote sensing measurements. Here, its application to in situ measurements allows for comparison with more direct measurement methods and validation of the retrieval approach. Overall, the retrieved fine-mode fraction and effective radius compare well with other in situ measurements, including size distribution measurements and scattering and absorption measurements made separately for PM1 and PM10, although there were some periods during which the different methods yielded different results. One key contributor to differences between the results obtained is the alternative, spectrally based definitions of fine and coarse modes from the optical methods, relative to instruments that use a physically defined cut point. These results indicate that for campaigns where size, composition and multi-wavelength optical property measurements are made, comparison of the results can provide closure or identify unusual circumstances. The comparison here also demonstrates that in situ multi-wavelength optical property measurements can be used to determine information about particle size distributions in situations where direct size distribution measurements are not available.


  4. On assessing bioequivalence and interchangeability between generics based on indirect comparisons.

    PubMed

    Zheng, Jiayin; Chow, Shein-Chung; Yuan, Mengdie

    2017-08-30

    As more and more generics become available in the marketplace, safety and efficacy concerns may arise from the interchangeable use of approved generics. However, bioequivalence assessment among generics of the innovative drug product is not required for regulatory approval. In practice, approved generics are often used interchangeably without any safety-monitoring mechanism. In this article, we propose several methods, based on indirect comparisons, for assessing bioequivalence and interchangeability between generics. The applicability of the methods and the similarity assumptions are discussed, as well as the inappropriateness of directly adopting the adjusted indirect comparison to comparisons between generics. In addition, extensions are given to address important topics in clinical trials for bioequivalence assessment, for example, multiple comparisons and simultaneously testing bioequivalence among three generics. Extensive simulation studies were conducted to investigate the performance of the proposed methods. Studies of malaria generics and HIV/AIDS generics prequalified by the WHO were used as real examples to demonstrate the use of the methods. Copyright © 2017 John Wiley & Sons, Ltd.
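
    For context, the standard average-bioequivalence criterion that underlies such assessments (not the authors' new indirect method) checks whether the 90% confidence interval for the geometric mean ratio of test to reference lies within 0.80-1.25, equivalent to two one-sided tests. A minimal sketch, with an assumed t critical value and invented numbers:

```python
import math

def average_bioequivalence(log_ratio, se, t_crit):
    """Two one-sided tests (TOST) via the 90% CI on the log-scale ratio.

    log_ratio: estimated log geometric-mean ratio (test vs reference)
    se: its standard error; t_crit: one-sided 5% t critical value.
    Returns True if the CI falls strictly inside [ln 0.8, ln 1.25].
    """
    lo, hi = log_ratio - t_crit * se, log_ratio + t_crit * se
    return math.log(0.8) < lo and hi < math.log(1.25)

# Invented example: ratio 1.02, SE 0.05, t ~ 1.717 (about 22 df)
ok = average_bioequivalence(math.log(1.02), 0.05, 1.717)
# The same point estimate with wider variability fails the criterion
fail = average_bioequivalence(math.log(1.02), 0.15, 1.717)
```

    The indirect-comparison problem the paper addresses is that no such head-to-head interval exists between two generics, so it must be reconstructed through the common reference product.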

  5. Prediction of optimum sorption isotherm: comparison of linear and non-linear method.

    PubMed

    Kumar, K Vasanth; Sivanesan, S

    2005-11-11

    Equilibrium parameters for Bismarck brown sorption onto rice husk were estimated by linear least squares and by a trial-and-error non-linear method using the Freundlich, Langmuir and Redlich-Peterson isotherms. A comparison between the linear and non-linear methods of estimating the isotherm parameters is reported. The best-fitting isotherms were the Langmuir and Redlich-Peterson equations. The results show that the non-linear method is a better way to obtain the parameters. The Redlich-Peterson isotherm reduces to the Langmuir isotherm when its exponent g is unity.
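
    To make the linear-versus-non-linear distinction concrete, the sketch below fits the Langmuir isotherm qe = qm·KL·Ce / (1 + KL·Ce) to synthetic, noise-free data two ways: via the linearised form Ce/qe = Ce/qm + 1/(qm·KL) and via a crude non-linear grid search on the sum of squared errors. The parameter values and data are invented for illustration; a real analysis would use a proper non-linear optimiser.

```python
def linear_langmuir_fit(ce, qe):
    # Least-squares line through (Ce, Ce/qe): slope = 1/qm, intercept = 1/(qm*KL)
    x, y = ce, [c / q for c, q in zip(ce, qe)]
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return 1.0 / slope, slope / intercept  # qm, KL

def nonlinear_langmuir_fit(ce, qe):
    # Brute-force SSE minimisation over a small parameter grid
    best = None
    for i in range(81):
        qm = 8.0 + 0.05 * i
        for j in range(81):
            kl = 0.3 + 0.005 * j
            sse = sum((q - qm * kl * c / (1 + kl * c)) ** 2
                      for c, q in zip(ce, qe))
            if best is None or sse < best[0]:
                best = (sse, qm, kl)
    return best[1], best[2]

# Synthetic data generated from qm = 10, KL = 0.5 (no noise)
ce = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
qe = [10 * 0.5 * c / (1 + 0.5 * c) for c in ce]
qm_lin, kl_lin = linear_langmuir_fit(ce, qe)
qm_nl, kl_nl = nonlinear_langmuir_fit(ce, qe)
```

    With noise-free data both routes recover the true parameters; the paper's point is that once measurement error is present, the linearisation distorts the error structure and the non-linear fit is preferable.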

  6. Potential and viscous flow in VTOL, STOL or CTOL propulsion system inlets

    NASA Technical Reports Server (NTRS)

    Stockman, N. O.

    1975-01-01

    A method was developed for analyzing the flow in subsonic axisymmetric inlets at arbitrary conditions of freestream velocity, incidence angle, and inlet mass flow. An improved version of the method is discussed and comparisons of results obtained with the original and improved methods are given. Comparisons with experiments are also presented for several inlet configurations and for various conditions of the boundary layer from insignificant to separated. Applications of the method are discussed, with several examples given for specific cases involving inlets for VTOL lift fans and for STOL engine nacelles.

  7. Comparisons of Lagrangian and Eulerian PDF methods in simulations of non-premixed turbulent jet flames with moderate-to-strong turbulence-chemistry interactions

    NASA Astrophysics Data System (ADS)

    Jaishree, J.; Haworth, D. C.

    2012-06-01

    Transported probability density function (PDF) methods have been applied widely and effectively for modelling turbulent reacting flows. In most applications of PDF methods to date, Lagrangian particle Monte Carlo algorithms have been used to solve a modelled PDF transport equation. However, Lagrangian particle PDF methods are computationally intensive and are not readily integrated into conventional Eulerian computational fluid dynamics (CFD) codes. Eulerian field PDF methods have been proposed as an alternative. Here a systematic comparison is performed among three methods for solving the same underlying modelled composition PDF transport equation: a consistent hybrid Lagrangian particle/Eulerian mesh (LPEM) method, a stochastic Eulerian field (SEF) method and a deterministic Eulerian field method with a direct-quadrature-method-of-moments closure (a multi-environment PDF-MEPDF method). The comparisons have been made in simulations of a series of three non-premixed, piloted methane-air turbulent jet flames that exhibit progressively increasing levels of local extinction and turbulence-chemistry interactions: Sandia/TUD flames D, E and F. The three PDF methods have been implemented using the same underlying CFD solver, and results obtained using the three methods have been compared using (to the extent possible) equivalent physical models and numerical parameters. Reasonably converged mean and rms scalar profiles are obtained using 40 particles per cell for the LPEM method or 40 Eulerian fields for the SEF method. Results from these stochastic methods are compared with results obtained using two- and three-environment MEPDF methods. The relative advantages and disadvantages of each method in terms of accuracy and computational requirements are explored and identified. 
In general, the results obtained from the two stochastic methods (LPEM and SEF) are very similar, and are in closer agreement with experimental measurements than those obtained using the MEPDF method, while MEPDF is the most computationally efficient of the three methods. These and other findings are discussed in detail.

  8. Preliminary topical report on comparison reactor disassembly calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McLaughlin, T.P.

    1975-11-01

    Preliminary results of comparison disassembly calculations for a representative LMFBR model (2100-l voided core) and arbitrary accident conditions are described. The analytical methods employed were the computer programs FX2-POOL, PAD, and VENUS-II. The calculated fission energy depositions are in good agreement, as are measures of the destructive potential of the excursions, kinetic energy, and work. However, in some cases the resulting fuel temperatures are substantially divergent. Differences in the fission energy deposition appear to be attributable to residual inconsistencies in specifying the comparison cases. In contrast, temperature discrepancies probably stem from basic differences in the energy partition models inherent in the codes. Although explanations of the discrepancies are being pursued, the preliminary results indicate that all three computational methods provide a consistent, global characterization of the contrived disassembly accident.

  9. Advantages of high-dose rate (HDR) brachytherapy in treatment of prostate cancer

    NASA Astrophysics Data System (ADS)

    Molokov, A. A.; Vanina, E. A.; Tseluyko, S. S.

    2017-09-01

    Brachytherapy is one of the modern organ-preserving methods of radiation treatment. This article analyzes the results of prostate brachytherapy. These studies of the advantages of high-dose-rate brachytherapy lead to the conclusion that this method of radiation treatment for prostate cancer compares favorably with external-beam (remote) methods and, as an organ-preserving approach, is competitive with surgical treatment. The use of polyfocal transperineal biopsy during the brachytherapy session provides information on the volumetric spread of prostate cancer and allows the dosimetry plan to be adjusted to take the obtained data into account.

  10. SUPPLEMENTARY COMPARISON: APMP.PR-S1 comparison of irradiance responsivity of UVA detectors

    NASA Astrophysics Data System (ADS)

    Xu, Gan; Huang, Xuebo; Liu, Yuanjie

    2007-01-01

    APMP.PR-S1, a supplementary comparison of irradiance responsivity of UVA detectors, was carried out among seven national metrology institutes piloted by SPRING Singapore from 2003 to 2005. Two quantities, the narrow-band UV (365 nm ± 5 nm) irradiance responsivity and the broad-band UVA (315 nm-400 nm) irradiance responsivity of the transfer detectors, were compared. A commercial UV source (medium-pressure mercury short-arc lamp) and UVA detectors were used as transfer standards in the comparison. Measurement results from the participants and their associated uncertainties are reported and analysed. The method of weighted mean with cut-off was used to calculate the comparison reference values. The results from most participating labs lie within ±5% of the comparison reference values, with a few exceptions. The degree of agreement of the comparison depends not only on the base scales of spectral responsivity and spectral irradiance of a laboratory, but equally importantly on the method used for the measurement. This text appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the APMP, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
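
    The "weighted mean with cut-off" used for the reference value can be sketched as follows: weights are inverse variances, but reported uncertainties below a chosen cut-off are raised to it so that no single laboratory's optimistic uncertainty claim dominates. The cut-off choice and data here are illustrative, not the actual comparison values.

```python
def weighted_mean_with_cutoff(values, uncerts, cutoff):
    """Comparison reference value as an inverse-variance weighted mean.

    Uncertainties smaller than `cutoff` are replaced by the cut-off so
    that unrealistically small claims do not dominate the mean.
    """
    eff = [max(u, cutoff) for u in uncerts]
    weights = [1.0 / u ** 2 for u in eff]
    ref = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    u_ref = (1.0 / sum(weights)) ** 0.5   # uncertainty of the weighted mean
    return ref, u_ref

# Illustrative lab results (relative responsivities) and uncertainties
vals = [1.00, 1.02, 0.98]
uncs = [0.01, 0.02, 0.005]   # the 0.005 claim is capped at the cut-off
ref, u_ref = weighted_mean_with_cutoff(vals, uncs, cutoff=0.01)
```
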

  11. A Comparison of Three Methods for the Collection of L2 Data: Free Composition, Translation, and Picture Description. Working Papers on Bilingualism, No. 8.

    ERIC Educational Resources Information Center

    LoCoco, Veronica Gonzalez-Mena

    Three methods for second language data collection are compared: free composition, picture description and translation. The comparison is based on percentage of errors in a grammatical category and in a source category. Most results obtained from the free compositions and picture descriptions tended to be similar. Greater variation was found for…

  12. Comparison Campaign of VLBI Data Analysis Software - First Results

    NASA Technical Reports Server (NTRS)

    Plank, Lucia; Bohm, Johannes; Schuh, Harald

    2010-01-01

    During the development of the Vienna VLBI Software VieVS at the Institute of Geodesy and Geophysics at Vienna University of Technology, a special comparison setup was developed with the goal of easily finding links between deviations of results achieved with different software packages and certain parameters of the observation. The object of comparison is the computed time delay, a value calculated for each observation including all relevant models and corrections that need to be applied in geodetic VLBI analysis. Besides investigating the effects of the various models on the total delay, results of comparisons between VieVS and Occam 6.1 are shown. Using the same methods, a Comparison Campaign of VLBI data analysis software called DeDeCC will soon be launched within the IVS.

  13. Methodological problems in the use of indirect comparisons for evaluating healthcare interventions: survey of published systematic reviews.

    PubMed

    Song, Fujian; Loke, Yoon K; Walsh, Tanya; Glenny, Anne-Marie; Eastwood, Alison J; Altman, Douglas G

    2009-04-03

    To investigate basic assumptions and other methodological problems in the application of indirect comparison in systematic reviews of competing healthcare interventions. Survey of published systematic reviews. Inclusion criteria: systematic reviews published between 2000 and 2007 in which an indirect approach had been explicitly used. Identified reviews were assessed for comprehensiveness of the literature search, the method used for indirect comparison, and whether assumptions about similarity and consistency were explicitly mentioned. The survey included 88 review reports. In 13 reviews, the indirect comparison was informal. In six reviews, results from different trials were naively compared without using a common control. Adjusted indirect comparison was usually done using classic frequentist methods (n=49) or more complex methods (n=18). The key assumption of trial similarity was explicitly mentioned in only 40 of the 88 reviews. The consistency assumption was not explicit in most cases where direct and indirect evidence were compared or combined (18/30). Evidence from head-to-head comparison trials was not systematically searched for, or was not included, in nine cases. The methodological problems identified were an unclear understanding of underlying assumptions, inappropriate search and selection of relevant trials, use of inappropriate or flawed methods, lack of objective and validated methods to assess or improve trial similarity, and inadequate comparison or inappropriate combination of direct and indirect evidence. Adequate understanding of the basic assumptions underlying indirect and mixed treatment comparison is crucial to resolving these methodological problems.
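
    The adjusted indirect comparison most of the surveyed reviews used (the Bucher method) has a very simple core: the indirect effect of A versus C through a common comparator B is the difference of the two direct effects, with variances adding. A minimal sketch with invented log-odds-ratio inputs:

```python
import math

def bucher_indirect(d_ab, se_ab, d_cb, se_cb):
    """Adjusted indirect comparison of A vs C through common comparator B.

    d_ab, d_cb: direct effect estimates (e.g. log odds ratios) of
    A vs B and C vs B. Subtracting preserves the within-trial
    randomisation of each direct comparison; the variances sum.
    """
    d_ac = d_ab - d_cb
    se_ac = math.sqrt(se_ab ** 2 + se_cb ** 2)
    return d_ac, se_ac

# Invented inputs: A vs B log-OR -0.5 (SE 0.20), C vs B log-OR -0.2 (SE 0.15)
d_ac, se_ac = bucher_indirect(-0.5, 0.20, -0.2, 0.15)
# d_ac = -0.3, se_ac = 0.25; the SE exceeds either direct SE, which is
# one reason indirect evidence is weaker than head-to-head trials.
```

    The "naive" comparison criticised above would instead contrast single arms across trials, discarding randomisation entirely; the similarity assumption the survey checks for is what licenses the subtraction.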

  14. A scoping review of indirect comparison methods and applications using individual patient data.

    PubMed

    Veroniki, Areti Angeliki; Straus, Sharon E; Soobiah, Charlene; Elliott, Meghan J; Tricco, Andrea C

    2016-04-27

    Several indirect comparison methods, including network meta-analyses (NMAs), using individual patient data (IPD) have been developed to synthesize evidence from a network of trials. Although IPD indirect comparisons are published with increasing frequency in health care literature, there is no guidance on selecting the appropriate methodology and on reporting the methods and results. In this paper we examine the methods and reporting of indirect comparison methods using IPD. We searched MEDLINE, Embase, the Cochrane Library, and CINAHL from inception until October 2014. We included published and unpublished studies reporting a method, application, or review of indirect comparisons using IPD and at least three interventions. We identified 37 papers, including a total of 33 empirical networks. Of these, only 9 (27 %) IPD-NMAs reported the existence of a study protocol, whereas 3 (9 %) studies mentioned that protocols existed without providing a reference. The 33 empirical networks included 24 (73 %) IPD-NMAs and 9 (27 %) matching adjusted indirect comparisons (MAICs). Of the 21 (64 %) networks with at least one closed loop, 19 (90 %) were IPD-NMAs, 13 (68 %) of which evaluated the prerequisite consistency assumption, and only 5 (38 %) of the 13 IPD-NMAs used statistical approaches. The median number of trials included per network was 10 (IQR 4-19) (IPD-NMA: 15 [IQR 8-20]; MAIC: 2 [IQR 3-5]), and the median number of IPD trials included in a network was 3 (IQR 1-9) (IPD-NMA: 6 [IQR 2-11]; MAIC: 2 [IQR 1-2]). Half of the networks (17; 52 %) applied Bayesian hierarchical models (14 one-stage, 1 two-stage, 1 used IPD as an informative prior, 1 unclear-stage), including either IPD alone or with aggregated data (AD). Models for dichotomous and continuous outcomes were available (IPD alone or combined with AD), as were models for time-to-event data (IPD combined with AD). 
One in three of the indirect comparison methods modelling IPD adjusted results from different trials to estimate effects as if they had come from the same randomized population. Key methodological and reporting elements (e.g., evaluation of consistency, existence of a study protocol) were often missing from indirect comparison papers.

  15. Automatic video shot boundary detection using k-means clustering and improved adaptive dual threshold comparison

    NASA Astrophysics Data System (ADS)

    Sa, Qila; Wang, Zhihui

    2018-03-01

    At present, content-based video retrieval (CBVR) is the mainstream video retrieval approach, using features of the video itself to perform automatic identification and retrieval. This approach relies on a key technology: shot segmentation. In this paper, a method of automatic video shot boundary detection using K-means clustering and improved adaptive dual-threshold comparison is proposed. First, the visual features of every frame are extracted and divided into two categories using the K-means clustering algorithm: frames with significant change and frames with no significant change. Then, the improved adaptive dual-threshold comparison method is applied to the classification results to determine both abrupt and gradual shot boundaries. Finally, an automatic video shot boundary detection system is achieved.
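
    The two-class K-means step can be sketched in a few lines. Here hypothetical inter-frame difference values are clustered (k = 2) into "no significant change" versus "significant change", with the latter marking candidate shot boundaries; both the data and the simple 1-D implementation are illustrative only.

```python
def kmeans_two_classes(values, iters=20):
    """1-D k-means with k = 2, seeded at the min and max values."""
    c0, c1 = min(values), max(values)
    for _ in range(iters):
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        if g0:
            c0 = sum(g0) / len(g0)   # update each centroid to its group mean
        if g1:
            c1 = sum(g1) / len(g1)
    return c0, c1

def boundary_candidates(frame_diffs):
    """Label each frame difference: 1 = significant change (candidate cut)."""
    c0, c1 = kmeans_two_classes(frame_diffs)
    lo, hi = min(c0, c1), max(c0, c1)
    return [int(abs(v - hi) < abs(v - lo)) for v in frame_diffs]

# Hypothetical histogram-difference values between consecutive frames
diffs = [0.10, 0.20, 0.15, 5.00, 0.10, 4.80, 0.20]
labels = boundary_candidates(diffs)
# frames 3 and 5 stand out as candidate shot boundaries
```

    In the paper's pipeline the adaptive dual-threshold stage would then distinguish abrupt cuts from gradual transitions among these candidates.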

  16. Comparison of formant detection methods used in speech processing applications

    NASA Astrophysics Data System (ADS)

    Belean, Bogdan

    2013-11-01

    The paper describes time-frequency representations of the speech signal together with the significance of formants in speech processing applications. Speech formants can be used in emotion recognition, sex discrimination, or the diagnosis of different neurological diseases. Taking into account the various applications of formant detection, two methods for detecting formants are presented. First, the poles resulting from a complex analysis of LPC coefficients are used for formant detection. The second approach uses the Kalman filter for formant prediction along the speech signal. Results are presented for both approaches on real-life speech spectrograms. A comparison of the features of the proposed methods is also performed, in order to establish which method is more suitable for different speech processing applications.
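
    The LPC-pole idea can be illustrated for a single resonance: a complex-conjugate pole pair at radius r and angle θ in a second-order section 1 + a1·z⁻¹ + a2·z⁻² corresponds to a formant at frequency θ·fs/2π. The sketch below recovers that frequency from the coefficients; a full formant tracker would instead factor a higher-order LPC polynomial (e.g. with a polynomial root finder) and keep poles close to the unit circle.

```python
import cmath
import math

def pole_pair_frequency(a1, a2, fs):
    """Formant frequency (Hz) of a 2nd-order LPC section 1 + a1 z^-1 + a2 z^-2.

    The section's complex-conjugate pole pair sits at angle theta on
    the z-plane; the resonance frequency is theta * fs / (2*pi).
    """
    disc = cmath.sqrt(a1 * a1 - 4.0 * a2)   # principal square root
    root = (-a1 + disc) / 2.0               # one pole of the conjugate pair
    return abs(cmath.phase(root)) * fs / (2.0 * math.pi)

# Build coefficients from a known pole: r = 0.98, formant at 700 Hz, fs = 8 kHz
fs = 8000.0
theta = 2.0 * math.pi * 700.0 / fs
a1 = -2.0 * 0.98 * math.cos(theta)
a2 = 0.98 ** 2
freq = pole_pair_frequency(a1, a2, fs)  # recovers ~700 Hz
```
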

  17. Key comparison CCPR-K1.a as an interlaboratory comparison of correlated color temperature

    NASA Astrophysics Data System (ADS)

    Kärhä, P.; Vaskuri, A.; Pulli, T.; Ikonen, E.

    2018-02-01

    We analyze the results of the spectral irradiance key comparison CCPR-K1.a for correlated color temperature (CCT). For four of the 13 participants, the uncertainties of CCT calculated using traditional methods, which do not account for correlations, would be too small. The reason for the failure of the traditional uncertainty calculation is spectral correlations, which produce systematic deviations of the same sign over certain wavelength regions. The results highlight the importance of accounting for such correlations when calculating uncertainties of spectrally integrated quantities.

  18. A comparative study of different methods for calculating electronic transition rates

    NASA Astrophysics Data System (ADS)

    Kananenka, Alexei A.; Sun, Xiang; Schubert, Alexander; Dunietz, Barry D.; Geva, Eitan

    2018-03-01

    We present a comprehensive comparison of the following mixed quantum-classical methods for calculating electronic transition rates: (1) nonequilibrium Fermi's golden rule, (2) mixed quantum-classical Liouville method, (3) mean-field (Ehrenfest) mixed quantum-classical method, and (4) fewest switches surface-hopping method (in diabatic and adiabatic representations). The comparison is performed on the Garg-Onuchic-Ambegaokar benchmark charge-transfer model, over a broad range of temperatures and electronic coupling strengths, with different nonequilibrium initial states, in the normal and inverted regimes. Under weak to moderate electronic coupling, the nonequilibrium Fermi's golden rule rates are found to be in good agreement with the rates obtained via the mixed quantum-classical Liouville method that coincides with the fully quantum-mechanically exact results for the model system under study. Our results suggest that the nonequilibrium Fermi's golden rule can serve as an inexpensive yet accurate alternative to Ehrenfest and the fewest switches surface-hopping methods.

  19. Geometric facial comparisons in speed-check photographs.

    PubMed

    Buck, Ursula; Naether, Silvio; Kreutz, Kerstin; Thali, Michael

    2011-11-01

In many cases, it is not possible to hold motorists accountable for considerable speeding offences, because they deny being the driver shown in the speed-check photograph. An anthropological comparison of facial features using a photo-to-photo comparison can be very difficult, depending on the quality of the photographs. One difficulty of that analysis method is that the comparison photographs of the presumed driver are taken with a different camera or camera lens and from a different angle than the speed-check photograph. Taking a comparison photograph with exactly the same camera setup is almost impossible; therefore, only an imprecise comparison of the individual facial features is possible. The geometry and position of each facial feature, for example the distance between the eyes or the position of the ears, cannot be taken into consideration. We applied a new method using 3D laser scanning, optical surface digitalization, and photogrammetric calculation of the speed-check photograph, which enables a geometric comparison. The influence of the focal length and the distortion of the objective lens are thereby eliminated, and the precise position and viewing direction of the speed-check camera are calculated. Even with low-quality images, or when the face of the driver is partly hidden, this method delivers good results. This new method, Geometric Comparison, is evaluated and validated in a controlled study described in this article.

  20. Estimation of CO2 emissions from waste incinerators: Comparison of three methods.

    PubMed

    Lee, Hyeyoung; Yi, Seung-Muk; Holsen, Thomas M; Seo, Yong-Seok; Choi, Eunhwa

    2018-03-01

Climate-relevant CO2 emissions from waste incineration were compared using three methods: making use of CO2 concentration data, converting O2 concentration and waste characteristic data, and using a mass balance method following Intergovernmental Panel on Climate Change (IPCC) guidelines. For the first two methods, CO2 and O2 concentrations were measured continuously from 24 to 86 days. The O2 conversion method in comparison to the direct CO2 measurement method had a 4.8% mean difference in daily CO2 emissions for four incinerators where analyzed waste composition data were available. However, the IPCC method had a higher difference of 13% relative to the direct CO2 measurement method. For three incinerators using designed values for waste composition, the O2 conversion and IPCC methods in comparison to the direct CO2 measurement method had mean differences of 7.5% and 89%, respectively. Therefore, the use of O2 concentration data measured for monitoring air pollutant emissions is an effective method for estimating CO2 emissions resulting from waste incineration. Copyright © 2017 Elsevier Ltd. All rights reserved.
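The O2-conversion idea rests on combustion stoichiometry: if the maximum (dry, O2-free) CO2 fraction of the burned material is known, the flue-gas CO2 fraction can be inferred from the measured O2 depletion. A minimal sketch, assuming complete combustion; the CO2,max and O2 values are illustrative, not the paper's plant data:

```python
AIR_O2_PCT = 20.9  # O2 volume fraction of dry air, percent

def co2_from_o2(o2_pct, co2_max_pct):
    """Estimate the flue-gas CO2 fraction (vol %) from the measured O2
    fraction (vol %), assuming complete combustion of a fuel whose dry,
    O2-free flue gas would contain co2_max_pct CO2."""
    return co2_max_pct * (AIR_O2_PCT - o2_pct) / AIR_O2_PCT

co2_est = co2_from_o2(o2_pct=10.0, co2_max_pct=18.0)  # about 9.4 vol %
```

Actual emission reporting would multiply such a fraction by the flue-gas flow and a fossil-carbon share, which is where the waste characteristic data in the abstract enter.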

  1. Missing defects? A comparison of microscopic and macroscopic approaches to identifying linear enamel hypoplasia.

    PubMed

    Hassett, Brenna R

    2014-03-01

    Linear enamel hypoplasia (LEH), the presence of linear defects of dental enamel formed during periods of growth disruption, is frequently analyzed in physical anthropology as evidence for childhood health in the past. However, a wide variety of methods for identifying and interpreting these defects in archaeological remains exists, preventing easy cross-comparison of results from disparate studies. This article compares a standard approach to identifying LEH using the naked eye to the evidence of growth disruption observed microscopically from the enamel surface. This comparison demonstrates that what is interpreted as evidence of growth disruption microscopically is not uniformly identified with the naked eye, and provides a reference for the level of consistency between the number and timing of defects identified using microscopic versus macroscopic approaches. This is done for different tooth types using a large sample of unworn permanent teeth drawn from several post-medieval London burial assemblages. The resulting schematic diagrams showing where macroscopic methods achieve more or less similar results to microscopic methods are presented here and clearly demonstrate that "naked-eye" methods of identifying growth disruptions do not identify LEH as often as microscopic methods in areas where perikymata are more densely packed. Copyright © 2013 Wiley Periodicals, Inc.

  2. 75 FR 14569 - Polyethylene Retail Carrier Bags from Taiwan: Final Determination of Sales at Less Than Fair Value

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-26

    ... excludes (1) polyethylene bags that are not printed with logos or store names and that are closeable with... comparison methodology to TCI's targeted sales and the average-to-average comparison methodology to TCI's non... average-to-average comparison method does not account for such price differences and results in the...

  3. A comparison of Thellier-type and multispecimen paleointensity determinations on Pleistocene and historical lava flows from Lanzarote (Canary Islands, Spain)

    NASA Astrophysics Data System (ADS)

    Calvo-Rathert, Manuel; Morales-Contreras, Juan; Carrancho, Ángel; Goguitchaichvili, Avto

    2016-09-01

Sixteen Miocene, Pleistocene, and historic lava flows have been sampled in Lanzarote (Canary Islands) for paleointensity analysis with both the Coe and multispecimen methods. Besides obtaining new data, the main goal of the study was the comparison of paleointensity results determined with two different techniques. Characteristic Remanent Magnetization (ChRM) directions were obtained in 15 flows, and 12 were chosen for paleointensity determination. In Thellier-type experiments, a selection of reliable paleointensity determinations (43 of 78 studied samples) was performed using sets of criteria of different stringency, trying to relate the quality of results to the strictness of the chosen criteria. Uncorrected and fraction and domain-state corrected multispecimen paleointensity results were obtained in all flows. Results with the Coe method on historical flows either agree with the expected values or show moderately lower ones, but multispecimen determinations display a large deviation from the expected result in one case. No relation can be detected between correct or anomalous results and paleointensity determination quality or rock-magnetic properties. However, results on historical flows suggest that agreement between both methods could be a good indicator of correct determinations. Comparison of results obtained with both methods on seven Pleistocene flows yields an excellent agreement in four and disagreements in three cases. Pleistocene determinations were only accepted if either results from both methods agreed or a result was based on a sufficiently large number (n > 4) of individual Thellier-type determinations. In most Pleistocene flows, a VADM around 5 × 10^22 Am^2 was observed, although two flows displayed higher values around 9 × 10^22 Am^2.
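The VADM values quoted above follow from the standard geocentric axial dipole relation between a paleointensity estimate and the site latitude. A hedged sketch of that conversion; the 25.2 µT input is an illustrative field value chosen to land near 5 × 10^22 A·m^2 at Lanzarote's latitude (~29°N), not a result from the paper:

```python
import math

MU0_OVER_4PI = 1e-7   # T*m/A
R_EARTH = 6.371e6     # mean Earth radius, m

def vadm(intensity_t, lat_deg):
    """Virtual axial dipole moment (A*m^2) from a paleointensity estimate
    (tesla) at geographic latitude lat_deg, assuming a geocentric axial
    dipole field B = (mu0*m / 4*pi*R^3) * sqrt(1 + 3*sin(lat)^2)."""
    s = math.sin(math.radians(lat_deg))
    return R_EARTH**3 * intensity_t / (MU0_OVER_4PI * math.sqrt(1.0 + 3.0 * s * s))

m = vadm(25.2e-6, 29.0)   # ~5e22 A*m^2
```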

  4. [Comparisons of manual and automatic refractometry with subjective results].

    PubMed

    Wübbolt, I S; von Alven, S; Hülssner, O; Erb, C

    2006-11-01

Refractometry is very important in everyday clinical practice. The aim of this study was to compare the precision of three objective methods of refractometry with subjective dioptometry (phoropter) and to identify the objective method with the smallest deviation from the subjective refractometry results. The objective methods/instruments used were retinoscopy, the Prism Refractometer PR 60 (Rodenstock), and the Auto Refractometer RM-A 7000 (Topcon). The results of monocular dioptometry (sphere, cylinder, and axis) for each objective method were compared to the results of the subjective method. The examination was carried out on 178 eyes, which were divided into 3 age-related groups: 6 - 12 years (103 eyes), 13 - 18 years (38 eyes) and older than 18 years (37 eyes). All measurements were made in cycloplegia. The smallest standard deviation of the measurement error was found for the Auto Refractometer RM-A 7000; both the PR 60 and retinoscopy had a clearly higher standard deviation. Furthermore, the RM-A 7000 showed a significant bias in the measurement error in three of the nine comparisons, and retinoscopy in four. The Auto Refractometer thus provides the measurements with the smallest deviation from the subjective method, although it has to be taken into account that its measurements of the sphere have an average deviation of +0.2 dpt. Compared with retinoscopy, the examination of children with the RM-A 7000 is difficult. An advantage of the Auto Refractometer is its fast and easy handling, so that measurements can be performed by medical staff.

  5. Outdoor module testing and comparison of photovoltaic technologies

    NASA Astrophysics Data System (ADS)

    Fabick, L. B.; Rifai, R.; Mitchell, K.; Woolston, T.; Canale, J.

    A comparison of outdoor test results for several module technologies is presented. The technologies include thin-film silicon:hydrogen alloys (TFS), TFS modules with semitransparent conductor back contacts, and CuInSe2 module prototypes. A method for calculating open-circuit voltage and fill-factor temperature coefficients is proposed. The method relies on the acquisition of large statistical data samples to average effects due to varying insolation level.
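A temperature coefficient extracted from a large outdoor data sample is, in the simplest reading, the slope of a linear fit of the module parameter against temperature; averaging over many points suppresses insolation-driven scatter. A minimal sketch with synthetic data (the coefficient, noise level, and temperature range are invented for illustration, not the module data from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
temp_c = rng.uniform(20.0, 60.0, 500)              # module temperatures, degC
true_beta = -0.08                                   # V/degC, illustrative
# open-circuit voltage with temperature dependence plus measurement scatter
voc = 21.0 + true_beta * (temp_c - 25.0) + rng.normal(0.0, 0.05, temp_c.size)

# slope of the linear fit = temperature coefficient; intercept = Voc at 25 degC
beta_fit, voc_25 = np.polyfit(temp_c - 25.0, voc, 1)
```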

  6. A Comparison of the Kernel Equating Method with Traditional Equating Methods Using SAT[R] Data

    ERIC Educational Resources Information Center

    Liu, Jinghua; Low, Albert C.

    2008-01-01

    This study applied kernel equating (KE) in two scenarios: equating to a very similar population and equating to a very different population, referred to as a distant population, using SAT[R] data. The KE results were compared to the results obtained from analogous traditional equating methods in both scenarios. The results indicate that KE results…

  7. Towards Extending Forward Kinematic Models on Hyper-Redundant Manipulator to Cooperative Bionic Arms

    NASA Astrophysics Data System (ADS)

    Singh, Inderjeet; Lakhal, Othman; Merzouki, Rochdi

    2017-01-01

Forward kinematics is a stepping stone towards finding an inverse solution and subsequently a dynamic model of a robot. Hence a study and comparison of various Forward Kinematic Models (FKMs) is necessary for robot design. This paper compares three FKMs on the same hyper-redundant Compact Bionic Handling Assistant (CBHA) manipulator under the same conditions, with the aim of informing the modeling of cooperative bionic manipulators. Two of these methods, the Arc Geometry HTM (Homogeneous Transformation Matrix) method and the Dual Quaternion method, are quantitative, while the third is a hybrid method that uses both quantitative and qualitative approaches. The methods are compared theoretically, and experimental results are discussed to add further insight to the comparison. HTM, the most widely used and accepted technique, is taken as the reference, and the trajectory deviations of the other techniques are evaluated with respect to it. The comparison indicates which method yields an accurate kinematic behavior of the CBHA under real-time control.

  8. Vertical distribution of ozone at the terminator on Mars

    NASA Astrophysics Data System (ADS)

    Maattanen, Anni; Lefevre, Franck; Guilbon, Sabrina; Listowski, Constantino; Montmessin, Franck

    2016-10-01

The SPICAM/Mars Express UV solar occultation dataset gives access to the ozone vertical distribution via the ozone absorption in the Hartley band (220-280 nm). We present the retrieved ozone profiles and compare them to the LMD Mars Global Climate Model (LMD-MGCM) results. Due to the photochemical reactivity of ozone, a classical comparison of local density profiles is not appropriate for solar occultations that are acquired at the terminator, and we present here a method often used in the Earth community. The principal comparison is made via the slant profiles (integrated ozone concentration along the line-of-sight), since the spherical symmetry hypothesis made in the onion-peeling vertical inversion method is not valid for photochemically active species (e.g., ozone) around the terminator. For each occultation, we model the ozone vertical and horizontal distribution with high solar zenith angle (or local time) resolution around the terminator and then integrate the model results along the lines-of-sight of the occultation to construct the modeled slant profile. We will also discuss the difference in results between the above comparison method and a comparison using the local density profiles, i.e., the observed ones inverted by using the spherical symmetry hypothesis and the modeled ones extracted from the LMD-MGCM exactly at the terminator. The method and the results will be presented together with the full dataset. SPICAM is funded by the French Space Agency CNES and this work has received funding from the European Union's Horizon 2020 Programme (H2020-Compet-08-2014) under grant agreement UPWARDS-633127.

  9. Comparison of the scanned pages of the contractual documents

    NASA Astrophysics Data System (ADS)

    Andreeva, Elena; Arlazarov, Vladimir V.; Manzhikov, Temudzhin; Slavin, Oleg

    2018-04-01

In this paper, the problem of comparing digitized pages of official documents is stated. The problem arises when comparing two copies of a contract signed at different times by two parties, with a view to finding modifications introduced by one side, and is of practical significance in the banking sector, where contracts are concluded in paper form. A recognition-based comparison method is suggested, which consists in comparing two bags of words obtained by recognizing the master and test pages. The described experiments were conducted using the Tesseract OCR engine and a Siamese neural network. The advantages of the suggested method are the stable operation of the comparison algorithm and high precision; one of its disadvantages is the dependence on the chosen OCR engine.

  10. Research on dynamic characteristics of motor vibration isolation system through mechanical impedance method

    NASA Astrophysics Data System (ADS)

    Zhao, Xingqian; Xu, Wei; Shuai, Changgeng; Hu, Zechao

    2017-12-01

A mechanical impedance model of a coupled motor-shaft-bearing system has been developed to predict its dynamic characteristics and has been partially validated by comparing the computed results with a finite element method (FEM) model, including comparisons of the displacement amplitudes in the x and z directions at the two ends of the flexible coupling and of the normalized vertical reaction forces in the z direction at the bearing pedestals. The results demonstrate that the developed model can precisely predict the dynamic characteristics; the main advantage of the method is that it clearly illustrates the vibration properties of the motor subsystem, which plays an important role in isolation system design.

  11. A survey of the broadband shock associated noise prediction methods

    NASA Technical Reports Server (NTRS)

    Kim, Chan M.; Krejsa, Eugene A.; Khavaran, Abbas

    1992-01-01

    Several different prediction methods to estimate the broadband shock associated noise of a supersonic jet are introduced and compared with experimental data at various test conditions. The nozzle geometries considered for comparison include a convergent and a convergent-divergent nozzle, both axisymmetric. Capabilities and limitations of prediction methods in incorporating the two nozzle geometries, flight effect, and temperature effect are discussed. Predicted noise field shows the best agreement for a convergent nozzle geometry under static conditions. Predicted results for nozzles in flight show larger discrepancies from data and more dependable flight data are required for further comparison. Qualitative effects of jet temperature, as observed in experiment, are reproduced in predicted results.

  12. Comparison of nine brands of membrane filter and the most-probable-number methods for total coliform enumeration in sewage-contaminated drinking water.

    PubMed Central

    Tobin, R S; Lomax, P; Kushner, D J

    1980-01-01

    Nine different brands of membrane filter were compared in the membrane filtration (MF) method, and those with the highest yields were compared against the most-probable-number (MPN) multiple-tube method for total coliform enumeration in simulated sewage-contaminated tap water. The water was chlorinated for 30 min to subject the organisms to stresses similar to those encountered during treatment and distribution of drinking water. Significant differences were observed among membranes in four of the six experiments, with two- to four-times-higher recoveries between the membranes at each extreme of recovery. When results from the membranes with the highest total coliform recovery rate were compared with the MPN results, the MF results were found significantly higher in one experiment and equivalent to the MPN results in the other five experiments. A comparison was made of the species enumerated by these methods; in general the two methods enumerated a similar spectrum of organisms, with some indication that the MF method was subject to greater interference by Aeromonas. PMID:7469407
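The MPN estimate behind the multiple-tube method is a maximum-likelihood density: a tube inoculated with volume v is positive with probability 1 − exp(−λv), and λ is found by solving the resulting score equation. A hedged sketch using plain bisection; the 5-5-5 tube series below is a textbook example, not the paper's data:

```python
import math

def mpn_per_ml(tubes, positives, volumes_ml):
    """Maximum-likelihood MPN density (organisms/mL) for a multiple-tube
    series: tubes[i] tubes each receiving volumes_ml[i] mL of sample, of
    which positives[i] turned positive. Requires at least one negative
    tube (otherwise no finite estimate exists)."""
    def score(lam):
        # d(log L)/d(lam) = 0 rearranged: LHS(lam) - RHS = 0
        lhs = sum(g * v * math.exp(-lam * v) / (1.0 - math.exp(-lam * v))
                  for g, v in zip(positives, volumes_ml) if g > 0)
        rhs = sum((t - g) * v for t, g, v in zip(tubes, positives, volumes_ml))
        return lhs - rhs

    lo, hi = 1e-6, 1e4
    for _ in range(200):                # bisection in log space
        mid = math.sqrt(lo * hi)
        if score(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

# Classic 5-tube series at 10, 1 and 0.1 mL with 5, 3, 1 positive tubes:
# standard MPN tables give an index of about 110 per 100 mL (~1.1/mL).
density = mpn_per_ml([5, 5, 5], [5, 3, 1], [10.0, 1.0, 0.1])
```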

  13. Application of the Refined Integral Method in the mathematical modeling of drug delivery from one-layer torus-shaped devices.

    PubMed

    Helbling, Ignacio M; Ibarra, Juan C D; Luna, Julio A

    2012-02-28

A mathematical model of the controlled release of a drug from one-layer torus-shaped devices is presented. Analytical solutions based on the Refined Integral Method (RIM) are derived. The validity and utility of the model are ascertained by comparison of the simulation results with matrix-type vaginal ring experimental release data reported in the literature. For the comparisons, the pair-wise procedure is used to measure quantitatively the fit of the theoretical predictions to the experimental data. A good agreement between the model prediction and the experimental data is observed. A comparison with a previously reported model is also presented. More accurate results are achieved for small A/C_s ratios. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Density functional theory calculations of 95Mo NMR parameters in solid-state compounds.

    PubMed

    Cuny, Jérôme; Furet, Eric; Gautier, Régis; Le Pollès, Laurent; Pickard, Chris J; d'Espinose de Lacaillerie, Jean-Baptiste

    2009-12-21

The application of periodic density functional theory-based methods to the calculation of 95Mo electric field gradient (EFG) and chemical shift (CS) tensors in solid-state molybdenum compounds is presented. Calculations of EFG tensors are performed using the projector augmented-wave (PAW) method. Comparison of the results with those obtained using the augmented plane wave + local orbitals (APW+lo) method and with available experimental values shows the reliability of the approach for 95Mo EFG tensor calculation. CS tensors are calculated using the recently developed gauge-including projector augmented-wave (GIPAW) method. This work is the first application of the GIPAW method to a 4d transition-metal nucleus. The effects of ultra-soft pseudo-potential parameters, exchange-correlation functionals and structural parameters are precisely examined. Comparison with experimental results allows the validation of this computational formalism.

  15. Objective comparison of particle tracking methods.

    PubMed

    Chenouard, Nicolas; Smal, Ihor; de Chaumont, Fabrice; Maška, Martin; Sbalzarini, Ivo F; Gong, Yuanhao; Cardinale, Janick; Carthel, Craig; Coraluppi, Stefano; Winter, Mark; Cohen, Andrew R; Godinez, William J; Rohr, Karl; Kalaidzidis, Yannis; Liang, Liang; Duncan, James; Shen, Hongying; Xu, Yingke; Magnusson, Klas E G; Jaldén, Joakim; Blau, Helen M; Paul-Gilloteaux, Perrine; Roudot, Philippe; Kervrann, Charles; Waharte, François; Tinevez, Jean-Yves; Shorte, Spencer L; Willemse, Joost; Celler, Katherine; van Wezel, Gilles P; Dan, Han-Wei; Tsai, Yuh-Show; Ortiz de Solórzano, Carlos; Olivo-Marin, Jean-Christophe; Meijering, Erik

    2014-03-01

    Particle tracking is of key importance for quantitative analysis of intracellular dynamic processes from time-lapse microscopy image data. Because manually detecting and following large numbers of individual particles is not feasible, automated computational methods have been developed for these tasks by many groups. Aiming to perform an objective comparison of methods, we gathered the community and organized an open competition in which participating teams applied their own methods independently to a commonly defined data set including diverse scenarios. Performance was assessed using commonly defined measures. Although no single method performed best across all scenarios, the results revealed clear differences between the various approaches, leading to notable practical conclusions for users and developers.

  16. LETTER TO THE EDITOR: Free-response operator characteristic models for visual search

    NASA Astrophysics Data System (ADS)

    Hutchinson, T. P.

    2007-05-01

Computed tomography of diffraction enhanced imaging (DEI-CT) is a novel x-ray phase-contrast computed tomography applied to inspect weakly absorbing low-Z samples. Refraction-angle images, which are extracted from a series of raw DEI images measured at different positions on the rocking curve of the analyser, can be regarded as projections for DEI-CT. Based on them, the distribution of the refractive index decrement in the sample can be reconstructed according to the principles of CT. How to combine extraction methods and reconstruction algorithms to obtain the most accurate reconstructed results is investigated in detail in this paper. Two kinds of comparison, a comparison of different extraction methods and a comparison between 'two-step' algorithms and the Hilbert filtered backprojection (HFBP) algorithm, lead to the conclusion that the HFBP algorithm based on the maximum refraction-angle (MRA) method may be the best combination at present. Though all current extraction methods, including the MRA method, are approximate and cannot handle very large refraction-angle values, the HFBP algorithm based on the MRA method provides quite acceptable estimations of the distribution of the refractive index decrement of the sample. The conclusion is supported by experimental results obtained at the Beijing Synchrotron Radiation Facility.

  17. Two laboratory methods for the calibration of GPS speed meters

    NASA Astrophysics Data System (ADS)

    Bai, Yin; Sun, Qiao; Du, Lei; Yu, Mei; Bai, Jie

    2015-01-01

The set-ups of two calibration systems are presented to investigate calibration methods for GPS speed meters. The GPS speed meter calibrated is a special type of high-accuracy speed meter for vehicles which uses Doppler demodulation of GPS signals to calculate the measured speed of a moving target. Three experiments are performed: simulated calibration, field-test signal replay calibration, and an in-field comparison with an optical speed meter. The experiments are conducted at specific speeds in the range of 40-180 km h^-1 with the same GPS speed meter as the device under calibration. The evaluation of the measurement results validates both methods for calibrating GPS speed meters. The relative deviations between the measurement results of the GPS-based high-accuracy speed meter and those of the optical speed meter are analyzed, and the equivalent uncertainty of the comparison is evaluated. The comparison results justify the use of GPS speed meters as reference equipment if no fewer than seven satellites are available. This study contributes to the widespread use of GPS-based high-accuracy speed meters as legal reference equipment in traffic speed metrology.
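A GPS speed meter's underlying observable is the Doppler shift of the carrier; in the simplest one-dimensional picture the shift scales as v/c. A minimal sketch, assuming the L1 carrier and a purely radial geometry (real receivers combine shifts from several satellites with known geometry, which is why the abstract requires at least seven):

```python
C = 2.99792458e8    # speed of light, m/s
F_L1 = 1575.42e6    # GPS L1 carrier frequency, Hz

def doppler_shift_hz(speed_kmh):
    """One-dimensional Doppler shift of the L1 carrier for a receiver
    moving radially at speed_kmh (illustrative geometry only)."""
    v = speed_kmh / 3.6           # km/h -> m/s
    return F_L1 * v / C

shift_100 = doppler_shift_hz(100.0)   # roughly 146 Hz at 100 km/h
```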

  18. Comparison of Manual Refraction Versus Autorefraction in 60 Diabetic Retinopathy Patients

    PubMed Central

    Shirzadi, Keyvan; Shahraki, Kourosh; Yahaghi, Emad; Makateb, Ali; Khosravifard, Keivan

    2016-01-01

Aim: The purpose of the study was to compare manual refraction with autorefraction in diabetic retinopathy patients. Material and Methods: The study was conducted at the Be’sat Army Hospital from 2013-2015. Differences between two common refractometry methods (manual refractometry and autorefractometry) in the diagnosis and follow-up of retinopathy in patients affected with diabetes were investigated. Results: Our results showed a significant difference in the visual acuity scores of patients between manual and autorefractometry. Despite this, the spherical equivalent scores of the two refractometry methods did not show a statistically significant difference. Conclusion: Although manual refraction is comparable with autorefraction for evaluating spherical equivalent scores in diabetic patients affected with retinopathy, the visual acuity results from these two methods are not comparable. PMID:27703289
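The spherical equivalent compared in this study is the standard scalar summary of a spherocylindrical refraction: sphere plus half the cylinder. A minimal sketch of that convention:

```python
def spherical_equivalent(sphere_d, cylinder_d):
    """Spherical equivalent (diopters) of a spherocylindrical
    prescription: SE = sphere + cylinder / 2."""
    return sphere_d + cylinder_d / 2.0

se = spherical_equivalent(-2.00, -1.00)   # -2.50 D
```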

  19. Generalized fourier analyses of the advection-diffusion equation - Part II: two-dimensional domains

    NASA Astrophysics Data System (ADS)

    Voth, Thomas E.; Martinez, Mario J.; Christon, Mark A.

    2004-07-01

    Part I of this work presents a detailed multi-methods comparison of the spatial errors associated with the one-dimensional finite difference, finite element and finite volume semi-discretizations of the scalar advection-diffusion equation. In Part II we extend the analysis to two-dimensional domains and also consider the effects of wave propagation direction and grid aspect ratio on the phase speed, and the discrete and artificial diffusivities. The observed dependence of dispersive and diffusive behaviour on propagation direction makes comparison of methods more difficult relative to the one-dimensional results. For this reason, integrated (over propagation direction and wave number) error and anisotropy metrics are introduced to facilitate comparison among the various methods. With respect to these metrics, the consistent mass Galerkin and consistent mass control-volume finite element methods, and their streamline upwind derivatives, exhibit comparable accuracy, and generally out-perform their lumped mass counterparts and finite-difference based schemes. While this work can only be considered a first step in a comprehensive multi-methods analysis and comparison, it serves to identify some of the relative strengths and weaknesses of multiple numerical methods in a common mathematical framework. Published in 2004 by John Wiley & Sons, Ltd.
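The phase-speed errors analyzed in this kind of Fourier (von Neumann) study come from a scheme's modified wavenumber. For the second-order centered difference on a uniform grid, k*h = sin(kh), so the discrete phase speed for pure advection is c·sin(kh)/(kh). A minimal one-dimensional sketch; the paper's analysis covers 2D domains and several discretizations, and this reproduces only the classic centered-difference case:

```python
import numpy as np

def phase_speed_ratio(kh):
    """Discrete-to-exact phase speed ratio for pure advection with a
    second-order centered difference: c*/c = sin(kh)/(kh)."""
    kh = np.asarray(kh, dtype=float)
    return np.sinc(kh / np.pi)  # numpy sinc(x) = sin(pi*x)/(pi*x), so this is sin(kh)/kh

well_resolved = phase_speed_ratio(0.01)        # ~1: long waves advect correctly
four_per_wave = phase_speed_ratio(np.pi / 2)   # ~0.64: 4-points-per-wavelength modes lag
```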

  20. Sample integrity evaluation and EPA method 325B interlaboratory comparison for select volatile organic compounds collected diffusively on Carbopack X sorbent tubes

    NASA Astrophysics Data System (ADS)

    Oliver, Karen D.; Cousett, Tamira A.; Whitaker, Donald A.; Smith, Luther A.; Mukerjee, Shaibal; Stallings, Casson; Thoma, Eben D.; Alston, Lillian; Colon, Maribel; Wu, Tai; Henkle, Stacy

    2017-08-01

    A sample integrity evaluation and an interlaboratory comparison were conducted in application of U.S. Environmental Protection Agency (EPA) Methods 325A and 325B for diffusively monitoring benzene and other selected volatile organic compounds (VOCs) using Carbopack X sorbent tubes. To evaluate sample integrity, VOC samples were refrigerated for up to 240 days and analyzed using thermal desorption/gas chromatography-mass spectrometry at the EPA Office of Research and Development laboratory in Research Triangle Park, NC, USA. For the interlaboratory comparison, three commercial analytical laboratories were asked to follow Method 325B when analyzing samples of VOCs that were collected in field and laboratory settings for EPA studies. Overall results indicate that the selected VOCs collected diffusively on sorbent tubes generally were stable for 6 months or longer when samples were refrigerated. This suggests the specified maximum 30-day storage time of VOCs collected diffusively on Carbopack X passive samplers and analyzed using Method 325B might be able to be relaxed. Interlaboratory comparison results were in agreement for the challenge samples collected diffusively in an exposure chamber in the laboratory, with most measurements within ±25% of the theoretical concentration. Statistically significant differences among laboratories for ambient challenge samples were small, less than 1 part per billion by volume (ppbv). Results from all laboratories exhibited good precision and generally agreed well with each other.

  1. Comparison of methods for localizing the source position of deauthentication attacks on WAP 802.11n using Chanalyzer and Wi-Spy 2.4x

    NASA Astrophysics Data System (ADS)

    Bahaweres, R. B.; Mokoginta, S.; Alaydrus, M.

    2017-01-01

This paper describes a comparison of three methods used to locate the position of the source of deauthentication attacks on Wi-Fi using Chanalyzer and a Wi-Spy 2.4x adapter. The three methods are wardriving, absorption, and trilateration. The position of constant deauthentication attacks is more easily analyzed than that of random attacks. Signal propagation provides a relation between signal strength and distance, which makes the position of attackers easier to locate. The results are shown in chart patterns generated from the Received Signal Strength Indicator (RSSI). It is shown that all three methods can be used to localize the position of attackers and can be recommended for use in organizations deploying Wi-Fi.
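Trilateration as used here combines a path-loss model (RSSI to distance) with the intersection of range circles around known access-point positions. A hedged sketch, assuming an ideal log-distance model; the anchor positions and path-loss parameters are invented for illustration:

```python
import math
import numpy as np

def rssi_to_distance(rssi_dbm, rssi0_dbm=-40.0, n=2.5):
    """Invert the log-distance path-loss model
    RSSI = RSSI0 - 10*n*log10(d/d0), with d0 = 1 m; parameters illustrative."""
    return 10.0 ** ((rssi0_dbm - rssi_dbm) / (10.0 * n))

def trilaterate(anchors, dists):
    """Least-squares position from >= 3 anchor points and range estimates,
    by linearizing the circle equations against the first anchor."""
    (x1, y1), d1 = anchors[0], dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append([2.0 * (xi - x1), 2.0 * (yi - y1)])
        b.append(d1**2 - di**2 + xi**2 + yi**2 - x1**2 - y1**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
# synthesize ideal RSSI readings at each anchor, then invert and solve
rssi = [-40.0 - 25.0 * math.log10(math.dist(a, true_pos)) for a in anchors]
est = trilaterate(anchors, [rssi_to_distance(r) for r in rssi])
```

With noisy real-world RSSI the same least-squares solve is used with more than three anchors to average out the ranging error.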

  2. Role and Evaluation of Interlaboratory Comparison Results in Laboratory Accreditation

    NASA Astrophysics Data System (ADS)

    Bode, P.

    2008-08-01

Participation in interlaboratory comparisons provides laboratories an opportunity for independent assessment of their analytical performance, both in an absolute way and in comparison with results obtained by other techniques. However, such comparisons are hindered by differences in the way laboratories participate, e.g. at best measurement capability or under routine conditions. Neutron activation analysis laboratories, which determine total mass fractions, often see themselves classified as `outliers', since the majority of other participants employ techniques with incomplete digestion methods. These considerations are discussed in relation to the way results from interlaboratory comparisons are evaluated by accreditation bodies following the requirements of Clause 5.9.1 of ISO/IEC 17025:2005. The discussion and conclusions derive largely from experiences in the author's own laboratory.
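Evaluations of interlaboratory results commonly reduce to a z-score against the assigned value and the standard deviation for proficiency assessment, with |z| <= 2 conventionally rated satisfactory and |z| >= 3 unsatisfactory. A minimal sketch of that common convention (not the rules of any specific scheme or accreditation body):

```python
def z_score(result, assigned_value, sigma_pt):
    """Proficiency-testing z-score: (x - X) / sigma_pt."""
    return (result - assigned_value) / sigma_pt

def rating(z):
    """Conventional interpretation bands for |z|."""
    a = abs(z)
    if a <= 2.0:
        return "satisfactory"
    if a < 3.0:
        return "questionable"
    return "unsatisfactory"

z = z_score(10.5, 10.0, 0.4)   # z = 1.25
```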

  3. Linear least-squares method for global luminescent oil film skin friction field analysis

    NASA Astrophysics Data System (ADS)

    Lee, Taekjin; Nonomura, Taku; Asai, Keisuke; Liu, Tianshu

    2018-06-01

    A data analysis method based on the linear least-squares (LLS) method was developed for the extraction of high-resolution skin friction fields from global luminescent oil film (GLOF) visualization images of a surface in an aerodynamic flow. In this method, the oil film thickness distribution and its spatiotemporal development are measured by detecting the luminescence intensity of the thin oil film. From the resulting set of GLOF images, the thin oil film equation is solved to obtain an ensemble-averaged (steady) skin friction field as an inverse problem. In this paper, the formulation of a discrete linear system of equations for the LLS method is described, and an error analysis is given to identify the main error sources and the relevant parameters. Simulations were conducted to evaluate the accuracy of the LLS method and the effects of the image patterns, image noise, and sample numbers on the results in comparison with the previous snapshot-solution-averaging (SSA) method. An experimental case is shown to enable the comparison of the results obtained using conventional oil flow visualization and those obtained using both the LLS and SSA methods. The overall results show that the LLS method is more reliable than the SSA method and the LLS method can yield a more detailed skin friction topology in an objective way.
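The core idea of the LLS approach, pooling all snapshots into one overdetermined linear system rather than averaging per-snapshot solutions, can be shown on a toy model. A hedged sketch in which a scalar parameter and synthetic noisy observations stand in for the skin-friction field and the GLOF image sequence:

```python
import numpy as np

rng = np.random.default_rng(1)
tau_true = 2.0                                     # stand-in "skin friction" parameter
x = rng.uniform(0.5, 1.5, 2000)                    # per-snapshot regressors
y = tau_true * x + rng.normal(0.0, 0.2, x.size)    # noisy observations

# LLS: solve the single pooled system x * tau ~= y in the least-squares sense
tau_lls, *_ = np.linalg.lstsq(x[:, None], y, rcond=None)
tau_lls = float(tau_lls[0])

# SSA analogue: solve each snapshot separately, then average the solutions
tau_ssa = float(np.mean(y / x))
```

In the real method the unknown is a whole skin-friction field and each snapshot contributes many equations from the thin oil film equation, but the pooled-versus-averaged distinction is the same.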

  4. On the comparison of perturbation-iteration algorithm and residual power series method to solve fractional Zakharov-Kuznetsov equation

    NASA Astrophysics Data System (ADS)

    Şenol, Mehmet; Alquran, Marwan; Kasmaei, Hamed Daei

    2018-06-01

    In this paper, we present an analytic-approximate solution of the time-fractional Zakharov-Kuznetsov equation. This model describes the behavior of weakly nonlinear ion acoustic waves in a plasma of cold ions and hot isothermal electrons in the presence of a uniform magnetic field. Fractional derivatives are defined in the Caputo sense. The perturbation-iteration algorithm (PIA) and the residual power series method (RPSM) are successfully applied to solve this equation, and a convergence analysis is presented for both methods. Numerical results are given and compared with the exact solutions. The comparison reveals that both methods are competitive, powerful, reliable, simple to use, and readily applicable to a wide range of fractional partial differential equations.
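    Both methods operate on derivatives taken in the Caputo sense; for reference, the standard Caputo definition of order α (not reproduced in the abstract) is

```latex
D^{\alpha} f(t) \;=\; \frac{1}{\Gamma(n-\alpha)} \int_{0}^{t} \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}} \, d\tau,
\qquad n-1 < \alpha \le n ,
```

    which reduces to the ordinary n-th derivative as α → n and, unlike the Riemann-Liouville form, gives zero for constants.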

  5. The Use and Abuse of Limits of Detection in Environmental Analytical Chemistry

    PubMed Central

    Brown, Richard J. C.

    2008-01-01

    The limit of detection (LoD) serves as an important method performance measure that is useful for the comparison of measurement techniques and the assessment of likely signal to noise performance, especially in environmental analytical chemistry. However, the LoD is only truly related to the precision characteristics of the analytical instrument employed for the analysis and the content of analyte in the blank sample. This article discusses how other criteria, such as sampling volume, can serve to distort the quoted LoD artificially and make comparison between various analytical methods inequitable. In order to compare LoDs between methods properly, it is necessary to state clearly all of the input parameters relating to the measurements that have been used in the calculation of the LoD. Additionally, the article discusses that the use of LoDs in contexts other than the comparison of the attributes of analytical methods, in particular when reporting analytical results, may be confusing, less informative than quoting the actual result with an accompanying statement of uncertainty, and may act to bias descriptive statistics. PMID:18690384
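    A common operational definition, consistent with the abstract's point that the LoD depends only on instrument precision and the blank, takes the LoD as three blank standard deviations divided by the calibration slope. A sketch under that assumption, with entirely made-up numbers:

```python
import statistics

# Hypothetical replicate blank signals (instrument units)
blank_signals = [0.101, 0.098, 0.103, 0.097, 0.102, 0.099, 0.100]
slope = 0.05  # hypothetical calibration sensitivity, signal units per ng/mL

s_blank = statistics.stdev(blank_signals)

# LoD in concentration units: 3 blank standard deviations over the sensitivity
lod = 3 * s_blank / slope
print(round(lod, 3))  # ng/mL
```

    Note how a change in an input parameter such as effective sampling volume would rescale the slope and hence the quoted LoD, which is exactly the distortion the article warns about.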

  6. A comparison of methods for evaluating structure during ship collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ammerman, D.J.; Daidola, J.C.

    1996-10-01

    A comparison is provided of the results of various methods for evaluating structure during a ship-to-ship collision. The baseline vessel utilized in the analyses is a 67.4-meter-long displacement hull struck by an identical vessel traveling at speeds ranging from 10 to 30 knots. The structural response of the struck vessel and the motion of both the struck and striking vessels are assessed by finite element analysis. These results are then compared to predictions using the "Tanker Structural Analysis for Minor Collisions" (TSAMC) method, the Minorsky method, and the Haywood collision process, and to full-scale tests. Consideration is given to the nature of structural deformation, absorbed energy, penetration, rigid body motion, and the virtual mass affecting the hydrodynamic response. Insights are provided into the calibration of the finite element model, which was achievable through the more empirical analyses, and into the extent to which finite element analysis is able to simulate the entire collision event. 7 refs., 8 figs., 4 tabs.

  7. Comparison of three nondestructive and contactless techniques for investigations of recombination parameters on an example of silicon samples

    NASA Astrophysics Data System (ADS)

    Chrobak, Ł.; Maliński, M.

    2018-06-01

    This paper presents a comparison of three nondestructive and contactless techniques used for determining recombination parameters of silicon samples: the photoacoustic method, the modulated free-carrier absorption method, and the photothermal radiometry method. The experimental set-ups used for measuring the recombination parameters with these methods, as well as the theoretical models used for interpreting the experimental data, are presented and described. The experimental results and their respective fits obtained with these nondestructive techniques are shown and discussed. The values of the recombination parameters obtained with the three methods are also presented and compared, and the main advantages and disadvantages of each method are discussed.

  8. Second-order kinetic model for the sorption of cadmium onto tree fern: a comparison of linear and non-linear methods.

    PubMed

    Ho, Yuh-Shan

    2006-01-01

    A comparison was made of the linear least-squares method and a trial-and-error non-linear method of the widely used pseudo-second-order kinetic model for the sorption of cadmium onto ground-up tree fern. Four pseudo-second-order kinetic linear equations are discussed. Kinetic parameters obtained from the four kinetic linear equations using the linear method differed but they were the same when using the non-linear method. A type 1 pseudo-second-order linear kinetic model has the highest coefficient of determination. Results show that the non-linear method may be a better way to obtain the desired parameters.
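    The pseudo-second-order model is q_t = k·q_e²·t / (1 + k·q_e·t), and the type 1 linearization referred to above rearranges this to t/q_t = 1/(k·q_e²) + t/q_e, so a straight-line fit of t/q_t against t yields both parameters. A sketch with hypothetical, noise-free synthetic data (with real noisy data the four linearizations give differing parameters, which is the paper's point):

```python
import numpy as np

# Synthetic sorption data from the pseudo-second-order model with
# hypothetical "true" parameters qe (mg/g) and k (g/mg/min)
qe_true, k_true = 5.0, 0.02
t = np.array([5.0, 10, 20, 40, 60, 90, 120, 180])  # time, min
q = k_true * qe_true**2 * t / (1 + k_true * qe_true * t)

# Type 1 linearization: t/q_t = 1/(k*qe^2) + t/qe
slope, intercept = np.polyfit(t, t / q, 1)
qe_fit = 1.0 / slope
k_fit = 1.0 / (intercept * qe_fit**2)
print(round(qe_fit, 3), round(k_fit, 4))
```

    A non-linear fit would instead minimize the residuals of q_t directly, weighting all data points equally rather than in the transformed coordinates, which is why the two approaches can disagree on noisy data.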

  9. Prediction of the thermal environment and thermal response of simple panels exposed to radiant heat

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.; Ash, Robert L.

    1989-01-01

    A method of predicting the radiant heat flux distribution produced by a bank of tubular quartz heaters was applied to a radiant system consisting of a single unreflected lamp irradiating a flat metallic incident surface. In this manner, the method was experimentally verified for various radiant system parameter settings and used as a source of input for a finite element thermal analysis. Two finite element thermal analyses were applied to a thermal system consisting of a thin metallic panel exposed to radiant surface heating. A two-dimensional steady-state finite element thermal analysis algorithm, based on Galerkin's Method of Weighted Residuals (GFE), was formulated specifically for this problem and was used in comparison to the thermal analyzers of the Engineering Analysis Language (EAL). Both analyses allow conduction, convection, and radiation boundary conditions. Differences in the respective finite element formulation are discussed in terms of their accuracy and resulting comparison discrepancies. The thermal analyses are shown to perform well for the comparisons presented here with some important precautions about the various boundary condition models. A description of the experiment, corresponding analytical modeling, and resulting comparisons are presented.

  10. A Comparison of Ultrasound Tomography Methods in Circular Geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leach, R R; Azevedo, S G; Berryman, J G

    2002-01-24

    Extremely high quality data were acquired using an experimental ultrasound scanner developed at Lawrence Livermore National Laboratory, using a 2D ring geometry with up to 720 transmitter/receiver transducer positions. This unique geometry allows reflection- and transmission-mode imaging and quantification of a 3D volume using 2D slice data. Standard image reconstruction methods were applied to the data, including straight-ray filtered back projection, reflection tomography, and diffraction tomography. Newer approaches were also tested, such as full wave, the full wave adjoint method, bent-ray filtered back projection, and full-aperture tomography. A variety of data sets were collected, including a formalin-fixed human breast tissue sample, a commercial complex breast ultrasound phantom, and cylindrical objects with and without inclusions. The resulting reconstruction quality of the images ranges from poor to excellent. The method and results of this study are described, including like-data reconstructions produced by different algorithms with side-by-side image comparisons. Comparisons to medical B-scan and x-ray CT scan images are also shown. Reconstruction methods are also discussed with respect to image quality (resolution, noise, and quantitative accuracy) and computational efficiency.

  11. Comparison of direct and indirect methods of estimating health state utilities for resource allocation: review and empirical analysis.

    PubMed

    Arnold, David; Girling, Alan; Stevens, Andrew; Lilford, Richard

    2009-07-22

    Utilities (values representing preferences) for healthcare priority setting are typically obtained indirectly by asking patients to fill in a quality of life questionnaire and then converting the results to a utility using population values. We compared such utilities with those obtained directly from patients or the public. Review of studies providing both a direct and indirect utility estimate. Papers reporting comparisons of utilities obtained directly (standard gamble or time tradeoff) or indirectly (European quality of life 5D [EQ-5D], short form 6D [SF-6D], or health utilities index [HUI]) from the same patient. PubMed and Tufts database of utilities. Sign test for paired comparisons between direct and indirect utilities; least squares regression to describe average relations between the different methods. Mean utility scores (or median if means unavailable) for each method, and differences in mean (median) scores between direct and indirect methods. We found 32 studies yielding 83 instances where direct and indirect methods could be compared for health states experienced by adults. The direct methods used were standard gamble in 57 cases and time tradeoff in 60 (34 used both); the indirect methods were EQ-5D (67 cases), SF-6D (13), HUI-2 (5), and HUI-3 (37). Mean utility values were 0.81 (standard gamble) and 0.77 (time tradeoff) for the direct methods; for the indirect methods they were 0.59 (EQ-5D), 0.63 (SF-6D), 0.75 (HUI-2), and 0.68 (HUI-3). Direct methods of estimating utilities tend to result in higher health ratings than the more widely used indirect methods, and the difference can be substantial. Use of indirect methods could have important implications for decisions about resource allocation: for example, non-lifesaving treatments are relatively more favoured in comparison with lifesaving interventions than when using direct methods.
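    The paired sign test used in this review simply counts how often the direct utility exceeds the indirect one and asks whether that count is plausible under a fair coin. A sketch with hypothetical paired utilities (the values below are illustrative, not from the study):

```python
from math import comb

# Hypothetical paired utilities for the same health states
direct = [0.81, 0.76, 0.90, 0.65, 0.72, 0.88, 0.70, 0.60]
indirect = [0.59, 0.63, 0.75, 0.68, 0.55, 0.70, 0.66, 0.62]

diffs = [d - i for d, i in zip(direct, indirect) if d != i]  # drop ties
n = len(diffs)
k = sum(1 for x in diffs if x > 0)  # pairs where the direct utility is higher

# Two-sided exact sign test: probability of a split at least this
# lopsided under Binomial(n, 0.5)
tail = sum(comb(n, j) for j in range(min(k, n - k) + 1)) / 2 ** n
p_value = min(1.0, 2 * tail)
print(k, n, round(p_value, 4))
```

    With the 83 paired instances of the actual review, a consistent excess of direct over indirect values yields a far smaller p-value than this toy sample can.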

  12. Cross-continental comparison of national food consumption survey methods--a narrative review

    USDA-ARS?s Scientific Manuscript database

    Food consumption surveys are performed in many countries. Comparison of results from those surveys across nations is difficult because of differences in methodological approaches. While consensus about the preferred methodology associated with national food consumption surveys is increasing, no in...

  13. Parallelization of the FLAPW method and comparison with the PPW method

    NASA Astrophysics Data System (ADS)

    Canning, Andrew; Mannstadt, Wolfgang; Freeman, Arthur

    2000-03-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. In the past the FLAPW method has been limited to systems of about a hundred atoms due to the lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell running on up to 512 processors on a Cray T3E parallel supercomputer. Some results will also be presented on a comparison of the plane-wave pseudopotential method and the FLAPW method on large systems.

  14. Evaluation of advanced regenerator systems

    NASA Technical Reports Server (NTRS)

    Cook, J. A.; Fucinari, C. A.; Lingscheit, J. N.; Rahnke, C. J.

    1978-01-01

    The major considerations that will affect the selection of a ceramic regenerative heat exchanger for an improved 100 HP automotive gas turbine engine are discussed. The regenerator considered for this application is about 36 cm in diameter. Regenerator comparisons are made on the basis of material, method of fabrication, cost, and performance. A regenerator inlet temperature of 1000 C is assumed for the performance comparisons, and laboratory test results are discussed for material comparisons at 1100 and 1200 C. Engine test results using the Ford 707 industrial gas turbine engine are also discussed.

  15. Comparison of viscous-shock-layer solutions by time-asymptotic and steady-state methods. [flow distribution around a Jupiter entry probe

    NASA Technical Reports Server (NTRS)

    Gupta, R. N.; Moss, J. N.; Simmonds, A. L.

    1982-01-01

    Two flow-field codes employing time- and space-marching numerical techniques were evaluated. Both methods were used to analyze the flow field around a massively blown Jupiter entry probe under perfect-gas conditions. To obtain a direct point-by-point comparison, the computations were made using identical grids and turbulence models. For the same degree of accuracy, the space-marching scheme takes much less time than the time-marching method, and it would appear to provide accurate results for problems with nonequilibrium chemistry, free from the effect of local differences in time on the final solution that is inherent in time-marching methods. With the time-marching method, however, solutions are obtainable for realistic entry probe shapes with massive or uniform surface blowing rates, whereas with the space-marching technique it is difficult to obtain converged solutions for such flow conditions. The choice of numerical method is, therefore, problem dependent. Both methods give equally good results for the cases where results are compared with experimental data.

  16. Comparison between amperometric and true potentiometric end-point detection in the determination of water by the Karl Fischer method.

    PubMed

    Cedergren, A

    1974-06-01

    A rapid and sensitive method using true potentiometric end-point detection has been developed and compared with the conventional amperometric method for Karl Fischer determination of water. The effect of the sulphur dioxide concentration on the shape of the titration curve is shown. By using kinetic data it was possible to calculate the course of titrations and make comparisons with those found experimentally. The results prove that the main reaction is the slow step, both in the amperometric and the potentiometric method. Results obtained in the standardization of the Karl Fischer reagent showed that the potentiometric method, including titration to a preselected potential, gave a standard deviation of 0.001(1) mg of water per ml, the amperometric method using extrapolation 0.002(4) mg of water per ml and the amperometric titration to a pre-selected diffusion current 0.004(7) mg of water per ml. Theories and results dealing with dilution effects are presented. The time of analysis was 1-1.5 min for the potentiometric and 4-5 min for the amperometric method using extrapolation.

  17. Experimental comparison between performance of the PM and LPM methods in computed radiography

    NASA Astrophysics Data System (ADS)

    Kermani, Aboutaleb; Feghhi, Seyed Amir Hossein; Rokrok, Behrouz

    2018-07-01

    Scatter degrades image quality and reduces the information available for quantitative measurement when projections are created with ionizing radiation, so a variety of methods have been applied to reduce scatter and correct its undesirable effects. As newer approaches, the ordinary and localized primary modulation methods have already been used individually, in experiments and simulations, in medical and industrial computed tomography, respectively. The aim of this study is to evaluate the capabilities and limitations of these methods in comparison with each other. To this end, ordinary primary modulation was implemented in computed radiography for the first time, and the potential of both methods was assessed for thickness measurement as well as for determination of the scatter-to-primary signal ratio. The comparison results, based on experimental outputs obtained using aluminum specimens and continuous X-ray spectra, favor the localized primary modulation method because of its improved accuracy and higher performance, especially at the edges.

  18. Indirect Comparisons: A Review of Reporting and Methodological Quality

    PubMed Central

    Donegan, Sarah; Williamson, Paula; Gamble, Carrol; Tudur-Smith, Catrin

    2010-01-01

    Background The indirect comparison of two interventions can be valuable in many situations. However, the quality of an indirect comparison will depend on several factors including the chosen methodology and validity of underlying assumptions. Published indirect comparisons are increasingly common in the medical literature, but as yet, there are no published recommendations of how they should be reported. Our aim is to systematically review the quality of published indirect comparisons to add to existing empirical data suggesting that improvements can be made when reporting and applying indirect comparisons. Methodology/Findings Reviews applying statistical methods to indirectly compare the clinical effectiveness of two interventions using randomised controlled trials were eligible. We searched (1966–2008) Database of Abstracts and Reviews of Effects, The Cochrane library, and Medline. Full review publications were assessed for eligibility. Specific criteria to assess quality were developed and applied. Forty-three reviews were included. Adequate methodology was used to calculate the indirect comparison in 41 reviews. Nineteen reviews assessed the similarity assumption using sensitivity analysis, subgroup analysis, or meta-regression. Eleven reviews compared trial-level characteristics. Twenty-four reviews assessed statistical homogeneity. Twelve reviews investigated causes of heterogeneity. Seventeen reviews included direct and indirect evidence for the same comparison; six reviews assessed consistency. One review combined both evidence types. Twenty-five reviews urged caution in interpretation of results, and 24 reviews indicated when results were from indirect evidence by stating this term with the result. Conclusions This review shows that the underlying assumptions are not routinely explored or reported when undertaking indirect comparisons.
We recommend, therefore, that the quality of indirect comparisons should be improved, in particular, by assessing assumptions and reporting the assessment methods applied. We propose that the quality criteria applied in this article may provide a basis to help review authors carry out indirect comparisons and to aid appropriate interpretation. PMID:21085712
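    A standard "adequate methodology" for such comparisons is Bucher's adjusted indirect comparison: given effects of A vs. C and B vs. C from separate randomised trials, the indirect A vs. B effect is their difference and the variances add. A sketch with hypothetical log odds ratio estimates:

```python
from math import sqrt

# Hypothetical treatment effects (log odds ratios) against a common comparator C
d_AC, se_AC = -0.40, 0.15   # A vs C
d_BC, se_BC = -0.10, 0.20   # B vs C

# Bucher adjusted indirect comparison of A vs B via C
d_AB = d_AC - d_BC
se_AB = sqrt(se_AC**2 + se_BC**2)
ci = (d_AB - 1.96 * se_AB, d_AB + 1.96 * se_AB)
print(round(d_AB, 2), round(se_AB, 2), [round(x, 2) for x in ci])
```

    The widened standard error illustrates why indirect evidence is weaker than a head-to-head trial, and the similarity assumption discussed above (that the A vs. C and B vs. C trial populations are comparable) is what licenses the subtraction in the first place.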

  19. Treatment of fingertip amputation: comparison of results between microsurgical replantation and pocket principle.

    PubMed

    Yabe, Tetsuji; Tsuda, Tomoyuki; Hirose, Shunsuke; Ozawa, Toshiyuki

    2012-05-01

    In this article, a comparison between microsurgical replantation (replantation) and the Brent method and its modification (pocket principle) in the treatment of fingertip amputation is reported. As a classification of amputation level, we used Ishikawa's subzone classification of fingertip amputation; only amputations in subzone 2 were included in this study. Between the two groups, there was no statistical difference in survival rate, postoperative atrophy, or postoperative range of motion. In terms of sensory recovery, some records were lost and an exact comparison was difficult, but there was no obvious difference between the cases. In our comparison of microsurgical replantation versus the pocket principle in the treatment of subzone 2 fingertip amputation, there was no difference in postoperative results. Each method has pros and cons, and the surgeon should choose which technique to use based on his or her understanding of the characteristics of both methods.

  20. Boar taint detection: A comparison of three sensory protocols.

    PubMed

    Trautmann, Johanna; Meier-Dinkel, Lisa; Gertheiss, Jan; Mörlein, Daniel

    2016-01-01

    While recent studies assign an important role to human sensory methods for daily routine control of so-called boar taint, the evaluation of different heating methods is still incomplete. This study investigated three common heating methods (microwave (MW), hot-water (HW), and hot-iron (HI)) for the evaluation of boar fat. The comparison was carried out on 72 samples with a 10-person sensory panel. The heating method significantly affected the probability of a deviant rating. Compared to an assumed 'gold standard' (chemical analysis), performance was best for HI when both sensitivity and specificity were considered. The results show the superiority of the panel result over individual assessors. However, the consistency of the individual sensory ratings was not significantly different between MW, HW, and HI. The three protocols showed only fair to moderate agreement. Concluding from the present results, the hot-iron method appears to be advantageous for boar taint evaluation compared to microwave and hot-water heating.

  1. Comparison of 13 equations for determining evapotranspiration from a prairie wetland, Cottonwood Lake Area, North Dakota, USA

    USGS Publications Warehouse

    Rosenberry, Donald O.; Stannard, David L.; Winter, Thomas C.; Martinez, Margo L.

    2004-01-01

    Evapotranspiration determined using the energy-budget method at a semi-permanent prairie-pothole wetland in east-central North Dakota, USA was compared with 12 other commonly used methods. The Priestley-Taylor and deBruin-Keijman methods compared best with the energy-budget values; mean differences were less than 0.1 mm d⁻¹, and standard deviations were less than 0.3 mm d⁻¹. Both methods require measurement of air temperature, net radiation, and heat storage in the wetland water. The Penman, Jensen-Haise, and Brutsaert-Stricker methods provided the next-best values for evapotranspiration relative to the energy-budget method. The mass-transfer, deBruin, and Stephens-Stewart methods provided the worst comparisons; the mass-transfer and deBruin comparisons with energy-budget values indicated a large standard deviation, and the deBruin and Stephens-Stewart comparisons indicated a large bias. The Jensen-Haise method proved to be cost effective, providing relatively accurate comparisons with the energy-budget method (mean difference = 0.44 mm d⁻¹, standard deviation = 0.42 mm d⁻¹) and requiring only measurements of air temperature and solar radiation. The Mather (Thornthwaite) method is the simplest, requiring only measurement of air temperature, and it provided values that compared relatively well with energy-budget values (mean difference = 0.47 mm d⁻¹, standard deviation = 0.56 mm d⁻¹). Modifications were made to several of the methods to make them more suitable for use in prairie wetlands. The modified Makkink, Jensen-Haise, and Stephens-Stewart methods all provided results that were nearly as close to energy-budget values as were the Priestley-Taylor and deBruin-Keijman methods, and all three of these modified methods only require measurements of air temperature and solar radiation. The modified Hamon method provided values that were within 20 percent of energy-budget values during 95 percent of the comparison periods, and it only requires measurement of air temperature. The mass-transfer coefficient, associated with the commonly used mass-transfer method, varied seasonally, with the largest values occurring during summer.
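    The best-performing Priestley-Taylor method estimates latent heat flux as λE = α [Δ/(Δ + γ)] (Rn − G), with α ≈ 1.26. A sketch with hypothetical inputs, computing the slope of the saturation vapour-pressure curve from the standard FAO-56 expression (the input values below are illustrative, not the study's data):

```python
from math import exp

def priestley_taylor_et(t_air_c, rn_minus_g_mj, alpha=1.26, gamma=0.066):
    """Daily evapotranspiration (mm/day) by the Priestley-Taylor method.

    t_air_c       : mean air temperature (deg C)
    rn_minus_g_mj : net radiation minus heat storage change (MJ m-2 day-1)
    gamma         : psychrometric constant (kPa/degC), typical near sea level
    """
    # Slope of the saturation vapour-pressure curve (kPa/degC), FAO-56 form
    es = 0.6108 * exp(17.27 * t_air_c / (t_air_c + 237.3))
    delta = 4098.0 * es / (t_air_c + 237.3) ** 2
    latent_heat = 2.45  # MJ per kg of water evaporated
    return alpha * delta / (delta + gamma) * rn_minus_g_mj / latent_heat

# Hypothetical prairie-wetland summer day: 20 degC, 10 MJ m-2 d-1 available energy
print(round(priestley_taylor_et(20.0, 10.0), 2))
```

    The method's appeal, noted above, is that only air temperature, net radiation, and heat storage need be measured; the heat-storage term is why it performs well over open water, where storage changes are large.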

  2. Comparison of the various methods for the direct calculation of the transmission functions of the 15-micron CO2 band with experimental data

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Various methods for calculating the transmission functions of the 15 micron CO2 band are described. The results of these methods are compared with laboratory measurements. It is found that program P4 provides the best agreement with experimental results on the average.

  3. Comparison of Nonoverlap Methods for Identifying Treatment Effect in Single-Subject Experimental Research

    ERIC Educational Resources Information Center

    Rakap, Salih; Snyder, Patricia; Pasia, Cathleen

    2014-01-01

    Debate is occurring about which result interpretation aides focused on examining the experimental effect should be used in single-subject experimental research. In this study, we examined seven nonoverlap methods and compared results using each method to judgments of two visual analysts. The data sources for the present study were 36 studies…

  4. Query3d: a new method for high-throughput analysis of functional residues in protein structures

    PubMed Central

    Ausiello, Gabriele; Via, Allegra; Helmer-Citterich, Manuela

    2005-01-01

    Background The identification of local similarities between two protein structures can provide clues of a common function. Many different methods exist for searching for similar subsets of residues in proteins of known structure. However, the lack of functional and structural information on single residues, together with the low level of integration of this information in comparison methods, is a limitation that prevents these methods from being fully exploited in high-throughput analyses. Results Here we describe Query3d, a program that is both a structural DBMS (Database Management System) and a local comparison method. The method conserves a copy of all the residues of the Protein Data Bank annotated with a variety of functional and structural information. New annotations can be easily added from a variety of methods and known databases. The algorithm makes it possible to create complex queries based on the residues' function and then to compare only subsets of the selected residues. Functional information is also essential to speed up the comparison and the analysis of the results. Conclusion With Query3d, users can easily obtain statistics on how many and which residues share certain properties in all proteins of known structure. At the same time, the method also finds their structural neighbours in the whole PDB. Programs and data can be accessed through the PdbFun web interface. PMID:16351754

  5. Efficient Multicriteria Protein Structure Comparison on Modern Processor Architectures

    PubMed Central

    Manolakos, Elias S.

    2015-01-01

    Fast increasing computational demand for all-to-all protein structures comparison (PSC) is a result of three confounding factors: rapidly expanding structural proteomics databases, high computational complexity of pairwise protein comparison algorithms, and the trend in the domain towards using multiple criteria for protein structures comparison (MCPSC) and combining results. We have developed a software framework that exploits many-core and multicore CPUs to implement efficient parallel MCPSC in modern processors based on three popular PSC methods, namely, TMalign, CE, and USM. We evaluate and compare the performance and efficiency of the two parallel MCPSC implementations using Intel's experimental many-core Single-Chip Cloud Computer (SCC) as well as Intel's Core i7 multicore processor. We show that the 48-core SCC is more efficient than the latest generation Core i7, achieving a speedup factor of 42 (efficiency of 0.9), making many-core processors an exciting emerging technology for large-scale structural proteomics. We compare and contrast the performance of the two processors on several datasets and also show that MCPSC outperforms its component methods in grouping related domains, achieving a high F-measure of 0.91 on the benchmark CK34 dataset. The software implementation for protein structure comparison using the three methods and combined MCPSC, along with the developed underlying rckskel algorithmic skeletons library, is available via GitHub. PMID:26605332

  6. Efficient Multicriteria Protein Structure Comparison on Modern Processor Architectures.

    PubMed

    Sharma, Anuj; Manolakos, Elias S

    2015-01-01

    Fast increasing computational demand for all-to-all protein structures comparison (PSC) is a result of three confounding factors: rapidly expanding structural proteomics databases, high computational complexity of pairwise protein comparison algorithms, and the trend in the domain towards using multiple criteria for protein structures comparison (MCPSC) and combining results. We have developed a software framework that exploits many-core and multicore CPUs to implement efficient parallel MCPSC in modern processors based on three popular PSC methods, namely, TMalign, CE, and USM. We evaluate and compare the performance and efficiency of the two parallel MCPSC implementations using Intel's experimental many-core Single-Chip Cloud Computer (SCC) as well as Intel's Core i7 multicore processor. We show that the 48-core SCC is more efficient than the latest generation Core i7, achieving a speedup factor of 42 (efficiency of 0.9), making many-core processors an exciting emerging technology for large-scale structural proteomics. We compare and contrast the performance of the two processors on several datasets and also show that MCPSC outperforms its component methods in grouping related domains, achieving a high F-measure of 0.91 on the benchmark CK34 dataset. The software implementation for protein structure comparison using the three methods and combined MCPSC, along with the developed underlying rckskel algorithmic skeletons library, is available via GitHub.

  7. Results of interlaboratory comparison of fission track ages for 1992 fission track workshop

    USGS Publications Warehouse

    Miller, D.S.; Crowley, K.D.; Dokka, R.K.; Galbraith, R.F.; Kowallis, B.J.; Naeser, C.W.

    1993-01-01

    Two apatites and one sphene were made available to the fission track research community for analysis prior to the 1992 Fission Track Workshop held in Philadelphia, U.S.A., 13-17 July. Eighteen laboratories throughout the world received aliquots of apatite and sphene. To date, analyses by 33 different scientists representing 15 different laboratories have been received. Relative to the previous two interlaboratory comparisons, there is a noticeable improvement in the accuracy of the age results (Naeser and Cebula, 1978; Naeser et al., 1981; Miller et al., 1985; Miller et al., 1990). Ninety-four percent of the analysts used the external detector method (EDM) combined with the zeta technique, while the remaining individuals used the population method (POP). Track length measurements (requested for the first time in the interlaboratory comparison studies) were in relatively good agreement. © 1993.
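    For context, the external detector method with zeta calibration computes a fission track age from the measured track densities; the standard zeta age equation (not quoted in the abstract) is

```latex
t \;=\; \frac{1}{\lambda_D}\,\ln\!\left(1 + \lambda_D\, \zeta\, g\, \rho_d\, \frac{\rho_s}{\rho_i}\right),
```

    where λ_D is the total decay constant of ²³⁸U, ζ the analyst's personal calibration factor determined on age standards, g a geometry factor, ρ_d the induced track density in the dosimeter glass, and ρ_s, ρ_i the spontaneous and induced track densities. Because ζ absorbs poorly known constants into an empirical calibration per analyst, it is well suited to interlaboratory comparisons like this one.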

  8. A comparison of electronic heterodyne moire deflectometry and electronic heterodyne holographic interferometry for flow measurements

    NASA Technical Reports Server (NTRS)

    Decker, A. J.; Stricker, J.

    1985-01-01

    Electronic heterodyne moire deflectometry and electronic heterodyne holographic interferometry are compared as methods for the accurate measurement of refractive index and density change distributions of phase objects. Experimental results are presented to show that the two methods have comparable accuracy for measuring the first derivative of the interferometric fringe shift. The phase object for the measurements is a large crystal of KD*P, whose refractive index distribution can be changed accurately and repeatably for the comparison. Although the refractive index change causes only about one interferometric fringe shift over the entire crystal, the derivative shows considerable detail for the comparison. As electronic phase measurement methods, both methods are very accurate and are intrinsically compatible with computer controlled readout and data processing. Heterodyne moire is relatively inexpensive and has high variable sensitivity. Heterodyne holographic interferometry is better developed, and can be used with poor quality optical access to the experiment.

  9. Drivers' biased perceptions of speed and safety campaign messages.

    PubMed

    Walton, D; McKeown, P C

    2001-09-01

    One hundred and thirteen drivers were surveyed for their perceptions of driving speed to compare self-reported average speed, perceived average-other speed and the actual average speed, in two conditions (50 and 100 kph zones). These contrasts were used to evaluate whether public safety messages concerning speeding effectively reach their target audience. Evidence is presented supporting the hypothesis that drivers who have a biased perception of their own speed relative to others are more likely to ignore advertising campaigns encouraging people not to speed. A method of self-other-actual comparisons detects biased perceptions when the standard method of self-other comparison does not. In particular, drivers exaggerate the perceived speed of others and this fact is masked using traditional methods. The method of manipulation is proposed as a way to evaluate the effect of future advertising campaigns, and a strategy for such campaigns is proposed based on the results of the self-other comparisons.

  10. Headspace profiling of cocaine samples for intelligence purposes.

    PubMed

    Dujourdy, Laurence; Besacier, Fabrice

    2008-08-06

    A method for determination of residual solvents in illicit hydrochloride cocaine samples using static headspace-gas chromatography (HS-GC) associated with a storage computerized procedure is described for the profiling and comparison of seizures. The system involves a gas chromatographic separation of 18 occluded solvents followed by fully automatic data analysis and transfer to a PHP/MySQL database. First, a fractional factorial design was used to evaluate the main effects of some critical method parameters (salt choice, vial agitation intensity, oven temperature, pressurization and loop equilibration) on the results with a minimum of experiments. The method was then validated for tactical intelligence purposes (batch comparison) via several studies: selection of solvents and mathematical comparison tool, reproducibility and "cutting" influence studies. The decision threshold to determine the similarity of two samples was set and false positives and negatives evaluated. Finally, application of the method to distinguish geographical origins is discussed.

  11. COOMET pilot comparison 473/RU-a/09: Comparison of hydrophone calibrations in the frequency range 250 Hz to 200 kHz

    NASA Astrophysics Data System (ADS)

    Yi, Chen; Isaev, A. E.; Yuebing, Wang; Enyakov, A. M.; Teng, Fei; Matveev, A. N.

    2011-01-01

    A description is given of the COOMET project 473/RU-a/09: a pilot comparison of hydrophone calibrations at frequencies from 250 Hz to 200 kHz between the Hangzhou Applied Acoustics Research Institute (HAARI, China), the pilot laboratory, and the Russian National Research Institute for Physicotechnical and Radio Engineering Measurements (VNIIFTRI, Designated Institute of Russia of the CIPM MRA). Two standard hydrophones, B&K 8104 and TC 4033, were calibrated and compared to assess the current state of hydrophone calibration at HAARI and VNIIFTRI. Three different calibration methods were applied: a vibrating column method, a free-field reciprocity method and a comparison method. The standard facilities of each laboratory were used, and three different sound fields were applied: pressure field, free field and reverberant field. The maximum deviation of the sensitivities of the two hydrophones between the participants' results was 0.36 dB. The final report has been peer-reviewed and approved for publication by the CCAUV-KCWG.

  12. Quantification of Ice Accretions for Icing Scaling Evaluations

    NASA Technical Reports Server (NTRS)

    Ruff, Gary A.; Anderson, David N.

    2003-01-01

    The comparison of ice accretion characteristics is an integral part of aircraft icing research. It is often necessary to compare an ice accretion obtained from a flight test or numerical simulation to one produced in an icing wind tunnel or for validation of an icing scaling method. Traditionally, this has been accomplished by overlaying two-dimensional tracings of ice accretion shapes. This paper addresses the basic question of how to compare ice accretions using more quantitative methods. For simplicity, geometric characteristics of the ice accretions are used for the comparison. One method evaluated is a direct comparison of the percent differences of the geometric measurements. The second method inputs these measurements into a fuzzy inference system to obtain a single measure of the goodness of the comparison. The procedures are demonstrated by comparing ice shapes obtained in the Icing Research Tunnel at NASA Glenn Research Center during recent icing scaling tests. The results demonstrate that this type of analysis is useful in quantifying the similarity of ice accretion shapes and that the procedures should be further developed by expanding the analysis to additional icing data sets.
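
    The percent-difference comparison described above can be sketched as follows; the geometric measures, their names and their values are invented for illustration and are not the paper's actual measurement set:

```python
# Hypothetical sketch: compare two ice accretion shapes by the percent
# difference of a few geometric characteristics (names/values are invented).

def percent_difference(reference, test):
    """Percent difference of each geometric measure relative to the reference."""
    return {k: 100.0 * abs(test[k] - reference[k]) / abs(reference[k])
            for k in reference}

# Illustrative measurements (thicknesses in mm, horn angle in degrees).
tunnel_shape = {"max_thickness": 12.4, "horn_angle": 48.0, "icing_limit": 55.0}
scaled_shape = {"max_thickness": 11.8, "horn_angle": 51.0, "icing_limit": 53.5}

diffs = percent_difference(tunnel_shape, scaled_shape)
overall = sum(diffs.values()) / len(diffs)  # crude single goodness score
for name, d in sorted(diffs.items()):
    print(f"{name}: {d:.1f}%")
print(f"mean percent difference: {overall:.1f}%")
```

    Folding the per-measure differences into one score mirrors the paper's goal of a single measure of comparison goodness, though the paper's second method does this with a fuzzy inference system rather than a plain mean.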

  13. Objective comparison of particle tracking methods

    PubMed Central

    Chenouard, Nicolas; Smal, Ihor; de Chaumont, Fabrice; Maška, Martin; Sbalzarini, Ivo F.; Gong, Yuanhao; Cardinale, Janick; Carthel, Craig; Coraluppi, Stefano; Winter, Mark; Cohen, Andrew R.; Godinez, William J.; Rohr, Karl; Kalaidzidis, Yannis; Liang, Liang; Duncan, James; Shen, Hongying; Xu, Yingke; Magnusson, Klas E. G.; Jaldén, Joakim; Blau, Helen M.; Paul-Gilloteaux, Perrine; Roudot, Philippe; Kervrann, Charles; Waharte, François; Tinevez, Jean-Yves; Shorte, Spencer L.; Willemse, Joost; Celler, Katherine; van Wezel, Gilles P.; Dan, Han-Wei; Tsai, Yuh-Show; de Solórzano, Carlos Ortiz; Olivo-Marin, Jean-Christophe; Meijering, Erik

    2014-01-01

    Particle tracking is of key importance for quantitative analysis of intracellular dynamic processes from time-lapse microscopy image data. Since manually detecting and following large numbers of individual particles is not feasible, automated computational methods have been developed for these tasks by many groups. Aiming to perform an objective comparison of methods, we gathered the community and organized, for the first time, an open competition, in which participating teams applied their own methods independently to a commonly defined data set including diverse scenarios. Performance was assessed using commonly defined measures. Although no single method performed best across all scenarios, the results revealed clear differences between the various approaches, leading to important practical conclusions for users and developers. PMID:24441936

  14. Computation of viscous blast wave flowfields

    NASA Technical Reports Server (NTRS)

    Atwood, Christopher A.

    1991-01-01

    A method to determine unsteady solutions of the Navier-Stokes equations was developed and applied. The structured finite-volume, approximately factored implicit scheme uses Newton subiterations to obtain the spatially and temporally second-order accurate time history of the interaction of blast waves with stationary targets. The inviscid flux is evaluated using MacCormack's modified Steger-Warming flux or Roe flux-difference splittings with total variation diminishing limiters, while the viscous flux is computed using central differences. The use of implicit boundary conditions in conjunction with a telescoping-in-time-and-space method permitted solutions to this strongly unsteady class of problems. Comparisons of numerical, analytical, and experimental results were made in two and three dimensions. These comparisons revealed accurate wave-speed resolution with nonoscillatory discontinuity capturing. The purpose of this effort was to address the three-dimensional, viscous blast-wave problem. Test cases were undertaken to reveal the methods' weaknesses in three regimes: (1) viscous-dominated flow; (2) complex unsteady flow; and (3) three-dimensional flow. Comparisons of these computations to analytic and experimental results provided initial validation of the resultant code. Additional details on the numerical method and on the validation can be found in the appendix. Presently, the code is capable of single-zone computations with selection of any permutation of solid-wall or flow-through boundaries.

  15. Comparison of linear and non-linear method in estimating the sorption isotherm parameters for safranin onto activated carbon.

    PubMed

    Kumar, K Vasanth; Sivanesan, S

    2005-08-31

    A comparative analysis of the linear least-squares method and the non-linear method for estimating isotherm parameters was made using experimental equilibrium data for safranin onto activated carbon at two different solution temperatures, 305 and 313 K. Equilibrium data were fitted to the Freundlich, Langmuir and Redlich-Peterson isotherm equations. All three isotherm equations fit the experimental equilibrium data well. The results showed that the non-linear method could be a better way to obtain the isotherm parameters. The Redlich-Peterson isotherm is a special case of the Langmuir isotherm when the Redlich-Peterson isotherm constant g is unity.
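
    A minimal sketch of the two fitting routes for the Langmuir isotherm q = qm·KL·C/(1 + KL·C); the equilibrium data below are synthetic (generated near qm = 100, KL = 0.05), not the safranin measurements:

```python
# Hedged sketch: fit the Langmuir isotherm q = qm*KL*C / (1 + KL*C) two ways:
# (a) the linearized form 1/q = 1/qm + (1/(qm*KL)) * (1/C) via least squares,
# (b) a direct non-linear grid search on the untransformed data.

def ols(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    slope = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
             / sum((xi - xm) ** 2 for xi in x))
    return slope, ym - slope * xm

def langmuir(C, qm, KL):
    return qm * KL * C / (1.0 + KL * C)

# Synthetic equilibrium data, generated near qm = 100, KL = 0.05.
Ce = [5.0, 10.0, 20.0, 40.0, 80.0, 160.0]
qe = [20.1, 33.0, 50.2, 66.5, 80.3, 88.9]

# (a) Linearized: regress 1/qe on 1/Ce, then back out qm and KL.
slope, intercept = ols([1.0 / c for c in Ce], [1.0 / q for q in qe])
qm_lin, KL_lin = 1.0 / intercept, intercept / slope

# (b) Non-linear: minimize the sum of squared errors on the original scale.
def sse(qm, KL):
    return sum((q - langmuir(c, qm, KL)) ** 2 for c, q in zip(Ce, qe))

_, qm_nl, KL_nl = min(
    (sse(qm, KL), qm, KL)
    for qm in [80.0 + 0.5 * i for i in range(81)]       # 80 .. 120
    for KL in [0.01 + 0.001 * j for j in range(91)])    # 0.01 .. 0.10

print(f"linearized : qm={qm_lin:.1f}, KL={KL_lin:.4f}")
print(f"non-linear : qm={qm_nl:.1f}, KL={KL_nl:.4f}")
```

    The point of the paper's comparison is that the reciprocal transform in (a) reweights the measurement errors, so (b), which fits on the original scale, can give different (and often better) parameter estimates.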

  16. Comparison between two lidar methods to retrieve microphysical properties of liquid-water clouds

    NASA Astrophysics Data System (ADS)

    Jimenez, Cristofer; Ansmann, Albert; Donovan, David; Engelmann, Ronny; Schmidt, Jörg; Wandinger, Ulla

    2018-04-01

    Since 2010, a Raman dual-FOV lidar system has permitted the retrieval of microphysical properties of liquid-water clouds during nighttime. A new robust lidar depolarization approach was recently introduced that permits the retrieval of these properties as well, with high temporal resolution and during daytime. To implement this approach, the lidar system was upgraded by adding a three-channel depolarization receiver. The first preliminary retrieval results and a comparison between the two methods are presented.

  17. Preliminary comparative assessment of PM10 hourly measurement results from new types of monitoring stations using stochastic and exploratory methodology and models

    NASA Astrophysics Data System (ADS)

    Czechowski, Piotr Oskar; Owczarek, Tomasz; Badyda, Artur; Majewski, Grzegorz; Rogulski, Mariusz; Ogrodnik, Paweł

    2018-01-01

    The paper presents selected key issues from the preliminary stage of a proposed extended equivalence assessment for new portable measurement devices: the comparability of hourly PM10 concentration series with reference station measurement results, evaluated with statistical methods. Technical aspects of the new portable meters are presented. The emphasis is placed on assessing the comparability of the results using a methodology of stochastic and exploratory methods. The concept is based on the observation that a simple comparison of result series in the time domain is insufficient; regularities should be compared in three complementary fields of statistical modeling: time, frequency and space. The proposal is based on models of five annual series of measurement results from the new mobile devices and from the WIOS (Provincial Environmental Protection Inspectorate) reference station located in the city of Nowy Sacz. The obtained results indicate both the completeness of the comparison methodology and the high correspondence of the new devices' measurement results with the reference.

  18. Quantitative comparison of alternative methods for coarse-graining biological networks

    PubMed Central

    Bowman, Gregory R.; Meng, Luming; Huang, Xuhui

    2013-01-01

    Markov models and master equations are a powerful means of modeling dynamic processes like protein conformational changes. However, these models are often difficult to understand because of the enormous number of components and connections between them. Therefore, a variety of methods have been developed to facilitate understanding by coarse-graining these complex models. Here, we employ Bayesian model comparison to determine which of these coarse-graining methods provides the models that are most faithful to the original set of states. We find that the Bayesian agglomerative clustering engine and the hierarchical Nyström expansion graph (HNEG) typically provide the best performance. Surprisingly, the original Perron cluster cluster analysis (PCCA) method often provides the next best results, outperforming the newer PCCA+ method and the most probable paths algorithm. We also show that the differences between the models are qualitatively significant, rather than being minor shifts in the boundaries between states. The performance of the methods correlates well with the entropy of the resulting coarse-grainings, suggesting that finding states with more similar populations (i.e., avoiding low population states that may just be noise) gives better results. PMID:24089717
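
    The entropy criterion mentioned at the end can be sketched as follows, using invented macrostate populations rather than any coarse-graining from the paper:

```python
import math

def coarse_graining_entropy(populations):
    """Shannon entropy of macrostate populations: higher values mean more
    evenly sized macrostates (fewer tiny, possibly noise-level states)."""
    total = sum(populations)
    return -sum((p / total) * math.log(p / total) for p in populations if p > 0)

# Two invented lumpings of 1000 microstate counts into 4 macrostates.
balanced = [250, 250, 250, 250]   # even populations
skewed = [970, 10, 10, 10]        # one dominant state plus noise-sized ones

print(coarse_graining_entropy(balanced))  # ln(4) = 1.386...
print(coarse_graining_entropy(skewed))
```

    Under this measure the balanced lumping scores higher, matching the paper's observation that coarse-grainings with more similar state populations tend to be more faithful.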

  19. Paradigm Diagnostics Salmonella Indicator Broth (PDX-SIB) for detection of Salmonella on selected environmental surfaces.

    PubMed

    Olstein, Alan; Griffith, Leena; Feirtag, Joellen; Pearson, Nicole

    2013-01-01

    The Paradigm Diagnostics Salmonella Indicator Broth (PDX-SIB) is intended as a single-step selective enrichment indicator broth to be used as a simple screening test for the presence of Salmonella spp. in environmental samples. This method permits the end user to avoid multistep sample processing to identify presumptively positive samples, as exemplified by standard U.S. reference methods. PDX-SIB permits the outgrowth of Salmonella while inhibiting the growth of competitive Gram-negative and -positive microflora. Growth of Salmonella-positive cultures results in a visual color change of the medium from purple to yellow when the sample is grown at 37 +/- 1 degree C. Performance of PDX-SIB has been evaluated in five different categories: inclusivity-exclusivity, methods comparison, ruggedness, lot-to-lot variability, and shelf stability. The inclusivity panel included 100 different Salmonella serovars, 98 of which were SIB-positive during the 30 to 48 h incubation period. The exclusivity panel included 33 different non-Salmonella microorganisms, 31 of which were SIB-negative during the incubation period. Methods comparison studies included four different surfaces: S. Newport on plastic, S. Anatum on sealed concrete, S. Abaetetuba on ceramic tile, and S. Typhimurium in the presence of 1 log excess of Citrobacter freundii. Results of the methods comparison studies demonstrated no statistical difference between the SIB method and the U.S. Food and Drug Administration-Bacteriological Analytical Manual reference method, as measured by the Mantel-Haenszel Chi-square test. Ruggedness studies demonstrated little variation in test results when SIB incubation temperatures were varied over a 34-40 degrees C range. Lot-to-lot consistency results suggest no detectable differences in manufactured goods using two reference Salmonella serovars and one non-Salmonella microorganism.

  20. Comparison of indoor air sampling and dust collection methods for fungal exposure assessment using quantitative PCR

    EPA Science Inventory

    Evaluating fungal contamination indoors is complicated because of the many different sampling methods utilized. In this study, fungal contamination was evaluated using five sampling methods and four matrices for results. The five sampling methods were a 48 hour indoor air sample ...

  1. AN ENZYME LINKED IMMUNOSORBENT ASSAY (ELISA) METHOD FOR MONITORING 2,4 DICHLOROPHENOXYACETIC ACID (2,4-D) EXPOSURES

    EPA Science Inventory

    Abstract describes a streamlined ELISA method developed to quantitatively measure 2,4-D in human urine samples. Method development steps and comparison with gas chromatography/mass spectrometry are presented. Results indicated that the ELISA method could be used as a high throu...

  2. SELECTING SITES FOR COMPARISON WITH CREATED WETLANDS

    EPA Science Inventory

    The paper describes the method used for selecting natural wetlands to compare with created wetlands. The results of the selection process and the advantages and disadvantages of the method are discussed. The random site selection method required extensive field work and may have ...

  3. Solid Propellant Test Motor Scaling

    DTIC Science & Technology

    2001-09-01

    Figure 40. Comparison of Measured and Calculated Strand and Small Motor Burning Rates for Fundamental Studies of HTPB/AP Smokeless Propellants. Figure 41. Agreement Between 2x4 Motor and Strand Burning Rate Data for Non-aluminized HTPB/AP Propellants. Figure 51. Comparison Between Results Obtained with Ultrasonic Method and Standard ...

  4. Exploring Robust Methods for Evaluating Treatment and Comparison Groups in Chronic Care Management Programs

    PubMed Central

    Hamar, Brent; Bradley, Chastity; Gandy, William M.; Harrison, Patricia L.; Sidney, James A.; Coberley, Carter R.; Rula, Elizabeth Y.; Pope, James E.

    2013-01-01

    Evaluation of chronic care management (CCM) programs is necessary to determine the behavioral, clinical, and financial value of the programs. Financial outcomes of members who are exposed to interventions (treatment group) typically are compared to those not exposed (comparison group) in a quasi-experimental study design. However, because member assignment is not randomized, outcomes reported from these designs may be biased or inefficient if study groups are not comparable or balanced prior to analysis. Two matching techniques used to achieve balanced groups are Propensity Score Matching (PSM) and Coarsened Exact Matching (CEM). Unlike PSM, CEM has been shown to yield estimates of causal (program) effects that are lowest in variance and bias for any given sample size. The objective of this case study was to provide a comprehensive comparison of these 2 matching methods within an evaluation of a CCM program administered to a large health plan during a 2-year time period. Descriptive and statistical methods were used to assess the level of balance between comparison and treatment members pre matching. Compared with PSM, CEM retained more members, achieved better balance between matched members, and resulted in a statistically insignificant Wald test statistic for group aggregation. In terms of program performance, the results showed an overall higher medical cost savings among treatment members matched using CEM compared with those matched using PSM (-$25.57 versus -$19.78, respectively). Collectively, the results suggest CEM is a viable alternative, if not the most appropriate matching method, to apply when evaluating CCM program performance. (Population Health Management 2013;16:35–45) PMID:22788834

  5. Exploring robust methods for evaluating treatment and comparison groups in chronic care management programs.

    PubMed

    Wells, Aaron R; Hamar, Brent; Bradley, Chastity; Gandy, William M; Harrison, Patricia L; Sidney, James A; Coberley, Carter R; Rula, Elizabeth Y; Pope, James E

    2013-02-01

    Evaluation of chronic care management (CCM) programs is necessary to determine the behavioral, clinical, and financial value of the programs. Financial outcomes of members who are exposed to interventions (treatment group) typically are compared to those not exposed (comparison group) in a quasi-experimental study design. However, because member assignment is not randomized, outcomes reported from these designs may be biased or inefficient if study groups are not comparable or balanced prior to analysis. Two matching techniques used to achieve balanced groups are Propensity Score Matching (PSM) and Coarsened Exact Matching (CEM). Unlike PSM, CEM has been shown to yield estimates of causal (program) effects that are lowest in variance and bias for any given sample size. The objective of this case study was to provide a comprehensive comparison of these 2 matching methods within an evaluation of a CCM program administered to a large health plan during a 2-year time period. Descriptive and statistical methods were used to assess the level of balance between comparison and treatment members pre matching. Compared with PSM, CEM retained more members, achieved better balance between matched members, and resulted in a statistically insignificant Wald test statistic for group aggregation. In terms of program performance, the results showed an overall higher medical cost savings among treatment members matched using CEM compared with those matched using PSM (-$25.57 versus -$19.78, respectively). Collectively, the results suggest CEM is a viable alternative, if not the most appropriate matching method, to apply when evaluating CCM program performance.
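
    The coarsening-and-exact-matching idea described above can be sketched as follows; the member records, covariates and bin cut-points are invented for illustration and are not from the study:

```python
# Minimal sketch of Coarsened Exact Matching (CEM) with invented member
# records (age, prior cost in $): coarsen each covariate into bins, then keep
# only members whose bin signature (stratum) appears in both groups.

def coarsen(age, cost):
    """Bin signature: decade of age, cost in $1000 bands (illustrative cuts)."""
    return (age // 10, int(cost // 1000))

treatment = [(34, 2500), (47, 4100), (62, 900), (58, 3300)]
comparison = [(38, 2900), (45, 4800), (71, 700), (55, 3600), (61, 5200)]

shared = ({coarsen(a, c) for a, c in treatment}
          & {coarsen(a, c) for a, c in comparison})  # strata in both groups

matched_t = [m for m in treatment if coarsen(*m) in shared]
matched_c = [m for m in comparison if coarsen(*m) in shared]
print(f"retained {len(matched_t)} treatment, {len(matched_c)} comparison members")
```

    Members whose stratum has no counterpart in the other group are pruned, which is how CEM enforces balance directly rather than matching on an estimated propensity score.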

  6. Comparison of Dam Breach Parameter Estimators

    DTIC Science & Technology

    2008-01-01

    ... of the methods, when used in the HEC-RAS simulation model, produced comparable results. The methods tested suggest use of ... characteristics of a dam breach, use of those parameters within the unsteady flow routing model HEC-RAS, and the computation and display of the resulting ... implementation of these breach parameters in ...

  7. Interactive Visual Least Absolutes Method: Comparison with the Least Squares and the Median Methods

    ERIC Educational Resources Information Center

    Kim, Myung-Hoon; Kim, Michelle S.

    2016-01-01

    A visual regression analysis using the least absolutes method (LAB) was developed, utilizing an interactive approach of visually minimizing the sum of the absolute deviations (SAB) using a bar graph in Excel; the results agree very well with those obtained from nonvisual LAB using a numerical Solver in Excel. These LAB results were compared with…
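
    The least-absolutes idea can be sketched without Excel, minimizing the sum of the absolute deviations (SAB) over slope and intercept with a simple shrinking compass search; the data points (including one outlier) are illustrative:

```python
# Sketch of the least-absolutes (LAB) fit: minimize the sum of absolute
# deviations (SAB) over slope m and intercept b with a shrinking compass
# search (SAB is convex in (m, b), so this converges well in practice).

def sab(m, b, pts):
    """Sum of absolute deviations of pts from the line y = m*x + b."""
    return sum(abs(y - (m * x + b)) for x, y in pts)

def lab_fit(pts, m=0.0, b=0.0, span=8.0, tol=1e-6):
    dirs = ((1, 0), (-1, 0), (0, 1), (0, -1),
            (1, 1), (1, -1), (-1, 1), (-1, -1))
    while span > tol:
        moved = True
        while moved:                      # step while any direction improves
            moved = False
            for dm, db in dirs:
                if sab(m + dm * span, b + db * span, pts) < sab(m, b, pts):
                    m, b = m + dm * span, b + db * span
                    moved = True
        span *= 0.5                       # then shrink the step size
    return m, b

# One outlier at x = 4 pulls a least-squares line badly but barely moves LAB.
points = [(0, 1.0), (1, 3.1), (2, 4.9), (3, 7.0), (4, 30.0)]
m, b = lab_fit(points)
print(f"LAB fit: slope={m:.2f}, intercept={b:.2f}, SAB={sab(m, b, points):.2f}")
```

    This mirrors the robustness point implicit in comparing LAB with least squares and the median method: the absolute-deviation criterion down-weights the outlier instead of squaring it.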

  8. Measuring solids concentration in stormwater runoff: comparison of analytical methods.

    PubMed

    Clark, Shirley E; Siu, Christina Y S

    2008-01-15

    Stormwater suspended solids typically are quantified using one of two methods: aliquot/subsample analysis (total suspended solids [TSS]) or whole-sample analysis (suspended solids concentration [SSC]). Interproject comparisons are difficult because of inconsistencies in the methods and in their application. To address this concern, the suspended solids content has been measured using both methodologies in many current projects, but the question remains about how to compare these values with historical water-quality data where the analytical methodology is unknown. This research was undertaken to determine the effect of analytical methodology on the relationship between these two methods of determination of the suspended solids concentration, including the effect of aliquot selection/collection method and of particle size distribution (PSD). The results showed that SSC was best able to represent the known sample concentration and that the results were independent of the sample's PSD. Correlations between the results and the known sample concentration could be established for TSS samples, but they were highly dependent on the sample's PSD and on the aliquot collection technique. These results emphasize the need to report not only the analytical method but also the particle size information on the solids in stormwater runoff.

  9. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.

    PubMed

    Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun

    2017-09-21

    The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
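
    One building block for such comparisons is a numerically stable log-determinant. The sketch below computes it via a Cholesky factorization and, as one simple illustrative option, applies ridge-style shrinkage toward the identity (a hypothetical choice, not one of the paper's eight estimators):

```python
import math

def cholesky(A):
    """Lower-triangular Cholesky factor of a symmetric positive-definite matrix."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def log_det(A):
    """log det(A) via Cholesky: 2 * sum of log-diagonal of the factor.
    Safer than forming det(A) itself, which under/overflows as the
    dimension grows."""
    L = cholesky(A)
    return 2.0 * sum(L[i][i] and math.log(L[i][i]) for i in range(len(A)))

# Toy 2x2 sample covariance, plus ridge-style shrinkage toward the identity
# (one simple, hypothetical way to keep the estimate positive definite).
S = [[2.0, 0.8], [0.8, 1.0]]
alpha = 0.1
S_shrunk = [[(1 - alpha) * S[i][j] + (alpha if i == j else 0.0)
             for j in range(2)] for i in range(2)]

print(log_det(S))         # det(S) = 2*1 - 0.8^2 = 1.36, so log(1.36)
print(log_det(S_shrunk))
```

    In the high-dimensional setting the paper studies, the sample covariance itself is singular when p ≥ n, which is why regularized estimators (of which the shrinkage above is only a toy example) are needed before any determinant can be taken.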

  10. Comparison of phase velocities from array measurements of Rayleigh waves associated with microtremor and results calculated from borehole shear-wave velocity profiles

    USGS Publications Warehouse

    Liu, Hsi-Ping; Boore, David M.; Joyner, William B.; Oppenheimer, David H.; Warrick, Richard E.; Zhang, Wenbo; Hamilton, John C.; Brown, Leo T.

    2000-01-01

    Shear-wave velocities (VS) are widely used for earthquake ground-motion site characterization. VS data are now largely obtained using borehole methods. Drilling holes, however, is expensive. Nonintrusive surface methods are inexpensive for obtaining VS information, but not many comparisons with direct borehole measurements have been published. Because different assumptions are used in data interpretation of each surface method and public safety is involved in site characterization for engineering structures, it is important to validate the surface methods by additional comparisons with borehole measurements. We compare results obtained from a particular surface method (array measurement of surface waves associated with microtremor) with results obtained from borehole methods. Using a 10-element nested-triangular array of 100-m aperture, we measured surface-wave phase velocities at two California sites, Garner Valley near Hemet and Hollister Municipal Airport. The Garner Valley site is located at an ancient lake bed where water-saturated sediment overlies decomposed granite on top of granite bedrock. Our array was deployed at a location where seismic velocities had been determined to a depth of 500 m by borehole methods. At Hollister, where the near-surface sediment consists of clay, sand, and gravel, we determined phase velocities using an array located close to a 60-m deep borehole where downhole velocity logs already exist. Because we want to assess the measurements uncomplicated by uncertainties introduced by the inversion process, we compare our phase-velocity results with the borehole VS depth profile by calculating fundamental-mode Rayleigh-wave phase velocities from an earth model constructed from the borehole data. For wavelengths less than ~2 times of the array aperture at Garner Valley, phase-velocity results from array measurements agree with the calculated Rayleigh-wave velocities to better than 11%. 
Measurement errors become larger for wavelengths 2 times greater than the array aperture. At Hollister, the measured phase velocity at 3.9 Hz (near the upper edge of the microtremor frequency band) is within 20% of the calculated Rayleigh-wave velocity. Because shear-wave velocity is the predominant factor controlling Rayleigh-wave phase velocities, the comparisons suggest that this nonintrusive method can provide VS information adequate for ground-motion estimation.

  11. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing

    PubMed Central

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-01-01

    Aims A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R2), using R2 as the primary metric of assay agreement. However, the use of R2 alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. Methods We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Results Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. Conclusions The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. PMID:28747393
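
    The core Bland-Altman computation (bias and 95% limits of agreement) can be sketched as follows; the paired variant-allele-fraction values are hypothetical, not data from the validation experiments:

```python
import math

def bland_altman(x, y):
    """Bias (mean of paired differences y - x) and 95% limits of agreement."""
    diffs = [b - a for a, b in zip(x, y)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical variant allele fractions (%) from a previously validated
# assay (x) and an NGS assay under validation (y): y carries a small
# constant offset, the kind of error that a high R2 alone would not flag.
x = [5.0, 10.0, 20.0, 30.0, 40.0, 50.0]
y = [6.1, 10.8, 21.2, 30.9, 41.1, 51.0]

bias, lo, hi = bland_altman(x, y)
print(f"bias={bias:.2f}, limits of agreement=({lo:.2f}, {hi:.2f})")
```

    A nonzero bias with tight limits of agreement indicates a constant error; a difference that grows with the measurement magnitude would instead suggest proportional error, which the regression-based analyses in the paper are designed to expose.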

  12. How to Quantify Penile Corpus Cavernosum Structures with Histomorphometry: Comparison of Two Methods

    PubMed Central

    Felix-Patrício, Bruno; De Souza, Diogo Benchimol; Gregório, Bianca Martins; Costa, Waldemar Silva; Sampaio, Francisco José

    2015-01-01

    The use of morphometrical tools in biomedical research permits the accurate comparison of specimens subjected to different conditions, and the surface density of structures is commonly used for this purpose. The traditional point-counting method is reliable but time-consuming, with computer-aided methods being proposed as an alternative. The aim of this study was to compare the surface density data of penile corpus cavernosum trabecular smooth muscle in different groups of rats, measured by two observers using the point-counting or color-based segmentation method. Ten normotensive and 10 hypertensive male rats were used in this study. Rat penises were processed to obtain smooth muscle immunostained histological slices and photomicrographs captured for analysis. The smooth muscle surface density was measured in both groups by two different observers by the point-counting method and by the color-based segmentation method. Hypertensive rats showed an increase in smooth muscle surface density by the two methods, and no difference was found between the results of the two observers. However, surface density values were higher by the point-counting method. The use of either method did not influence the final interpretation of the results, and both proved to have adequate reproducibility. However, as differences were found between the two methods, results obtained by either method should not be compared. PMID:26413547

  13. How to Quantify Penile Corpus Cavernosum Structures with Histomorphometry: Comparison of Two Methods.

    PubMed

    Felix-Patrício, Bruno; De Souza, Diogo Benchimol; Gregório, Bianca Martins; Costa, Waldemar Silva; Sampaio, Francisco José

    2015-01-01

    The use of morphometrical tools in biomedical research permits the accurate comparison of specimens subjected to different conditions, and the surface density of structures is commonly used for this purpose. The traditional point-counting method is reliable but time-consuming, with computer-aided methods being proposed as an alternative. The aim of this study was to compare the surface density data of penile corpus cavernosum trabecular smooth muscle in different groups of rats, measured by two observers using the point-counting or color-based segmentation method. Ten normotensive and 10 hypertensive male rats were used in this study. Rat penises were processed to obtain smooth muscle immunostained histological slices and photomicrographs captured for analysis. The smooth muscle surface density was measured in both groups by two different observers by the point-counting method and by the color-based segmentation method. Hypertensive rats showed an increase in smooth muscle surface density by the two methods, and no difference was found between the results of the two observers. However, surface density values were higher by the point-counting method. The use of either method did not influence the final interpretation of the results, and both proved to have adequate reproducibility. However, as differences were found between the two methods, results obtained by either method should not be compared.

  14. Performance comparison of ISAR imaging method based on time frequency transforms

    NASA Astrophysics Data System (ADS)

    Xie, Chunjian; Guo, Chenjiang; Xu, Jiadong

    2013-03-01

    Inverse synthetic aperture radar (ISAR) can image moving targets, especially airborne targets, which makes it important in air defence and missile defence systems. Time-frequency transforms are widely applied in the ISAR imaging process, and several of them are introduced here. Noise jamming methods are analysed; when noise jamming is added to the echo at the ISAR receiver, the image becomes blurred and may even become unidentifiable, but the severity of this effect differs among the time-frequency analyses. Simulation results show the performance comparison of the methods.
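    As background, the simplest of the time-frequency transforms involved, the short-time Fourier transform, can be sketched in a few lines (illustrative only; the sampling rate, window length, and test tone are invented, not parameters from the paper):

```python
import numpy as np

def stft_magnitude(signal, win_len, hop):
    """Minimal short-time Fourier transform magnitude: the basic
    time-frequency representation behind range-Doppler ISAR imaging."""
    window = np.hanning(win_len)
    frames = [signal[i:i + win_len] * window
              for i in range(0, len(signal) - win_len + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1))

# A pure tone should concentrate its energy in a single frequency bin.
fs, n = 1000, 1024
t = np.arange(n) / fs
tone = np.sin(2 * np.pi * 125.0 * t)       # 125 Hz = bin 16 of a 128-pt FFT
spec = stft_magnitude(tone, win_len=128, hop=64)
```

    Adding broadband noise to `tone` spreads energy across all bins of `spec`, which is the mechanism by which noise jamming blurs the resulting image.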

  15. A comparison of predictions obtained from wind tunnel tests and the results from cruising flight: Airbus and Concorde. [conferences

    NASA Technical Reports Server (NTRS)

    Berger, J.

    1979-01-01

    Following a summary of the methods used to establish aerodynamic and propulsion data, a comparison was made in the form of the drag (or thrust) difference between flight results and predictions made on the basis of these data. Certain hypotheses and improvements to the aerodynamic data were presented in order to explain the slight deficit found on the Airbus and Concorde.

  16. Comparison of vibrational conductivity and radiative energy transfer methods

    NASA Astrophysics Data System (ADS)

    Le Bot, A.

    2005-05-01

    This paper is concerned with the comparison of two methods well suited for the prediction of the wideband response of built-up structures subjected to high-frequency vibrational excitation. The first method is sometimes called the vibrational conductivity method, and the second is known as the radiosity method in the field of acoustics, or the radiative energy transfer method. Both are based on quite similar physical assumptions, i.e., uncorrelated sources, mean response and high-frequency excitation. Both are based on analogies with equations encountered in the field of heat transfer. However, these models do not lead to similar results. This paper compares the two methods. Some numerical simulations on a pair of plates joined along one edge are provided to illustrate the discussion.

  17. Testing the cosmic anisotropy with supernovae data: Hemisphere comparison and dipole fitting

    NASA Astrophysics Data System (ADS)

    Deng, Hua-Kai; Wei, Hao

    2018-06-01

    The cosmological principle is one of the cornerstones in modern cosmology. It assumes that the universe is homogeneous and isotropic on cosmic scales. Both the homogeneity and the isotropy of the universe should be tested carefully. In the present work, we are interested in probing the possible preferred direction in the distribution of type Ia supernovae (SNIa). To the best of our knowledge, two main methods have been used in almost all of the relevant works in the literature, namely the hemisphere comparison (HC) method and the dipole fitting (DF) method. However, the results from these two methods are not always approximately coincident with each other. In this work, we test the cosmic anisotropy by using these two methods with the joint light-curve analysis (JLA) and simulated SNIa data sets. In many cases, both methods work well, and their results are consistent with each other. However, in the cases with two (or even more) preferred directions, the DF method fails while the HC method still works well. This might shed new light on our understanding of these two methods.
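    A minimal sketch of the hemisphere-comparison idea (not the authors' pipeline; the mock isotropic catalogue, the compared quantity, and the sample size are invented) splits the sample by a trial axis and compares the means in the two halves:

```python
import numpy as np

def hemisphere_asymmetry(directions, values, axis):
    """Hemisphere-comparison statistic: split the sample into the two
    hemispheres defined by `axis` and return the normalized difference
    of the mean values measured (or fitted) in each half."""
    up = directions @ axis > 0
    m_up, m_down = values[up].mean(), values[~up].mean()
    return (m_up - m_down) / ((m_up + m_down) / 2)

# Isotropic mock sample: the asymmetry should be small for any axis.
rng = np.random.default_rng(0)
n = 5000
dirs = rng.normal(size=(n, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
vals = 70.0 + rng.normal(scale=0.5, size=n)   # e.g. mock per-object H0 values
delta = hemisphere_asymmetry(dirs, vals, np.array([0.0, 0.0, 1.0]))
```

    In a real analysis the axis is scanned over the sky and the direction maximizing the asymmetry is reported; a dipole fit instead imposes a single cos-theta modulation, which is why it can miss configurations with more than one preferred direction.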

  18. A comparison of treatment effectiveness between the CAD/CAM method and the manual method for managing adolescent idiopathic scoliosis.

    PubMed

    Wong, M S; Cheng, J C Y; Lo, K H

    2005-04-01

    The treatment effectiveness of the CAD/CAM method and the manual method in managing adolescent idiopathic scoliosis (AIS) was compared. Forty subjects were recruited, twenty for each method. The clinical parameters, namely Cobb's angle and apical vertebral rotation, were evaluated at the pre-brace and immediate in-brace visits. The results demonstrated that orthotic treatments rendered by the CAD/CAM method and the conventional manual method were effective in providing initial control of Cobb's angle. Significant decreases (p < 0.05) were found between the pre-brace and immediate in-brace visits for both methods. The mean reductions of Cobb's angle were 12.8 degrees (41.9%) for the CAD/CAM method and 9.8 degrees (32.1%) for the manual method. Initial control of apical vertebral rotation was not shown in this study. In the comparison between the CAD/CAM method and the manual method, no significant difference was found in the control of Cobb's angle and apical vertebral rotation. The current study demonstrated that the CAD/CAM method can provide similar results in the initial stage of treatment as compared with the manual method.

  19. Comparison of urine analysis using manual and sedimentation methods.

    PubMed

    Kurup, R; Leich, M

    2012-06-01

    Microscopic examination of urine sediment is an essential part of the evaluation of renal and urinary tract diseases. Traditionally, urine sediments are assessed by microscopic examination of centrifuged urine; however, the current method used by the Georgetown Public Hospital Corporation Medical Laboratory involves uncentrifuged urine. To ensure a high level of care, the results provided to the physician must be accurate and reliable for proper diagnosis. The aim of this study was to determine whether the centrifuged method is more clinically useful than the uncentrifuged method. In this study, a comparison between the results obtained from the centrifuged and uncentrifuged methods was performed. A total of 167 urine samples were randomly collected and analysed during the period April-May 2010 at the Medical Laboratory, Georgetown Public Hospital Corporation. The urine samples were first analysed microscopically by the uncentrifuged method, and then by the centrifuged method. The results obtained from both methods were recorded in a log book, then entered into a database created in Microsoft Excel and analysed for differences and similarities using this application. Analysis was further done in SPSS software to compare the results using Pearson's correlation. When compared using Pearson's correlation coefficient analysis, both methods showed a good correlation between urinary sediments, with the exception of white blood cells. The centrifuged method had a slightly higher identification rate for all of the parameters. There is substantial agreement between the centrifuged and uncentrifuged methods; however, the uncentrifuged method provides a more rapid turnaround time.
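    The Pearson correlation used to compare the two methods can be computed directly from the paired counts; a minimal sketch (the paired counts below are invented, not data from this study) is:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient for paired measurements of the
    same specimens by two methods."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Invented paired cell counts: centrifuging concentrates the sediment,
# so counts run higher but track the uncentrifuged counts linearly.
uncentrifuged = [0, 1, 2, 4, 8, 15]
centrifuged = [1, 3, 5, 9, 17, 31]
r = pearson_r(uncentrifuged, centrifuged)
```

    A high r indicates the two methods rank samples the same way even when one systematically reports higher counts, which is the sense in which the study found "good correlation" despite the centrifuged method's higher identification rate.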

  20. A comparison theorem for the SOR iterative method

    NASA Astrophysics Data System (ADS)

    Sun, Li-Ying

    2005-09-01

    In 1997, Kohno et al. reported numerically that the improving modified Gauss-Seidel method, referred to as the IMGS method, is superior to the SOR iterative method. In this paper, we prove that the spectral radius of the IMGS method is smaller than that of the SOR method and the Gauss-Seidel method if the relaxation parameter ω ∈ (0, 1]. As a result, we prove theoretically that this method succeeds in improving the convergence of some classical iterative methods. Some recent results are also improved.
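    The quantities compared in such theorems can be checked numerically for a concrete matrix. The sketch below (an invented 3 × 3 Z-matrix, not one from the paper) builds the Gauss-Seidel and SOR iteration matrices and computes their spectral radii:

```python
import numpy as np

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

def splitting(A):
    """A = D - L - U with D diagonal, L strictly lower, U strictly upper."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    return D, L, U

def gauss_seidel_matrix(A):
    D, L, U = splitting(A)
    return np.linalg.solve(D - L, U)

def sor_matrix(A, omega):
    D, L, U = splitting(A)
    return np.linalg.solve(D - omega * L, (1 - omega) * D + omega * U)

# A small strictly diagonally dominant Z-matrix (nonpositive off-diagonals).
A = np.array([[4.0, -1.0, -1.0],
              [-1.0, 4.0, -1.0],
              [-1.0, -1.0, 4.0]])

rho_gs = spectral_radius(gauss_seidel_matrix(A))
rho_sor = spectral_radius(sor_matrix(A, omega=0.8))
```

    With ω = 1 the SOR iteration reduces exactly to Gauss-Seidel, and for 0 < ω ≤ 1 on such matrices both iterations converge (spectral radius below one), which is the parameter regime the theorem addresses.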

  1. Recent Results with Transatlantic GeTT Campaign

    DTIC Science & Technology

    1999-12-01

    which are driven by H-masers. Frequent comparisons between GPS CP and TWSTFT throughout the campaign allow a comparison of the long-term stability of...the two entirely independent techniques. Small discrepancies between the time transfer by GPS CP and the time transfer by TWSTFT have been observed...density for the GeTT values in comparison to the other time-transfer methods: two-way satellite time and frequency transfer ( TWSTFT ) and Circular T

  2. A Hybrid On-line Verification Method of Relay Setting

    NASA Astrophysics Data System (ADS)

    Gao, Wangyuan; Chen, Qing; Si, Ji; Huang, Xin

    2017-05-01

    Along with the rapid development of the power industry, grid structures are becoming more sophisticated. The validity and rationality of protective relay settings are vital to the security of power systems, so it is essential to verify the setting values of relays online. Traditional verification methods mainly include the comparison of protection ranges and the comparison of calculated setting values. For on-line verification, verifying speed is the key. Comparing protection ranges gives an accurate verifying result, but the computational burden is heavy and the verifying speed is slow; comparing calculated setting values is much faster, but the verifying result is conservative and inaccurate. Taking overcurrent protection as an example, this paper analyses the advantages and disadvantages of the two traditional methods and proposes a hybrid method of on-line verification that synthesizes the advantages of both. This hybrid method can meet the requirements of accurate on-line verification.

  3. Comparison of some effects of modification of a polylactide surface layer by chemical, plasma, and laser methods

    NASA Astrophysics Data System (ADS)

    Moraczewski, Krzysztof; Rytlewski, Piotr; Malinowski, Rafał; Żenkiewicz, Marian

    2015-08-01

    The article presents the results of studies and a comparison of selected properties of the modified PLA surface layer. The modification was carried out with three methods. In the chemical method, a 0.25 M solution of sodium hydroxide in water and ethanol was utilized. In the plasma method, a 50 W generator was used, which produced plasma in an air atmosphere under reduced pressure. In the laser method, a pulsed ArF excimer laser with a fluence of 60 mJ/cm2 was applied. Polylactide samples were examined using the following techniques: scanning electron microscopy (SEM), atomic force microscopy (AFM), goniometry and X-ray photoelectron spectroscopy (XPS). Images of the surfaces of the modified samples were recorded, contact angles were measured, and surface free energy was calculated. Qualitative and quantitative analyses of the chemical composition of the PLA surface layer were performed as well. Based on these examinations, it was found that the best modification results are obtained using the plasma method.

  4. Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic

    USGS Publications Warehouse

    Chavez, P.S.; Sides, S.C.; Anderson, J.A.

    1991-01-01

    The merging of multisensor image data is becoming a widely used procedure because of the complementary nature of various data sets. Ideally, the method used to merge data sets with high-spatial and high-spectral resolution should not distort the spectral characteristics of the high-spectral resolution data. This paper compares the results of three different methods used to merge the information contents of the Landsat Thematic Mapper (TM) and Satellite Pour l'Observation de la Terre (SPOT) panchromatic data. The comparison is based on spectral characteristics and is made using statistical, visual, and graphical analyses of the results. The three methods used to merge the information contents of the Landsat TM and SPOT panchromatic data were the Hue-Intensity-Saturation (HIS), Principal Component Analysis (PCA), and High-Pass Filter (HPF) procedures. The HIS method distorted the spectral characteristics of the data the most. The HPF method distorted the spectral characteristics the least; the distortions were minimal and difficult to detect. -Authors
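    The HPF idea the comparison favors can be sketched as follows (a schematic toy, not the authors' implementation: the box filter, kernel size, and toy images are invented). Only the high spatial frequencies of the pan image are injected, so the multispectral band keeps its low-frequency spectral content:

```python
import numpy as np

def box_blur(img, k=3):
    """k x k mean filter with edge padding: the low-pass stage."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def hpf_merge(ms_band, pan):
    """High-pass-filter merge: add only the high spatial frequencies
    of the pan image to the co-registered multispectral band."""
    return ms_band + (pan - box_blur(pan))

# Toy data: a flat multispectral band and a pan image with one sharp edge.
ms = np.full((8, 8), 50.0)
pan = np.zeros((8, 8))
pan[:, 4:] = 100.0
fused = hpf_merge(ms, pan)
```

    The fused band gains the pan edge detail while its mean (a proxy for its spectral content) is unchanged, which is the sense in which HPF "distorts the spectral characteristics the least."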

  5. Whole brain fiber-based comparison (FBC)-A tool for diffusion tensor imaging-based cohort studies.

    PubMed

    Zimmerman-Moreno, Gali; Ben Bashat, Dafna; Artzi, Moran; Nefussy, Beatrice; Drory, Vivian; Aizenstein, Orna; Greenspan, Hayit

    2016-02-01

    We present a novel method for fiber-based comparison of diffusion tensor imaging (DTI) scans of groups of subjects. The method entails initial preprocessing and fiber reconstruction by tractography of each brain in its native coordinate system. Several diffusion parameters are sampled along each fiber and used in subsequent comparisons. A spatial correspondence between subjects is established based on geometric similarity between fibers in a template set (several choices for template are explored), and fibers in all other subjects. Diffusion parameters between groups are compared statistically for each template fiber. Results are presented at single fiber resolution. As an initial exploratory step in neurological population studies this method points to the locations affected by the pathology of interest, without requiring a hypothesis. It does not make any grouping assumptions on the fibers and no manual intervention is needed. The framework was applied here to 18 healthy subjects and 23 amyotrophic lateral sclerosis (ALS) patients. The results are compatible with previous findings and with the tract based spatial statistics (TBSS) method. Hum Brain Mapp 37:477-490, 2016. © 2015 Wiley Periodicals, Inc.

  6. A closed-form trim solution yielding minimum trim drag for airplanes with multiple longitudinal-control effectors

    NASA Technical Reports Server (NTRS)

    Goodrich, Kenneth H.; Sliwa, Steven M.; Lallman, Frederick J.

    1989-01-01

    Airplane designs are currently being proposed with a multitude of lifting and control devices. Because of the redundancy in ways to generate moments and forces, there are a variety of strategies for trimming each airplane. A linear optimum trim solution (LOTS) is derived using a Lagrange formulation. LOTS enables the rapid calculation of the longitudinal load distribution resulting in the minimum trim drag in level, steady-state flight for airplanes with a mixture of three or more aerodynamic surfaces and propulsive control effectors. Comparisons of the trim drags obtained using LOTS, a direct constrained optimization method, and several ad hoc methods are presented for vortex-lattice representations of a three-surface airplane and two-surface airplane with thrust vectoring. These comparisons show that LOTS accurately predicts the results obtained from the nonlinear optimization and that the optimum methods result in trim drag reductions of up to 80 percent compared to the ad hoc methods.
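    A closed-form Lagrange solution of this kind has the familiar minimum-norm structure. The sketch below (invented effector count, moment arms, and weighting; not the paper's aerodynamic model) trims a three-effector airplane by minimizing a quadratic cost subject to linear force and moment constraints:

```python
import numpy as np

def quadratic_trim(A, b, W=None):
    """Minimize x^T W x subject to A x = b via the Lagrange closed form
    x = W^{-1} A^T (A W^{-1} A^T)^{-1} b."""
    n = A.shape[1]
    W = np.eye(n) if W is None else W
    Winv = np.linalg.inv(W)
    return Winv @ A.T @ np.linalg.solve(A @ Winv @ A.T, b)

# Hypothetical three-effector trim: the rows impose total lift and zero
# net pitching moment; the numbers are invented for illustration.
A = np.array([[1.0, 1.0, 1.0],    # lift contributions sum to total lift
              [-2.0, 0.5, 3.0]])  # moment arms of the three effectors
b = np.array([1.0, 0.0])          # unit lift, zero net pitching moment
x = quadratic_trim(A, b)
```

    With W chosen to model the drag penalty of loading each surface, the same closed form distributes load among redundant effectors at minimum cost, which is the role LOTS plays in the paper.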

  7. Statistical and Spatial Analysis of Bathymetric Data for the St. Clair River, 1971-2007

    USGS Publications Warehouse

    Bennion, David

    2009-01-01

    To address questions concerning ongoing geomorphic processes in the St. Clair River, selected bathymetric datasets spanning 36 years were analyzed. Comparisons of recent high-resolution datasets covering the upper river indicate a highly variable, active environment. Although statistical and spatial comparisons of the datasets show that some changes to the channel size and shape have taken place during the study period, uncertainty associated with various survey methods and interpolation processes limit the statistically certain results. The methods used to spatially compare the datasets are sensitive to small variations in position and depth that are within the range of uncertainty associated with the datasets. Characteristics of the data, such as the density of measured points and the range of values surveyed, can also influence the results of spatial comparison. With due consideration of these limitations, apparently active and ongoing areas of elevation change in the river are mapped and discussed.

  8. Comparing Performances of Multiple Comparison Methods in Commonly Used 2 × C Contingency Tables.

    PubMed

    Cangur, Sengul; Ankarali, Handan; Pasin, Ozge

    2016-12-01

    This study briefly reviews multiple comparison methods for contingency tables, such as Bonferroni, Holm-Bonferroni, Hochberg, Hommel, Marascuilo, Tukey, Benjamini-Hochberg and Gavrilov-Benjamini-Sarkar, using data obtained from a medical study, and examines their performance in a simulation study comprising 36 scenarios based on a 2 × 4 contingency table. The simulation showed that when the sample size exceeds 100, the methods that preserve the nominal alpha level are Gavrilov-Benjamini-Sarkar, Holm-Bonferroni and Bonferroni. The Marascuilo method was found to be more conservative than Bonferroni. The Type I error rate of the Hommel method was around 2% in all scenarios. Moreover, when the proportions of three of the populations are equal and the proportion of the fourth population lies ±3 standard deviations away from the others, the power of the unadjusted all-pairwise comparison approach is at least slightly higher than that of Gavrilov-Benjamini-Sarkar, Holm-Bonferroni and Bonferroni. Consequently, the Gavrilov-Benjamini-Sarkar and Holm-Bonferroni methods showed the best performance in the simulation; the Hommel and Marascuilo methods are not recommended because of their medium or lower performance. In addition, we have written a Minitab macro for multiple comparisons for use in scientific research.
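    Two of the compared procedures, Bonferroni and Holm-Bonferroni, are easy to state in code; the sketch below (invented p-values, not values from the study) shows why Holm's step-down method is uniformly at least as powerful:

```python
def bonferroni(pvals, alpha=0.05):
    """Single-step Bonferroni: reject H_i iff p_i <= alpha / m."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def holm_bonferroni(pvals, alpha=0.05):
    """Holm's step-down procedure: test the ordered p-values against
    increasingly lenient thresholds alpha/m, alpha/(m-1), ..."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one ordered test fails, all later ones fail too
    return reject

pvals = [0.010, 0.015, 0.030, 0.500]
bon = bonferroni(pvals)        # single threshold 0.05/4 = 0.0125
holm = holm_bonferroni(pvals)  # rejects one more hypothesis here
```

    Both procedures control the family-wise error rate, but Holm's relaxing thresholds let it reject the 0.015 p-value that Bonferroni's fixed 0.0125 cutoff misses.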

  9. Comparison of two target classification techniques

    NASA Astrophysics Data System (ADS)

    Chen, J. S.; Walton, E. K.

    1986-01-01

    Radar target classification techniques based on backscatter measurements in the resonance region (1.0-20.0 MHz) are discussed. Attention is given to two novel methods currently being tested at the radar range of Ohio State University. The methods include: (1) the nearest neighbor (NN) algorithm for determining the radar cross section (RCS) magnitude and range corrected phase at various operating frequencies; and (2) an inverse Fourier transformation of the complex multifrequency radar returns of the time domain, followed by cross correlation analysis. Comparisons are made of the performance of the two techniques as a function of signal-to-error noise ratio for different types of processing. The results of the comparison are discussed in detail.

  10. MkMRCC, APUCC, APUBD calculations of didehydronated species: comparison among calculated through-bond effective exchange integrals for diradicals

    NASA Astrophysics Data System (ADS)

    Saito, Toru; Nishihara, Satomichi; Yamanaka, Shusuke; Kitagawa, Yasutaka; Kawakami, Takashi; Okumura, Mitsutaka; Yamaguchi, Kizashi

    2010-10-01

    Mukherjee's type of multireference coupled-cluster (MkMRCC), approximate spin-projected spin-unrestricted CC (APUCC), and AP spin-unrestricted Brueckner (APUBD) methods were applied to didehydronated ethylene, allyl cation, cis-butadiene, and naphthalene. The focus is on the description of magnetic properties of these diradical species, such as S-T gaps and diradical characters. Several types of orbital sets were examined as reference orbitals for the MkMRCC calculations, and it was found that changing the orbital set does not significantly affect the computational results for these species. Comparison of the MkMRCC results with the APUCC and APUBD results shows that these two types of methods yield similar results, indicating that the quantum spin-corrected UCC and UBD methods can effectively account for both the nondynamical and dynamical correlation effects covered by the MkMRCC methods. It was also shown that appropriately parameterized hybrid density functional theory (DFT) with AP corrections (APUDFT) yielded very accurate data that qualitatively agree with those of the MRCC and APUBD methods. This hierarchy of methods, MRCC, APUCC, and APUDFT, is expected to constitute a series of standard ab initio approaches to radical systems, from which one can choose depending on the size of the system and the required accuracy.

  11. SUPPLEMENTARY COMPARISON: Final report on APMP.PR-S1.1: Bilateral comparison of irradiance responsivity of UVA detectors

    NASA Astrophysics Data System (ADS)

    Huang, Xuebo

    2009-01-01

    In order to assess the performance of the standards and techniques used for calibration and measurement of UVA irradiance responsivity of photodetectors at NMISA, South Africa, a new comparison was decided as a follow-up to comparison APMP.PR-S1. It is registered in the Key Comparison Data Base (KCDB) of the BIPM as a bilateral supplementary comparison, with the identifier APMP.PR-S1.1. The comparison was carried out following the same technical protocol as supplementary comparison APMP.PR-S1. The principle, organization and method of the comparison, as well as the preliminary measurements at the pilot laboratory NMC-A*STAR Singapore, are described in the Final Report of the APMP.PR-S1 comparison. The results of this bilateral comparison show that the NMISA's results lie within ±2% of the comparison reference values of APMP.PR-S1, which is a great improvement. The final report appears in Appendix B of the BIPM key comparison database (kcdb.bipm.org) and has been peer-reviewed and approved for publication by the APMP, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).

  12. Techniques for the characterization of sub-10-fs optical pulses: a comparison

    NASA Astrophysics Data System (ADS)

    Gallmann, L.; Sutter, D. H.; Matuschek, N.; Steinmeyer, G.; Keller, U.

    Several methods have been proposed for the phase and amplitude characterization of sub-10-fs pulses with nJ energies. An overview of these techniques is presented, with a focus on the comparison of second-harmonic generation frequency-resolved optical gating (SHG-FROG) and spectral phase interferometry for direct electric-field reconstruction (SPIDER). We describe a collinear FROG variant based on type-II phase-matching that completely avoids the geometrical blurring artifact and use both this and SPIDER for the characterization of sub-10-fs Ti:sapphire laser pulses. The results of both methods are compared in an extensive statistical analysis. From this first direct experimental comparison of FROG and SPIDER, guidelines for accurate measurements of sub-10-fs pulses are derived. We point out limitations of both methods for pulses in this ultrashort pulse regime.

  13. Evolutionary variational-hemivariational inequalities

    NASA Astrophysics Data System (ADS)

    Carl, Siegfried; Le, Vy K.; Motreanu, Dumitru

    2008-09-01

    We consider an evolutionary quasilinear hemivariational inequality under constraints represented by some closed and convex subset. Our main goal is to systematically develop the method of sub-supersolutions, on the basis of which we then prove existence, comparison, compactness and extremality results. The obtained results are applied to a general obstacle problem. We improve the corresponding results in the recent monograph [S. Carl, V.K. Le, D. Motreanu, Nonsmooth Variational Problems and Their Inequalities: Comparison Principles and Applications, Springer Monogr. Math., Springer, New York, 2007].

  14. MOCCA code for star cluster simulation: comparison with optical observations using COCOA

    NASA Astrophysics Data System (ADS)

    Askar, Abbas; Giersz, Mirek; Pych, Wojciech; Olech, Arkadiusz; Hypki, Arkadiusz

    2016-02-01

    We introduce and present preliminary results from COCOA (Cluster simulatiOn Comparison with ObservAtions) code for a star cluster after 12 Gyr of evolution simulated using the MOCCA code. The COCOA code is being developed to quickly compare results of numerical simulations of star clusters with observational data. We use COCOA to obtain parameters of the projected cluster model. For comparison, a FITS file of the projected cluster was provided to observers so that they could use their observational methods and techniques to obtain cluster parameters. The results show that the similarity of cluster parameters obtained through numerical simulations and observations depends significantly on the quality of observational data and photometric accuracy.

  15. Comparison of different bonding techniques for efficient strain transfer using piezoelectric actuators

    NASA Astrophysics Data System (ADS)

    Ziss, Dorian; Martín-Sánchez, Javier; Lettner, Thomas; Halilovic, Alma; Trevisi, Giovanna; Trotta, Rinaldo; Rastelli, Armando; Stangl, Julian

    2017-04-01

    In this paper, strain transfer efficiencies from a single crystalline piezoelectric lead magnesium niobate-lead titanate substrate to a GaAs semiconductor membrane bonded on top are investigated using state-of-the-art x-ray diffraction (XRD) techniques and finite-element-method (FEM) simulations. Two different bonding techniques are studied, namely, gold-thermo-compression and polymer-based SU8 bonding. Our results show a much higher strain-transfer for the "soft" SU8 bonding in comparison to the "hard" bonding via gold-thermo-compression. A comparison between the XRD results and FEM simulations allows us to explain this unexpected result with the presence of complex interface structures between the different layers.

  16. Comparison of different bonding techniques for efficient strain transfer using piezoelectric actuators

    PubMed Central

    Ziss, Dorian; Martín-Sánchez, Javier; Lettner, Thomas; Halilovic, Alma; Trevisi, Giovanna; Trotta, Rinaldo; Rastelli, Armando; Stangl, Julian

    2017-01-01

    In this paper, strain transfer efficiencies from a single crystalline piezoelectric lead magnesium niobate-lead titanate substrate to a GaAs semiconductor membrane bonded on top are investigated using state-of-the-art x-ray diffraction (XRD) techniques and finite-element-method (FEM) simulations. Two different bonding techniques are studied, namely, gold-thermo-compression and polymer-based SU8 bonding. Our results show a much higher strain-transfer for the “soft” SU8 bonding in comparison to the “hard” bonding via gold-thermo-compression. A comparison between the XRD results and FEM simulations allows us to explain this unexpected result with the presence of complex interface structures between the different layers. PMID:28522879

  17. Comparison of different bonding techniques for efficient strain transfer using piezoelectric actuators.

    PubMed

    Ziss, Dorian; Martín-Sánchez, Javier; Lettner, Thomas; Halilovic, Alma; Trevisi, Giovanna; Trotta, Rinaldo; Rastelli, Armando; Stangl, Julian

    2017-04-01

    In this paper, strain transfer efficiencies from a single crystalline piezoelectric lead magnesium niobate-lead titanate substrate to a GaAs semiconductor membrane bonded on top are investigated using state-of-the-art x-ray diffraction (XRD) techniques and finite-element-method (FEM) simulations. Two different bonding techniques are studied, namely, gold-thermo-compression and polymer-based SU8 bonding. Our results show a much higher strain-transfer for the "soft" SU8 bonding in comparison to the "hard" bonding via gold-thermo-compression. A comparison between the XRD results and FEM simulations allows us to explain this unexpected result with the presence of complex interface structures between the different layers.

  18. Comparison of temporal and spectral scattering methods using acoustically large breast models derived from magnetic resonance images.

    PubMed

    Hesford, Andrew J; Tillett, Jason C; Astheimer, Jeffrey P; Waag, Robert C

    2014-08-01

    Accurate and efficient modeling of ultrasound propagation through realistic tissue models is important to many aspects of clinical ultrasound imaging. Simplified problems with known solutions are often used to study and validate numerical methods. Greater confidence in a time-domain k-space method and a frequency-domain fast multipole method is established in this paper by analyzing results for realistic models of the human breast. Models of breast tissue were produced by segmenting magnetic resonance images of ex vivo specimens into seven distinct tissue types. After confirming with histologic analysis by pathologists that the model structures mimicked in vivo breast, the tissue types were mapped to variations in sound speed and acoustic absorption. Calculations of acoustic scattering by the resulting model were performed on massively parallel supercomputer clusters using parallel implementations of the k-space method and the fast multipole method. The efficient use of these resources was confirmed by parallel efficiency and scalability studies using large-scale, realistic tissue models. Comparisons between the temporal and spectral results were performed in representative planes by Fourier transforming the temporal results. An RMS field error less than 3% throughout the model volume confirms the accuracy of the methods for modeling ultrasound propagation through human breast.
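    The kind of relative RMS field error used for such cross-validation can be sketched as follows (hypothetical fields on an invented grid; not the authors' solver output):

```python
import numpy as np

def relative_rms_error(field_a, field_b):
    """Relative RMS difference between two fields on the same grid:
    the kind of figure of merit behind a '< 3% RMS error' statement."""
    num = np.mean(np.abs(field_a - field_b) ** 2)
    den = np.mean(np.abs(field_b) ** 2)
    return float(np.sqrt(num / den))

# Hypothetical fields: a unit-magnitude reference plus 1% additive noise,
# standing in for the Fourier-transformed temporal and spectral results.
rng = np.random.default_rng(1)
ref = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=(64, 64)))
test_field = ref + 0.01 * rng.normal(size=(64, 64))
err = relative_rms_error(test_field, ref)
```

    Comparing the time-domain result after Fourier transformation against the frequency-domain result in representative planes reduces, numerically, to exactly this kind of normalized difference.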

  19. Detection of urban expansion in an urban-rural landscape with multitemporal QuickBird images

    PubMed Central

    Lu, Dengsheng; Hetrick, Scott; Moran, Emilio; Li, Guiying

    2011-01-01

    Accurately detecting urban expansion with remote sensing techniques is a challenge due to the complexity of urban landscapes. This paper explored methods for detecting urban expansion with multitemporal QuickBird images in Lucas do Rio Verde, Mato Grosso, Brazil. Different techniques, including image differencing, principal component analysis (PCA), and comparison of classified impervious surface images with the matched filtering method, were used to examine urbanization detection. An impervious surface image classified with the hybrid method was used to modify the urbanization detection results. As a comparison, the original multispectral image and segmentation-based mean-spectral images were used during the detection of urbanization. This research indicates that the comparison of classified impervious surface images with the matched filtering method provides the best change detection performance, followed by the image differencing method based on segmentation-based mean-spectral images. PCA was not a good method for urban change detection in this study. Shadows and high spectral variation within the impervious surfaces represent major challenges to the detection of urban expansion when high spatial resolution images are used. PMID:21799706

  20. INTERLABORATORY COMPARISON OF MASS SPECTROMETRIC METHODS FOR LEAD ISOTOPES AND TRACE ELEMENTS IN NIST SRM 1400 BONE ASH

    EPA Science Inventory

    The results of an interlaboratory comparison are reported for the lead isotope composition and for trace element concentrations in NIST SRM 1400 Bone Ash obtained using quadrupole and magnetic-sector inductively coupled plasma mass spectrometry (ICP-MS) and (for the Pb isotopes on...

  1. Unbiased Causal Inference from an Observational Study: Results of a Within-Study Comparison

    ERIC Educational Resources Information Center

    Pohl, Steffi; Steiner, Peter M.; Eisermann, Jens; Soellner, Renate; Cook, Thomas D.

    2009-01-01

    Adjustment methods such as propensity scores and analysis of covariance are often used for estimating treatment effects in nonexperimental data. Shadish, Clark, and Steiner used a within-study comparison to test how well these adjustments work in practice. They randomly assigned participating students to a randomized or nonrandomized experiment.…

  2. [Comparison of analgesic effect between locally vinegar-processed preparation of fresh rhizoma Corydalis and traditionally vinegar-processed rhizoma Corydalis].

    PubMed

    Liu, L; Li, G; Zhu, F; Wang, L; Wang, Y

    1990-11-01

    Hot plate and writhing methods were used in the comparison of the analgesic effect of vinegar-processed fresh tuber corydalis and the traditionally vinegar-processed Rhizoma Corydalis. The result shows that the effect of the former is stronger than that of the latter.

  3. Comparison of Columnar Water Vapor Measurements During The Fall 1997 ARM Intensive Observation Period: Optical Methods

    NASA Technical Reports Server (NTRS)

    Schmid, Beat; Michalsky, J.; Slater, D.; Barnard, J.; Halthore, R.; Liljegren, J.; Holben, B.; Eck, T.; Livingston, J.; Russell, P.; hide

    2000-01-01

    In the fall of 1997 the Atmospheric Radiation Measurement (ARM) program conducted an Intensive Observation Period (IOP) to study water vapor at its Southern Great Plains (SGP) site. Among the large number of instruments, four sun-tracking radiometers were present to measure the columnar water vapor (CWV). All four solar radiometers retrieve CWV by measuring solar transmittance in the 0.94-micrometer water vapor absorption band. As one of the steps in the CWV retrievals, the aerosol component is subtracted from the total transmittance in the 0.94-micrometer band. The aerosol optical depth comparisons among the same four radiometers are presented elsewhere. We have used three different methods to retrieve CWV. Without attempting to standardize on the same radiative transfer model and its underlying water vapor spectroscopy, we found the CWV to agree within 0.13 cm (rms) for CWV values ranging from 1 to 5 cm. Preliminary results obtained when using the same updated radiative transfer model with updated spectroscopy for all instruments will also be shown. Comparisons with microwave radiometer results will also be included.

  4. Binding SNOMED CT terms to archetype elements. Establishing a baseline of results.

    PubMed

    Berges, I; Bermudez, J; Illarramendi, A

    2015-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Managing Interoperability and Complexity in Health Systems". The proliferation of archetypes as a means to represent information of Electronic Health Records has raised the need of binding terminological codes - such as SNOMED CT codes - to their elements, in order to identify them unambiguously. However, the large size of the terminologies makes it difficult to perform this task manually. Our aim is to establish a baseline of results for the aforementioned problem by using off-the-shelf string comparison-based techniques, against which results from more complex techniques can be evaluated. Nine typed comparison methods were evaluated for binding using a set of 487 archetype elements. Their recall was calculated, and Friedman and Nemenyi tests were applied in order to assess whether any of the methods outperformed the others. Using the qGrams method along with the 'Text' information piece of archetype elements outperforms the other methods if a level of confidence of 90% is considered. A recall of 25.26% is obtained if just one SNOMED CT term is retrieved for each archetype element. This recall rises to 50.51% and 75.56% if 10 and 100 terms are retrieved respectively, that being a reduction of more than 99.99% of the SNOMED CT code set. The baseline has been established following the above-mentioned results. Moreover, it has been observed that although string comparison-based methods do not outperform more sophisticated techniques, they can still be an alternative for providing a reduced set of candidate terms for each archetype element, from which the ultimate term can be chosen later in the more-than-likely manual supervision task.
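
    The qGrams method scores a candidate term by the overlap between its character q-grams and those of the archetype element text. A minimal pure-Python sketch (q = 3, Jaccard overlap, '#' padding, and the toy term list are all illustrative assumptions; the paper's SNOMED CT binding pipeline is not reproduced):

```python
def qgrams(term, q=3):
    """Set of character q-grams of a padded, lower-cased term."""
    padded = "#" * (q - 1) + term.lower() + "#" * (q - 1)
    return {padded[i:i + q] for i in range(len(padded) - q + 1)}

def qgram_similarity(a, b, q=3):
    """Jaccard overlap of the two terms' q-gram sets (1.0 = identical sets)."""
    ga, gb = qgrams(a, q), qgrams(b, q)
    return len(ga & gb) / len(ga | gb)

def rank_candidates(element_text, terms, q=3):
    """Sort candidate terms by descending q-gram similarity to the element text."""
    return sorted(terms, key=lambda t: qgram_similarity(element_text, t, q),
                  reverse=True)

# Toy candidate list standing in for SNOMED CT terms.
best = rank_candidates("heart rate", ["Heart rate", "Hearing", "Body weight"])[0]
```

    Retrieving the top 10 or 100 ranked terms per element, as in the study, is then just a slice of the ranked list.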

  5. Comparison of calculation and simulation of evacuation in real buildings

    NASA Astrophysics Data System (ADS)

    Szénay, Martin; Lopušniak, Martin

    2018-03-01

    Each building must meet requirements for safe evacuation in order to prevent casualties; therefore, methods for evaluating evacuation are used when designing buildings. In this paper, calculation methods were tested on three real buildings. The testing used evacuation time calculation pursuant to Slovak standards and evacuation time calculation using the buildingExodus simulation software. If the calculation methods are suitably selected, taking into account the nature of the evacuation, and correct parameter values are entered, evacuation times almost identical to the simulation results can be obtained; the difference ranges from 1% to 27%.

  6. Comparison of classification algorithms for various methods of preprocessing radar images of the MSTAR base

    NASA Astrophysics Data System (ADS)

    Borodinov, A. A.; Myasnikov, V. V.

    2018-04-01

    The present work is devoted to comparing the accuracy of known classification algorithms in the task of recognizing local objects in radar images under various image preprocessing methods. Preprocessing involves speckle noise filtering and normalization of the object orientation in the image by the method of image moments and by a method based on the Hough transform. The following classification algorithms are compared: decision tree, support vector machine, AdaBoost, and random forest. Principal component analysis is used to reduce the dimension. The research is carried out on objects from the MSTAR radar image base. The paper presents the results of the conducted studies.

  7. Inner structure of the Puy de Dôme volcano: cross-comparison of geophysical models (ERT, gravimetry, muon imaging)

    NASA Astrophysics Data System (ADS)

    Portal, A.; Labazuy, P.; Lénat, J.-F.; Béné, S.; Boivin, P.; Busato, E.; Cârloganu, C.; Combaret, C.; Dupieux, P.; Fehr, F.; Gay, P.; Laktineh, I.; Miallier, D.; Mirabito, L.; Niess, V.; Vulpescu, B.

    2013-01-01

    Muon imaging of volcanoes and of geological structures in general is actively being developed by several groups in the world. It has the potential to provide 3-D density distributions with an accuracy of a few percent. At this stage of development, comparisons with established geophysical methods are useful to validate the method. An experiment has been carried out in 2011 and 2012 on a large trachytic dome, the Puy de Dôme volcano, to perform such a comparison of muon imaging with gravimetric tomography and 2-D electrical resistivity tomography. Here, we present the preliminary results for the last two methods. North-south and east-west resistivity profiles allow us to model the resistivity distribution down to the base of the dome. The modelling of the Bouguer anomaly provides models for the density distribution within the dome that are directly comparable with the results from the muon imaging. Our ultimate goal is to derive a model of the dome using the joint interpretation of all sets of data.

  8. Revealed and stated preference valuation and transfer: A within-sample comparison of water quality improvement values

    NASA Astrophysics Data System (ADS)

    Ferrini, Silvia; Schaafsma, Marije; Bateman, Ian

    2014-06-01

    Benefit transfer (BT) methods are becoming increasingly important for environmental policy, but the empirical findings regarding transfer validity are mixed. A novel valuation survey was designed to obtain both stated preference (SP) and revealed preference (RP) data concerning river water quality values from a large sample of households. Both dichotomous choice and payment card contingent valuation (CV) and travel cost (TC) data were collected. The resulting valuations were directly compared and used for BT analyses using both unit value and function transfer approaches. WTP estimates are found to pass the convergence validity test. BT results show that the CV data produce lower transfer errors (below 20% for both unit value and function transfer) than the TC data, especially when function transfer is used. Further, comparison of WTP estimates suggests that in all cases, differences between methods are larger than differences between study areas. Results show that when multiple studies are available, using welfare estimates from the same area but based on a different method consistently results in larger errors than transfers across space keeping the method constant.

  9. Comparison of soil sampling and analytical methods for asbestos at the Sumas Mountain Asbestos Site-Working towards a toolbox for better assessment.

    PubMed

    Wroble, Julie; Frederick, Timothy; Frame, Alicia; Vallero, Daniel

    2017-01-01

    Established soil sampling methods for asbestos are inadequate to support risk assessment and risk-based decision making at Superfund sites due to difficulties in detecting asbestos at low concentrations and difficulty in extrapolating soil concentrations to air concentrations. Environmental Protection Agency (EPA)'s Office of Land and Emergency Management (OLEM) currently recommends the rigorous process of Activity Based Sampling (ABS) to characterize site exposures. The purpose of this study was to compare three soil analytical methods and two soil sampling methods to determine whether one method, or combination of methods, would yield more reliable soil asbestos data than other methods. Samples were collected using both traditional discrete ("grab") samples and incremental sampling methodology (ISM). Analyses were conducted using polarized light microscopy (PLM), transmission electron microscopy (TEM) methods or a combination of these two methods. Data show that the fluidized bed asbestos segregator (FBAS) followed by TEM analysis could detect asbestos at locations that were not detected using other analytical methods; however, this method exhibited high relative standard deviations, indicating the results may be more variable than other soil asbestos methods. The comparison of samples collected using ISM versus discrete techniques for asbestos resulted in no clear conclusions regarding preferred sampling method. However, analytical results for metals clearly showed that measured concentrations in ISM samples were less variable than discrete samples.

  10. Comparison of soil sampling and analytical methods for asbestos at the Sumas Mountain Asbestos Site—Working towards a toolbox for better assessment

    PubMed Central

    2017-01-01

    Established soil sampling methods for asbestos are inadequate to support risk assessment and risk-based decision making at Superfund sites due to difficulties in detecting asbestos at low concentrations and difficulty in extrapolating soil concentrations to air concentrations. Environmental Protection Agency (EPA)’s Office of Land and Emergency Management (OLEM) currently recommends the rigorous process of Activity Based Sampling (ABS) to characterize site exposures. The purpose of this study was to compare three soil analytical methods and two soil sampling methods to determine whether one method, or combination of methods, would yield more reliable soil asbestos data than other methods. Samples were collected using both traditional discrete (“grab”) samples and incremental sampling methodology (ISM). Analyses were conducted using polarized light microscopy (PLM), transmission electron microscopy (TEM) methods or a combination of these two methods. Data show that the fluidized bed asbestos segregator (FBAS) followed by TEM analysis could detect asbestos at locations that were not detected using other analytical methods; however, this method exhibited high relative standard deviations, indicating the results may be more variable than other soil asbestos methods. The comparison of samples collected using ISM versus discrete techniques for asbestos resulted in no clear conclusions regarding preferred sampling method. However, analytical results for metals clearly showed that measured concentrations in ISM samples were less variable than discrete samples. PMID:28759607

  11. A comparison of moment-based methods of estimation for the log Pearson type 3 distribution

    NASA Astrophysics Data System (ADS)

    Koutrouvelis, I. A.; Canavos, G. C.

    2000-06-01

    The log Pearson type 3 distribution is a very important model in statistical hydrology, especially for modeling annual flood series. In this paper we compare the various methods based on moments for estimating quantiles of this distribution. Besides the methods of direct and mixed moments which were found most successful in previous studies and the well-known indirect method of moments, we develop generalized direct moments and generalized mixed moments methods and a new method of adaptive mixed moments. The last method chooses the orders of two moments for the original observations by utilizing information contained in the sample itself. The results of Monte Carlo experiments demonstrated the superiority of this method in estimating flood events of high return periods when a large sample is available and in estimating flood events of low return periods regardless of the sample size. In addition, a comparison of simulation and asymptotic results shows that the adaptive method may be used for the construction of meaningful confidence intervals for design events based on the asymptotic theory even with small samples. The simulation results also point to the specific members of the class of generalized moments estimates which maintain small values for bias and/or mean square error.
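
    The moment relations behind the indirect method of moments can be made concrete. For a Pearson type 3 variable y = τ + βG with G ~ Gamma(α, 1): mean = τ + αβ, variance = αβ², skew = 2/√α (for β > 0). A minimal pure-Python sketch that recovers the parameters of log-transformed data from its first three sample moments (this is the standard textbook parameterization; the function names and synthetic test values are illustrative, not taken from the paper):

```python
import math
import random

def sample_moments(xs):
    """Mean, (biased) variance, and skewness coefficient of a sample."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return mean, m2, m3 / m2 ** 1.5

def fit_pearson3(ys):
    """Moment-based fit of y = tau + beta * Gamma(alpha, 1), inverting
    skew = 2/sqrt(alpha), var = alpha*beta**2, mean = tau + alpha*beta."""
    mean, var, skew = sample_moments(ys)
    alpha = (2.0 / skew) ** 2
    beta = math.copysign(math.sqrt(var / alpha), skew)
    tau = mean - alpha * beta
    return alpha, beta, tau

# Synthetic check: draw from a known Pearson type 3 (alpha=4, beta=0.5, tau=2).
random.seed(1)
ys = [2.0 + 0.5 * random.gammavariate(4.0, 1.0) for _ in range(20000)]
alpha_hat, beta_hat, tau_hat = fit_pearson3(ys)
```

    In the log Pearson type 3 setting, `ys` would be the logarithms of the annual flood series; quantile estimation from the fitted parameters, and the generalized and adaptive variants the paper develops, are not shown here.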

  12. Magnetic Field Suppression of Flow in Semiconductor Melt

    NASA Technical Reports Server (NTRS)

    Fedoseyev, A. I.; Kansa, E. J.; Marin, C.; Volz, M. P.; Ostrogorsky, A. G.

    2000-01-01

    One of the most promising approaches for the reduction of convection during the crystal growth of conductive melts (semiconductor crystals) is the application of magnetic fields. Current technology allows experimentation with very intense static fields (up to 80 kGauss), for which nearly convection-free results are expected from simple scaling analysis in stabilized systems (vertical Bridgman method with axial magnetic field). However, contradictory experimental results have been obtained. Computational methods are, therefore, a fundamental tool in understanding the phenomena occurring during the solidification of semiconductor materials. Moreover, effects like the bending of the isomagnetic lines, different aspect ratios, and misalignments between the directions of the gravity and magnetic field vectors cannot be analyzed with analytical methods. The earliest numerical results reached contradictory conclusions and could not explain the experimental results. Although the generated flows are extremely low, the computational task is complicated by the thin boundary layers; this is one of the reasons for the discrepancies among reported numerical studies. Modeling of these magnetically damped crystal growth experiments requires advanced numerical methods. We used, for comparison, three different approaches to obtain the solution of the problem of thermal convection flows: (1) a spectral method in a spectral superelement implementation, (2) a finite element method with regularization for boundary layers, and (3) the multiquadric method, a novel method with global radial basis functions that is proven to have exponential convergence. The results obtained by these three methods are presented for a wide range of Rayleigh and Hartmann numbers. Comparison and discussion of accuracy, efficiency, reliability and agreement with experimental results will be presented as well.

  13. Final report of the SIM.QM-S8 supplementary comparison, trace metals in drinking water

    NASA Astrophysics Data System (ADS)

    Yang, Lu; Nadeau, Kenny; Gedara Pihillagawa, Indu; Meija, Juris; Mester, Zoltan; Napoli, Romina; Pérez Zambra, Ramiro; Ferreira, Elizabeth; Alejandro Ahumada Forigua, Diego; Abella Gamba, Johanna Paola

    2018-01-01

    After completing the supplementary comparison SIM.QM-S7, the National Metrology Institute of Colombia (NMIC) requested that the National Research Council of Canada (NRC) coordinate a subsequent bilateral comparison, because NMIC considered its results in SIM.QM-S7 unrepresentative of its standards. In this context, NRC agreed to coordinate this bilateral comparison with the aim of demonstrating measurement capabilities for trace elements in fresh water. Participants included NMIC and LATU. No measurement method was prescribed by the coordinating laboratory; NMIs therefore used measurement methods of their choice, although the majority used ICP-MS. This SIM.QM-S8 supplementary comparison provides NMIs with the evidence needed for CMC claims for trace elements in fresh waters and similar matrices. Main text: To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  14. Calculation of unsteady airfoil loads with and without flap deflection at -90 degrees incidence

    NASA Technical Reports Server (NTRS)

    Stremel, Paul M.

    1991-01-01

    A method has been developed for calculating the viscous flow about airfoils with and without deflected flaps at -90 deg incidence. This unique method provides for the direct solution of the incompressible Navier-Stokes equations by means of a fully coupled implicit technique. The solution is calculated on a body-fitted computational mesh incorporating a staggered grid method. The vorticity is determined at the node points, and the velocity components are defined at the mesh-cell sides. The staggered-grid orientation provides for accurate representation of vorticity at the node points and for the conservation of mass at the mesh-cell centers. The method provides for the direct solution of the flow field and satisfies the conservation of mass to machine zero at each time-step. The results of the present analysis and experimental results obtained for a XV-15 airfoil are compared. The comparisons indicate that the calculated drag reduction caused by flap deflection and the calculated average surface pressure are in excellent agreement with the measured results. Comparisons of the numerical results of the present method for several airfoils demonstrate the significant influence of airfoil curvature and flap deflection on the predicted download.

  15. Examining mixing methods in an evaluation of a smoking cessation program.

    PubMed

    Betzner, Anne; Lawrenz, Frances P; Thao, Mao

    2016-02-01

    Three different methods were used in an evaluation of a smoking cessation study: surveys, focus groups, and phenomenological interviews. The results of each method were analyzed separately and then combined using both a pragmatic and dialectic stance to examine the effects of different approaches to mixing methods. Results show that the further apart the methods are philosophically, the more diverse the findings. Comparisons of decision maker opinions and costs of the different methods are provided along with recommendations for evaluators' uses of different methods. Copyright © 2015. Published by Elsevier Ltd.

  16. Comparison of results of an obstacle resolving microscale model with wind tunnel data

    NASA Astrophysics Data System (ADS)

    Grawe, David; Schlünzen, K. Heinke; Pascheke, Frauke

    2013-11-01

    The microscale transport and stream model MITRAS has been improved and a new technique has been implemented to improve numerical stability for complex obstacle configurations. Results of the updated version have been compared with wind tunnel data using an evaluation method that has been established for simple obstacle configurations. MITRAS is a part of the M-SYS model system for the assessment of ambient air quality. A comparison of model results for the flow field against quality ensured wind tunnel data has been carried out for both idealised and realistic test cases. Results of the comparison show a very good agreement of the wind field for most test cases and identify areas of possible improvement of the model. The evaluated MITRAS results can be used as input data for the M-SYS microscale chemistry model MICTM. This paper describes how such a comparison can be carried out for simple as well as realistic obstacle configurations and what difficulties arise.

  17. Evaluating methods of correcting for multiple comparisons implemented in SPM12 in social neuroscience fMRI studies: an example from moral psychology.

    PubMed

    Han, Hyemin; Glenn, Andrea L

    2018-06-01

    In fMRI research, the goal of correcting for multiple comparisons is to identify areas of activity that reflect true effects, and thus would be expected to replicate in future studies. Finding an appropriate balance between minimizing false positives (Type I error) while not being so stringent that true effects are omitted (Type II error) can be challenging. Furthermore, the advantages and disadvantages of these types of errors may differ for different areas of study. In many areas of social neuroscience that involve complex processes and considerable individual differences, such as the study of moral judgment, effects are typically smaller and statistical power weaker, leading to the suggestion that less stringent corrections allowing more sensitivity may be beneficial, albeit at the cost of more false positives. Using moral judgment fMRI data, we evaluated four commonly used methods for multiple comparison correction implemented in Statistical Parametric Mapping 12 by examining which method produced the most precise overlap with results from a meta-analysis of relevant studies and with results from nonparametric permutation analyses. We found that voxelwise thresholding with familywise error correction based on Random Field Theory provides a more precise overlap (i.e., without omitting true regions or encompassing too many additional regions) than clusterwise thresholding, Bonferroni correction, or false discovery rate correction methods.
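
    SPM12's Random Field Theory FWE correction is not easily reproduced in a few lines, but two of the simpler procedures compared above can be. A minimal sketch of Bonferroni and step-up FDR correction over a list of p-values (the FDR shown is the classical Benjamini-Hochberg version, not SPM12's topological implementation; function names and example p-values are illustrative):

```python
def bonferroni(pvals, alpha=0.05):
    """Reject H_i when p_i <= alpha / m (familywise error control)."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure (false discovery rate control).
    Find the largest rank k with p_(k) <= k*alpha/m, then reject every
    hypothesis whose p-value is at most p_(k)."""
    m = len(pvals)
    k, thresh = 0, 0.0
    for rank, p in enumerate(sorted(pvals), 1):
        if p <= rank * alpha / m:
            k, thresh = rank, p
    if k == 0:
        return [False] * m
    return [p <= thresh for p in pvals]

# Illustrative p-values: Bonferroni keeps only the smallest, BH keeps all five.
pvals = [0.01, 0.02, 0.03, 0.04, 0.05]
bonf_rejections = bonferroni(pvals)
bh_rejections = benjamini_hochberg(pvals)
```

    The example makes the trade-off in the abstract concrete: at the same nominal alpha, FDR control rejects more tests, gaining sensitivity at the cost of admitting a controlled fraction of false positives.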

  18. First Interlaboratory Comparison on Calibration of Temperature-Controlled Enclosures in Turkey

    NASA Astrophysics Data System (ADS)

    Uytun, A.; Kalemci, M.

    2017-11-01

    The number of accredited laboratories in the field of calibration of temperature-controlled enclosures has been increasing in Turkey. One of the main criteria demonstrating the competence of a calibration laboratory is successful participation in interlaboratory comparisons. Therefore, the TUBITAK UME Temperature Laboratory organized the first interlaboratory comparison on "Calibration of Temperature-Controlled Enclosures" in Turkey as pilot laboratory between January and November 2013. Forty accredited laboratories which provide routine calibration services to industry in this field participated in the comparison. The standards used during the comparison were a climatic chamber for the measurements at -40 °C, -20 °C, 40 °C and 100 °C, and an oven for the measurements at 200 °C. The protocol of the comparison was prepared considering the guide EURAMET cg-20 and the BS EN/IEC 60068-3-5 and 60068-3-11 standards. During the comparison measurements, each participant was free to choose the most convenient calibration points, in terms of its accreditation scope, among the values mentioned above, and carried out on-site measurements at UME. The details and the results of this comparison are given in the paper. The statistical consistency of each laboratory's results with its reported uncertainties was assessed using En values, which were calculated for all measurement results from the results of the pilot and participating laboratories.
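
    The En assessment mentioned above conventionally normalizes a laboratory's deviation from the reference value by the combined expanded (k = 2) uncertainties: En = (x_lab - x_ref) / sqrt(U_lab² + U_ref²), with |En| <= 1 taken as satisfactory. A minimal sketch (the example readings and uncertainties are illustrative, not from this comparison):

```python
import math

def en_value(x_lab, x_ref, u_lab_exp, u_ref_exp):
    """En score: deviation normalized by the combined expanded (k=2) uncertainties."""
    return (x_lab - x_ref) / math.sqrt(u_lab_exp ** 2 + u_ref_exp ** 2)

def is_satisfactory(en):
    """|En| <= 1 is the usual pass criterion in proficiency testing."""
    return abs(en) <= 1.0

# Illustrative values: a participant reading 10.2 against a reference of 10.0.
en_pass = en_value(10.2, 10.0, 0.3, 0.4)    # 0.2 / 0.5 = 0.4 -> satisfactory
en_fail = en_value(101.5, 100.0, 0.5, 0.5)  # ~2.12 -> not satisfactory
```

    A score near zero indicates agreement well within the stated uncertainties; |En| > 1 suggests the result or its uncertainty claim needs review.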

  19. Comparison of fuzzy AHP and fuzzy TODIM methods for landfill location selection.

    PubMed

    Hanine, Mohamed; Boutkhoum, Omar; Tikniouine, Abdessadek; Agouti, Tarik

    2016-01-01

    Landfill location selection is a multi-criteria decision problem and has a strategic importance for many regions. The conventional methods for landfill location selection are insufficient in dealing with the vague or imprecise nature of linguistic assessment. To resolve this problem, fuzzy multi-criteria decision-making methods are proposed. The aim of this paper is to use fuzzy TODIM (the acronym for Interactive and Multi-criteria Decision Making in Portuguese) and the fuzzy analytic hierarchy process (AHP) methods for the selection of landfill location. The proposed methods have been applied to a landfill location selection problem in the region of Casablanca, Morocco. After determining the criteria affecting the landfill location decisions, fuzzy TODIM and fuzzy AHP methods are applied to the problem and results are presented. The comparisons of these two methods are also discussed.

  20. A Comparison of Two-Group Classification Methods

    ERIC Educational Resources Information Center

    Holden, Jocelyn E.; Finch, W. Holmes; Kelley, Ken

    2011-01-01

    The statistical classification of "N" individuals into "G" mutually exclusive groups when the actual group membership is unknown is common in the social and behavioral sciences. The results of such classification methods often have important consequences. Among the most common methods of statistical classification are linear discriminant analysis,…

  1. Evaluation of the clinical sensitivity for the quantification of human immunodeficiency virus type 1 RNA in plasma: Comparison of the new COBAS TaqMan HIV-1 with three current HIV-RNA assays--LCx HIV RNA quantitative, VERSANT HIV-1 RNA 3.0 (bDNA) and COBAS AMPLICOR HIV-1 Monitor v1.5.

    PubMed

    Katsoulidou, Antigoni; Petrodaskalaki, Maria; Sypsa, Vana; Papachristou, Eleni; Anastassopoulou, Cleo G; Gargalianos, Panagiotis; Karafoulidou, Anastasia; Lazanas, Marios; Kordossis, Theodoros; Andoniadou, Anastasia; Hatzakis, Angelos

    2006-02-01

    The COBAS TaqMan HIV-1 test (Roche Diagnostics) was compared with the LCx HIV RNA quantitative assay (Abbott Laboratories), the Versant HIV-1 RNA 3.0 (bDNA) assay (Bayer) and the COBAS Amplicor HIV-1 Monitor v1.5 test (Roche Diagnostics), using plasma samples of various viral load levels from HIV-1-infected individuals. In the comparison of TaqMan with LCx, TaqMan identified as positive 77.5% of the 240 samples versus 72.1% identified by LCx assay, while their overall agreement was 94.6% and the quantitative results of samples that were positive by both methods were strongly correlated (r=0.91). Similarly, in the comparison of TaqMan with bDNA 3.0, both methods identified 76.3% of the 177 samples as positive, while their overall agreement was 95.5% and the quantitative results of samples that were positive by both methods were strongly correlated (r=0.95). Finally, in the comparison of TaqMan with Monitor v1.5, TaqMan identified 79.5% of the 156 samples as positive versus 80.1% identified by Monitor v1.5, while their overall agreement was 95.5% and the quantitative results of samples that were positive by both methods were strongly correlated (r=0.96). In conclusion, the new COBAS TaqMan HIV-1 test showed excellent agreement with other widely used commercially available tests for the quantitation of HIV-1 viral load.

  2. Alternative methods to determine headwater benefits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Y.S.; Perlack, R.D.; Sale, M.J.

    1997-11-10

    In 1992, the Federal Energy Regulatory Commission (FERC) began using a Flow Duration Analysis (FDA) methodology to assess headwater benefits in river basins where use of the Headwater Benefits Energy Gains (HWBEG) model may not result in significant improvements in modeling accuracy. The purpose of this study is to validate the accuracy and appropriateness of the FDA method for determining energy gains in less complex basins. This report presents the results of Oak Ridge National Laboratory's (ORNL's) validation of the FDA method. The validation is based on a comparison of energy gains using the FDA method with energy gains calculated using the HWBEG model. Comparisons of energy gains are made on a daily and monthly basis for a complex river basin (the Alabama River Basin) and a basin that is considered relatively simple hydrologically (the Stanislaus River Basin). In addition to validating the FDA method, ORNL was asked to suggest refinements and improvements to the FDA method, which were carried out using the James River Basin as a test case.

  3. Accuracy of a simplified method for shielded gamma-ray skyshine sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bassett, M.S.; Shultis, J.K.

    1989-11-01

    Rigorous transport or Monte Carlo methods for estimating far-field gamma-ray skyshine doses generally are computationally intensive. Consequently, several simplified techniques such as point-kernel methods and methods based on beam response functions have been proposed. For unshielded skyshine sources, these simplified methods have been shown to be quite accurate from comparisons to benchmark problems and to benchmark experimental results. For shielded sources, the simplified methods typically use exponential attenuation and photon buildup factors to describe the effect of the shield. However, the energy and directional redistribution of photons scattered in the shield is usually ignored, i.e., scattered photons are assumed to emerge from the shield with the same energy and direction as the uncollided photons. The accuracy of this shield treatment is largely unknown due to the paucity of benchmark results for shielded sources. In this paper, the validity of such a shield treatment is assessed by comparison to a composite method, which accurately calculates the energy and angular distribution of photons penetrating the shield.

  4. Fluorescence background removal method for biological Raman spectroscopy based on empirical mode decomposition.

    PubMed

    Leon-Bejarano, Maritza; Dorantes-Mendez, Guadalupe; Ramirez-Elias, Miguel; Mendez, Martin O; Alba, Alfonso; Rodriguez-Leyva, Ildefonso; Jimenez, M

    2016-08-01

    Raman spectroscopy of biological tissue presents a fluorescence background, an undesirable effect that generates false Raman intensities. This paper proposes the application of the Empirical Mode Decomposition (EMD) method to baseline correction. EMD is a suitable approach since it is an adaptive signal-processing method for nonlinear and non-stationary signal analysis that, unlike polynomial methods, does not require parameter selection. EMD performance was assessed on synthetic Raman spectra with different signal-to-noise ratios (SNR). The correlation coefficient between the synthetic Raman spectra and those recovered after EMD denoising was higher than 0.92. Additionally, twenty Raman spectra from skin were used to evaluate EMD performance, and the results were compared with the Vancouver Raman algorithm (VRA); the comparison yielded a mean square error (MSE) of 0.001554. The high correlation coefficient on synthetic spectra and the low MSE in the comparison between EMD and VRA suggest that EMD could be an effective method to remove the fluorescence background in biological Raman spectra.

  5. Results of the first provisional technical secretariat interlaboratory comparison test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stuff, J.R.; Hoffland, L.

    1995-06-01

    The principal task of this laboratory in the first Provisional Technical Secretariat (PTS) Interlaboratory Comparison Test was to verify and test the extraction and preparation procedures outlined in the Recommended Operating Procedures for Sampling and Analysis in the Verification of Chemical Disarmament in addition to our laboratory extraction methods and our laboratory analysis methods. Sample preparation began on 16 May 1994 and analysis was completed on 12 June 1994. The analytical methods used included NMR ({sup 1}H and {sup 31}P) GC/AED, GC/MS (EI and methane CI), GC/IRD, HPLC/IC, HPLC/TSP/MS, MS/MS(Electrospray), and CZE.

  6. Evaluation of radiation loading on finite cylindrical shells using the fast Fourier transform: A comparison with direct numerical integration.

    PubMed

    Liu, S X; Zou, M S

    2018-03-01

    The radiation loading on a vibrating finite cylindrical shell is conventionally evaluated through direct numerical integration (DNI). An alternative strategy based on the fast Fourier transform algorithm is put forward in this work, starting from the general expression of the radiation impedance. To check the feasibility and efficiency of the proposed method, a comparison with DNI is presented through numerical cases. The results obtained using the present method agree well with those calculated by DNI. More importantly, the proposed strategy significantly reduces computation time compared with the conventional approach of straightforward numerical integration.
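    The speed-up reported above has the same flavor as the classic convolution example below: a discretized integral evaluated term by term costs O(N²), while the same quantity evaluated through the FFT costs O(N log N). This toy sketch (not the paper's radiation-impedance code) simply demonstrates that the two routes agree numerically.

```python
import numpy as np

def conv_direct(a, b):
    """O(N^2) evaluation of the discrete convolution sum."""
    out = np.zeros(len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def conv_fft(a, b):
    """Same convolution via the FFT: O(N log N)."""
    n = len(a) + len(b) - 1
    size = 1 << (n - 1).bit_length()        # zero-pad to a power of two
    return np.fft.irfft(np.fft.rfft(a, size) * np.fft.rfft(b, size), size)[:n]

rng = np.random.default_rng(1)
a, b = rng.normal(size=200), rng.normal(size=200)
```

    For these 200-point inputs the two results agree to floating-point accuracy; the gap in cost grows quickly with N.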

  7. A new comparison method for dew-point generators

    NASA Astrophysics Data System (ADS)

    Heinonen, Martti

    1999-12-01

    A new method for comparing dew-point generators was developed at the Centre for Metrology and Accreditation (MIKES). In this method, the generators participating in a comparison are compared with a transportable saturator unit using a dew-point comparator. The method was tested by constructing a test apparatus and comparing it with the MIKES primary dew-point generator several times over the dew-point temperature range from -40 to +75 °C. The expanded uncertainty (k = 2) of the apparatus was estimated to be between 0.05 and 0.07 °C, and the difference between the comparator system and the generator is well within these limits. In particular, all of the results obtained in the range below 0 °C are within ±0.03 °C. It is concluded that a new type of transfer standard with characteristics well suited to dew-point comparisons can be developed on the basis of the principles presented in this paper.

  8. Use of petroleum-based correlations and estimation methods for synthetic fuels

    NASA Technical Reports Server (NTRS)

    Antoine, A. C.

    1980-01-01

    Correlations of hydrogen content with aromatics content, heat of combustion, and smoke point are derived for some synthetic fuels prepared from oil and coal syncrudes. Comparing the results of the aromatics content with correlations derived for petroleum fuels shows that the shale-derived fuels fit the petroleum-based correlations, but the coal-derived fuels do not. The correlations derived for heat of combustion and smoke point are comparable to some found for petroleum-based correlations. Calculated values of hydrogen content and of heat of combustion are obtained for the synthetic fuels by use of ASTM estimation methods. Comparisons of the measured and calculated values show biases in the equations that exceed the critical statistics values. Comparison of the measured hydrogen content by the standard ASTM combustion method with that by a nuclear magnetic resonance (NMR) method shows a decided bias. The comparison of the calculated and measured NMR hydrogen contents shows a difference similar to that found with petroleum fuels.

  9. Network meta-analysis combining individual patient and aggregate data from a mixture of study designs with an application to pulmonary arterial hypertension.

    PubMed

    Thom, Howard H Z; Capkun, Gorana; Cerulli, Annamaria; Nixon, Richard M; Howard, Luke S

    2015-04-12

    Network meta-analysis (NMA) is a methodology for indirectly comparing, and strengthening direct comparisons of, two or more treatments for the management of disease by combining evidence from multiple studies. It is sometimes not possible to perform treatment comparisons because evidence networks restricted to randomized controlled trials (RCTs) may be disconnected. We propose a Bayesian NMA model that allows single-arm, before-and-after observational studies to be included to complete these disconnected networks. We illustrate the method with an indirect comparison of treatments for pulmonary arterial hypertension (PAH). Our method uses a random effects model for placebo improvements to include single-arm observational studies in a general NMA. Building on recent research for binary outcomes, we develop a covariate-adjusted continuous-outcome NMA model that combines individual patient data (IPD) and aggregate data from two-arm RCTs with the single-arm observational studies. We apply this model to a complex comparison of therapies for PAH, combining IPD from a phase-III RCT of imatinib as add-on therapy for PAH with aggregate data from RCTs and single-arm observational studies, both identified by a systematic review. Through the inclusion of observational studies, our method allowed the comparison of imatinib as add-on therapy for PAH with other treatments. This comparison had not previously been possible because of the limited RCT evidence available. However, the credible intervals of our posterior estimates were wide, so the overall results were inconclusive. The comparison should be treated as exploratory and should not be used to guide clinical practice. Our method for the inclusion of single-arm observational studies allows indirect comparisons that had previously not been possible with incomplete networks composed solely of available RCTs. We also built on many recent innovations to enable researchers to use both aggregate data and IPD. This method could be used in similar situations where treatment comparisons have not been possible due to restriction to RCT evidence and where a mixture of aggregate data and IPD is available.
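    The full Bayesian NMA model is beyond a short sketch, but the basic evidence-combination step that any meta-analysis builds on is fixed-effect inverse-variance pooling. The function below is a generic illustration of that step only, not the authors' random-effects PAH model.

```python
import math

def inverse_variance_pool(effects, std_errors):
    """Fixed-effect meta-analytic pooling: weight each study's effect
    estimate by the inverse of its variance."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se
```

    With two equally precise studies the pooled estimate is their mean and the standard error shrinks by a factor of √2; a less precise study is down-weighted accordingly.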

  10. Comparison of the Modified-Hodge test, Carba NP test, and carbapenem inactivation method as screening methods for carbapenemase-producing Enterobacteriaceae.

    PubMed

    Yamada, Kageto; Kashiwa, Machiko; Arai, Katsumi; Nagano, Noriyuki; Saito, Ryoichi

    2016-09-01

    We compared three screening methods for carbapenemase-producing Enterobacteriaceae. While the Modified-Hodge test and Carba NP test produced false-negative results for OXA-48-like and mucoid NDM producers, the carbapenem inactivation method (CIM) showed positive results for these isolates. Although the CIM required cultivation time, it is well suited for general clinical laboratories. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Comparison of mixed-mode stress-intensity factors obtained through displacement correlation, J-integral formulation, and modified crack-closure integral

    NASA Astrophysics Data System (ADS)

    Bittencourt, Tulio N.; Barry, Ahmabou; Ingraffea, Anthony R.

    This paper presents a comparison among stress-intensity factors for mixed-mode two-dimensional problems obtained through three different approaches: displacement correlation, the J-integral, and the modified crack-closure integral. All of these procedures involve only one analysis step and are incorporated in the post-processor of a finite element computer code for fracture mechanics analysis (FRANC). Results are presented for a problem with a closed-form solution under mixed-mode conditions. The accuracy of the described methods is then discussed and analyzed in the framework of their numerical results. The influence of the differences among the three methods on the predicted crack trajectory of general problems is also discussed.

  12. Integration of new biological and physical retrospective dosimetry methods into EU emergency response plans - joint RENEB and EURADOS inter-laboratory comparisons.

    PubMed

    Ainsbury, Elizabeth; Badie, Christophe; Barnard, Stephen; Manning, Grainne; Moquet, Jayne; Abend, Michael; Antunes, Ana Catarina; Barrios, Lleonard; Bassinet, Celine; Beinke, Christina; Bortolin, Emanuela; Bossin, Lily; Bricknell, Clare; Brzoska, Kamil; Buraczewska, Iwona; Castaño, Carlos Huertas; Čemusová, Zina; Christiansson, Maria; Cordero, Santiago Mateos; Cosler, Guillaume; Monaca, Sara Della; Desangles, François; Discher, Michael; Dominguez, Inmaculada; Doucha-Senf, Sven; Eakins, Jon; Fattibene, Paola; Filippi, Silvia; Frenzel, Monika; Georgieva, Dimka; Gregoire, Eric; Guogyte, Kamile; Hadjidekova, Valeria; Hadjiiska, Ljubomira; Hristova, Rositsa; Karakosta, Maria; Kis, Enikő; Kriehuber, Ralf; Lee, Jungil; Lloyd, David; Lumniczky, Katalin; Lyng, Fiona; Macaeva, Ellina; Majewski, Matthaeus; Vanda Martins, S; McKeever, Stephen W S; Meade, Aidan; Medipally, Dinesh; Meschini, Roberta; M'kacher, Radhia; Gil, Octávia Monteiro; Montero, Alegria; Moreno, Mercedes; Noditi, Mihaela; Oestreicher, Ursula; Oskamp, Dominik; Palitti, Fabrizio; Palma, Valentina; Pantelias, Gabriel; Pateux, Jerome; Patrono, Clarice; Pepe, Gaetano; Port, Matthias; Prieto, María Jesús; Quattrini, Maria Cristina; Quintens, Roel; Ricoul, Michelle; Roy, Laurence; Sabatier, Laure; Sebastià, Natividad; Sholom, Sergey; Sommer, Sylwester; Staynova, Albena; Strunz, Sonja; Terzoudi, Georgia; Testa, Antonella; Trompier, Francois; Valente, Marco; Hoey, Olivier Van; Veronese, Ivan; Wojcik, Andrzej; Woda, Clemens

    2017-01-01

    RENEB, 'Realising the European Network of Biodosimetry and Physical Retrospective Dosimetry,' is a network for research and emergency-response mutual assistance in biodosimetry within the EU. Within this extremely active network, a number of new dosimetry methods have recently been proposed or developed. There is a requirement to test and/or validate these candidate techniques, and inter-comparison exercises are a well-established means of such validation. The authors present details of inter-comparisons of four such new methods: dicentric chromosome analysis including telomere and centromere staining; the gene expression assay carried out in whole blood; Raman spectroscopy on blood lymphocytes; and detection of radiation-induced thermoluminescent signals in glass screens taken from mobile phones. In general, the results show good agreement between the laboratories and methods within the expected levels of uncertainty, demonstrating considerable potential for each of the candidate techniques. Further work is required before the new methods can be included within the suite of reliable dosimetry methods available to RENEB partners and others in routine and emergency response scenarios.

  13. Comparative analysis of methods for detecting interacting loci

    PubMed Central

    2011-01-01

    Background Interactions among genetic loci are believed to play an important role in disease risk. While many methods have been proposed for detecting such interactions, their relative performance remains largely unclear, mainly because different data sources, detection performance criteria, and experimental protocols were used in the papers introducing these methods and in subsequent studies. Moreover, there have been very few studies strictly focused on comparison of existing methods. Given the importance of detecting gene-gene and gene-environment interactions, a rigorous, comprehensive comparison of the performance and limitations of available interaction detection methods is warranted. Results We report a comparison of eight representative methods, of which seven were specifically designed to detect interactions among single nucleotide polymorphisms (SNPs), with the last being a popular main-effect testing method used as a baseline for performance evaluation. The selected methods, multifactor dimensionality reduction (MDR), full interaction model (FIM), information gain (IG), Bayesian epistasis association mapping (BEAM), SNP harvester (SH), maximum entropy conditional probability modeling (MECPM), logistic regression with an interaction term (LRIT), and logistic regression (LR), were compared on a large number of simulated data sets, each consistent with complex disease models and embedding multiple sets of interacting SNPs, under different interaction models. The assessment criteria included several relevant detection power measures, family-wise type I error rate, and computational complexity. There are several important results from this study. First, while some SNPs in interactions with strong effects are successfully detected, most of the methods miss many interacting SNPs at an acceptable rate of false positives; in this study, the best-performing method was MECPM. Second, the statistical significance assessment criteria used by some of the methods to control the type I error rate are quite conservative, thereby limiting their power and making it difficult to compare them fairly. Third, as expected, power varies for different models and as a function of penetrance, minor allele frequency, linkage disequilibrium and marginal effects. Fourth, the analytical relationships between power and these factors are derived, aiding in the interpretation of the study results. Fifth, for these methods the magnitude of the main effect influences the power of the tests. Sixth, most methods can detect some ground-truth SNPs but have modest power to detect the whole set of interacting SNPs. Conclusion This comparison study provides new insights into the strengths and limitations of current methods for detecting interacting loci. This study, along with the freely available simulation tools we provide, should help support the development of improved methods. The simulation tools are available at: http://code.google.com/p/simulation-tool-bmc-ms9169818735220977/downloads/list. PMID:21729295
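    Of the compared methods, information gain (IG) is the easiest to sketch: it scores a SNP pair by how much the pair tells you about the phenotype beyond what the single markers do. Below is a minimal, hypothetical illustration using plug-in mutual information on a pure XOR interaction (the textbook case with no marginal effects); it is not the paper's benchmarking code.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired samples."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# XOR phenotype: neither SNP is informative alone; the pair is fully informative.
snp1 = [0, 0, 1, 1]
snp2 = [0, 1, 0, 1]
pheno = [a ^ b for a, b in zip(snp1, snp2)]
gain = (mutual_information(list(zip(snp1, snp2)), pheno)
        - mutual_information(snp1, pheno)
        - mutual_information(snp2, pheno))
```

    Here each single-marker mutual information is 0 bits while the pair carries 1 bit, which is exactly the situation that defeats main-effect tests such as plain logistic regression.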

  14. Comparison of 3 in vivo methods for assessment of alcohol-based hand rubs.

    PubMed

    Edmonds-Wilson, Sarah; Campbell, Esther; Fox, Kyle; Macinga, David

    2015-05-01

    Alcohol-based hand rubs (ABHRs) are the primary method of hand hygiene in health-care settings. ICPs are increasingly assessing ABHR product efficacy data as improved products and test methods are developed. As a result, ICPs need better tools and recommendations for assessing and comparing ABHRs. Two ABHRs (70% ethanol) were tested according to 3 in vivo methods approved by ASTM International: E1174, E2755, and E2784. Log10 reductions were measured after a single test product use and after 10 consecutive uses at an application volume of 2 mL. The test method used had a significant influence on ABHR efficacy; however, in this study the product format (gel or foam) did not significantly influence efficacy. In addition, for all test methods, log10 reductions obtained after a single application were not predictive of results after 10 applications. The choice of test method can significantly influence efficacy results. Therefore, when assessing antimicrobial efficacy data of hand hygiene products, ICPs should pay close attention to the test method used and ensure that product comparisons are made head to head in the same study using the same test methodology. Copyright © 2015 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  15. Information-theoretic indices usage for the prediction and calculation of octanol-water partition coefficient.

    PubMed

    Persona, Marek; Kutarov, Vladimir V; Kats, Boris M; Persona, Andrzej; Marczewska, Barbara

    2007-01-01

    The paper describes a new method for predicting the octanol-water partition coefficient, based on molecular graph theory. The results obtained using the new method correlate well with experimental values, and were compared with those obtained by ten other structure-correlation methods. The comparison shows that graph theory can be very useful in structure-correlation research.
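    The abstract does not name the specific indices used, but a classic topological descriptor in logP-correlation work of this kind is the Wiener index: the sum of shortest-path distances over all atom pairs of the hydrogen-suppressed molecular graph. The sketch below is a generic illustration of computing such a graph descriptor, not the authors' method.

```python
from collections import deque

def wiener_index(adj):
    """Sum of shortest-path lengths over all unordered vertex pairs of an
    unweighted molecular graph given as an adjacency dict."""
    total = 0
    for source in adj:
        dist = {source: 0}
        queue = deque([source])
        while queue:                      # breadth-first search from source
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total // 2                     # each pair was counted twice

# n-Butane's carbon skeleton is the path graph C1-C2-C3-C4.
butane = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
```

    Descriptors like this one are then regressed against measured logP values; branching lowers the Wiener index (isobutane scores 9 versus n-butane's 10), which is the kind of structural signal such correlations exploit.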

  16. Study of comparison between Ultra-high Frequency (UHF) method and ultrasonic method on PD detection for GIS

    NASA Astrophysics Data System (ADS)

    Li, Yanran; Chen, Duo; Li, Li; Zhang, Jiwei; Li, Guang; Liu, Hongxia

    2017-11-01

    GIS (gas-insulated switchgear) is important equipment in power systems. Partial discharge (PD) detection plays an important role in assessing the insulation performance of GIS, and the UHF method and the ultrasonic method are frequently used for this purpose. However, few studies have compared the two methods. From the viewpoint of safety, it is necessary to investigate both the UHF method and the ultrasonic method for partial discharge in GIS. This paper presents a study aimed at clarifying the effectiveness of the UHF method and the ultrasonic method for partial discharge caused by free metal particles in GIS. Partial discharge tests were performed in a laboratory-simulated environment. The results show the anti-interference capability of signal detection and the accuracy of fault localization for the UHF method and the ultrasonic method. A new method combining the UHF and ultrasonic methods for PD detection in GIS is proposed in order to greatly enhance the anti-interference capability of signal detection and the accuracy of fault localization.

  17. Synthesizing Results from Replication Studies Using Robust Variance Estimation: Corrections When the Number of Studies Is Small

    ERIC Educational Resources Information Center

    Tipton, Elizabeth

    2014-01-01

    Replication studies allow for making comparisons and generalizations regarding the effectiveness of an intervention across different populations, versions of a treatment, settings and contexts, and outcomes. One method for making these comparisons across many replication studies is through the use of meta-analysis. A recent innovation in…

  18. The "Public Opinion Survey of Human Attributes-Stuttering" (POSHA-S): Summary Framework and Empirical Comparisons

    ERIC Educational Resources Information Center

    St. Louis, Kenneth O.

    2011-01-01

    Purpose: The "Public Opinion Survey of Human Attributes-Stuttering" ("POSHA-S") was developed to make available worldwide a standard measure of public attitudes toward stuttering that is practical, reliable, valid, and translatable. Mean data from past field studies as comparisons for interpretation of "POSHA-S" results are reported. Method: Means…

  19. Valx: A system for extracting and structuring numeric lab test comparison statements from text

    PubMed Central

    Hao, Tianyong; Liu, Hongfang; Weng, Chunhua

    2017-01-01

    Objectives To develop an automated method for extracting and structuring numeric lab test comparison statements from text, and to evaluate the method using clinical trial eligibility criteria text. Methods Leveraging semantic knowledge from the Unified Medical Language System (UMLS) and domain knowledge acquired from the Internet, Valx takes 7 steps to extract and normalize numeric lab test expressions: 1) text preprocessing, 2) numeric, unit, and comparison operator extraction, 3) variable identification using hybrid knowledge, 4) variable-numeric association, 5) context-based association filtering, 6) measurement unit normalization, and 7) heuristic rule-based comparison statement verification. Our reference standard was the consensus-based annotation among three raters of all comparison statements for two variables, i.e., HbA1c and glucose, identified from all Type 1 and Type 2 diabetes trials in ClinicalTrials.gov. Results The precision, recall, and F-measure for structuring HbA1c comparison statements were 99.6%, 98.1%, and 98.8% for Type 1 diabetes trials, and 98.8%, 96.9%, and 97.8% for Type 2 diabetes trials, respectively. The precision, recall, and F-measure for structuring glucose comparison statements were 97.3%, 94.8%, and 96.1% for Type 1 diabetes trials, and 92.3%, 92.3%, and 92.3% for Type 2 diabetes trials, respectively. Conclusions Valx is effective at extracting and structuring free-text lab test comparison statements in clinical trial summaries. Future studies are warranted to test its generalizability beyond eligibility criteria text. The open-source Valx enables its further evaluation and continued improvement by the collaborative scientific community. PMID:26940748
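    Steps 2-4 of the pipeline above (extracting the numeric value, unit, and comparison operator, then tying them to a variable) can be approximated for the two study variables with a single regular expression. This is a toy sketch, far simpler than Valx's knowledge-based association and context filtering.

```python
import re

PATTERN = re.compile(
    r"(?P<var>HbA1c|glucose)\s*"
    r"(?P<op><=|>=|<|>|=)\s*"
    r"(?P<num>\d+(?:\.\d+)?)\s*"
    r"(?P<unit>%|mg/dl|mmol/l)?",
    re.IGNORECASE)

def extract_comparisons(text):
    """Return (variable, operator, value, unit) tuples found in text."""
    return [(m["var"], m["op"], float(m["num"]), (m["unit"] or "").lower())
            for m in PATTERN.finditer(text)]
```

    For example, an eligibility criterion like "HbA1c <= 8.5% and fasting glucose < 240 mg/dL" yields two structured tuples; real criteria text additionally needs unit normalization and context filtering, which is where the knowledge-based steps come in.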

  20. Comparison of Evolutionary (Genetic) Algorithm and Adjoint Methods for Multi-Objective Viscous Airfoil Optimizations

    NASA Technical Reports Server (NTRS)

    Pulliam, T. H.; Nemec, M.; Holst, T.; Zingg, D. W.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A comparison between an Evolutionary Algorithm (EA) and an Adjoint-Gradient (AG) method applied to a two-dimensional Navier-Stokes code for airfoil design is presented. Both approaches use a common function evaluation code, the steady-state explicit part of the code ARC2D. The parameterization of the design space is a common B-spline approach for an airfoil surface, which, together with a common gridding approach, restricts the AG and EA to the same design space. Results are presented for a class of viscous transonic airfoils in which the optimization tradeoff between drag minimization as one objective and lift maximization as another produces the multi-objective design space. Comparisons are made for efficiency, accuracy, and design consistency.
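    To give a flavor of the evolutionary side of such a comparison, here is the simplest possible evolutionary strategy, a (1+1)-ES, minimizing a toy quadratic stand-in objective. This is a hypothetical illustration only, not the EA used in the study (which evaluated candidates with the Navier-Stokes solver ARC2D).

```python
import random

def one_plus_one_es(objective, x0, sigma=0.3, iterations=300, seed=42):
    """(1+1) evolution strategy: mutate with Gaussian noise and keep the
    child only if it improves the objective."""
    rng = random.Random(seed)
    best_x, best_f = list(x0), objective(x0)
    for _ in range(iterations):
        child = [xi + rng.gauss(0.0, sigma) for xi in best_x]
        f = objective(child)
        if f < best_f:                      # greedy selection
            best_x, best_f = child, f
    return best_x, best_f

sphere = lambda x: sum(xi * xi for xi in x)  # toy stand-in for a drag objective
x, f = one_plus_one_es(sphere, [2.0, -2.0])
```

    Unlike the adjoint-gradient route, the ES needs no derivatives of the flow solution, which is the basic trade-off the paper's comparison quantifies: robustness and global search versus per-evaluation cost.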

  1. Comparison of Manual Refraction Versus Autorefraction in 60 Diabetic Retinopathy Patients.

    PubMed

    Shirzadi, Keyvan; Shahraki, Kourosh; Yahaghi, Emad; Makateb, Ali; Khosravifard, Keivan

    2016-07-27

    The purpose of the study was to compare manual refraction with autorefraction in diabetic retinopathy patients. The study was conducted at the Be'sat Army Hospital from 2013-2015. Differences between two common refractometry methods (manual refractometry and autorefractometry) in the diagnosis and follow-up of retinopathy in patients with diabetes were investigated. Our results showed a significant difference between manual and autorefractometry in patients' visual acuity scores. In contrast, the spherical equivalent scores from the two methods did not show a statistically significant difference. Thus, although manual refraction is comparable with autorefraction for evaluating spherical equivalent scores in diabetic patients with retinopathy, the visual acuity results from the two methods are not comparable.

  2. Analysis and comparison of methods for the preparation of domestic hot water from district heating system, selected renewable and non-renewable sources in low-energy buildings

    NASA Astrophysics Data System (ADS)

    Knapik, Maciej

    2018-02-01

    The article presents an economic analysis and comparison of selected methods (district heating, natural gas, and a heat pump with renewable energy sources) for the preparation of domestic hot water in a building with low energy demand. In buildings of this type, an increased share of the total energy demand goes to domestic hot water preparation. The proposed solutions allow the energy demand to be lowered further by using renewable energy sources. The article presents the results of numerical analyses and calculations performed mainly in MATLAB software, based on typical meteorological years. The results showed that the system with a heat pump and renewable energy sources is comparable with the district heating system.

  3. Incredible Years Program Tailored to Parents of Preschoolers with Autism: Pilot Results

    ERIC Educational Resources Information Center

    Dababnah, Sarah; Parish, Susan L.

    2016-01-01

    Objective: This article reports on the acceptability and results from an evaluation of an empirically supported practice, The Incredible Years, tailored to parents of children with autism spectrum disorder (ASD). Methods: Two groups of parents (N = 17) participated in a mixed methods test with no comparison group of the 15-week intervention. Data…

  4. Above-Water Reflectance for the Evaluation of Adjacency Effects in Earth Observation Data: Initial Results and Methods Comparison for Near-Coastal Waters in the Western Channel, UK

    NASA Astrophysics Data System (ADS)

    Martinez Vicente, V.; Simis, S. G. H.; Alegre, R.; Land, P. E.; Groom, S. B.

    2013-09-01

    Unsupervised hyperspectral remote-sensing reflectance data (<15 km from the shore) were collected from a moving research vessel. Two different processing methods were compared. The results were similar to concurrent Aqua-MODIS and Suomi-NPP-VIIRS satellite data.

  5. Algorithm, applications and evaluation for protein comparison by Ramanujan Fourier transform.

    PubMed

    Zhao, Jian; Wang, Jiasong; Hua, Wei; Ouyang, Pingkai

    2015-12-01

    The amino acid sequence of a protein determines its chemical properties, chain conformation and biological functions. Protein sequence comparison is of great importance for identifying similarities between protein structures and inferring their functions. Many properties of a protein correspond to low-frequency signals within the sequence; low-frequency modes in protein sequences are linked to the secondary structures, membrane protein types, and sub-cellular localizations of the proteins. In this paper, we present the Ramanujan Fourier transform (RFT) with a fast algorithm to analyze the low-frequency signals of protein sequences. The RFT method is applied to similarity analysis of protein sequences within the Resonant Recognition Model (RRM). The results show that the proposed fast RFT method for protein comparison is more efficient than the commonly used discrete Fourier transform (DFT). RFT can detect common frequencies as significant features of specific protein families, and the RFT spectrum heat-map of protein sequences demonstrates the information conservation in the sequence comparison. The proposed method offers a new tool for pattern recognition, feature extraction and structural analysis of protein sequences. Copyright © 2015 Elsevier Ltd. All rights reserved.
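    The building block of the RFT is the Ramanujan sum c_q(n), the sum of n-th powers of the primitive q-th roots of unity. A handy arithmetic identity, c_q(n) = μ(q/d)·φ(q)/φ(q/d) with d = gcd(n, q), avoids complex exponentials entirely. The sketch below checks that identity against the direct definition; the paper's fast algorithm and normalization conventions are not reproduced here.

```python
from math import gcd, cos, pi

def phi(n):
    """Euler's totient function (naive count of coprime residues)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def mobius(n):
    """Mobius function via trial factorization."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0              # squared prime factor: mu = 0
            result = -result
        p += 1
    return -result if n > 1 else result

def ramanujan_sum(q, n):
    """c_q(n) via the Mobius/totient identity (Hoelder's formula)."""
    d = gcd(n, q)
    m = q // d
    return mobius(m) * phi(q) // phi(m)

def ramanujan_sum_direct(q, n):
    """c_q(n) straight from the definition: sum over k coprime to q."""
    return round(sum(cos(2.0 * pi * k * n / q)
                     for k in range(1, q + 1) if gcd(k, q) == 1))
```

    A signal's RFT coefficient at "frequency" q is then a projection of the signal onto the integer sequence c_q(1), c_q(2), ..., which is why the transform is attractive for the short, low-frequency-dominated numerical sequences used in the RRM.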

  6. Comparison of bulk sediment and sediment elutriate toxicity testing methods

    EPA Science Inventory

    Elutriate bioassays are among numerous methods that exist for assessing the potential toxicity of sediments in aquatic systems. In this study, interlaboratory results were compared from 96-hour Ceriodaphnia dubia and Pimephales promelas static-renewal acute toxicity tests conduct...

  7. A symmetrical subtraction combined with interpolated values for eliminating scattering from fluorescence EEM data

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Liu, Xiaofei; Wang, Yutian

    2016-08-01

    Parallel factor analysis is a widely used method for extracting qualitative and quantitative information about the analyte of interest from a fluorescence excitation-emission matrix containing unknown components. Large-amplitude scattering influences the results of parallel factor analysis, and many methods for eliminating scattering have been proposed, each with its own advantages and disadvantages. The combination of symmetrical subtraction and interpolated values is discussed here; the combination refers both to combining results and to combining methods. Nine methods were used for comparison. The results show that the combination of results yields a better concentration prediction for all components.
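    The "interpolated values" half of the approach can be sketched directly: samples in the Rayleigh scatter region (emission wavelength near the excitation wavelength) of each emission spectrum are replaced by values interpolated from their unscattered neighbours. A minimal NumPy illustration with a hypothetical band half-width, not the authors' algorithm:

```python
import numpy as np

def interpolate_scatter(eem, ex_wl, em_wl, half_width=10.0):
    """Replace first-order Rayleigh scatter (emission near excitation) with
    values linearly interpolated along each emission spectrum.
    eem has shape (len(ex_wl), len(em_wl))."""
    out = eem.astype(float).copy()
    for i, ex in enumerate(ex_wl):
        band = np.abs(em_wl - ex) < half_width    # scatter-affected pixels
        if band.any() and not band.all():
            clean = ~band
            out[i, band] = np.interp(em_wl[band], em_wl[clean], out[i, clean])
    return out

# Smooth synthetic EEM corrupted by a bright scatter ridge at em == ex.
ex_wl = np.array([300.0, 320.0, 340.0])
em_wl = np.arange(250.0, 451.0, 5.0)
smooth = ex_wl[:, None] * 0.01 + em_wl[None, :] * 0.02   # smooth linear surface
corrupted = smooth.copy()
for i, ex in enumerate(ex_wl):
    corrupted[i, np.abs(em_wl - ex) < 10.0] += 1000.0     # fake scatter ridge
repaired = interpolate_scatter(corrupted, ex_wl, em_wl)
```

    On this synthetic surface the interpolation recovers the underlying values exactly; on real EEMs the repaired band is only an estimate, which is why combining it with symmetrical subtraction can improve the PARAFAC concentration predictions.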

  8. Construct and Compare Gene Coexpression Networks with DAPfinder and DAPview.

    PubMed

    Skinner, Jeff; Kotliarov, Yuri; Varma, Sudhir; Mine, Karina L; Yambartsev, Anatoly; Simon, Richard; Huyen, Yentram; Morgun, Andrey

    2011-07-14

    DAPfinder and DAPview are novel BRB-ArrayTools plug-ins to construct gene coexpression networks and identify significant differences in pairwise gene-gene coexpression between two phenotypes. Each significant difference in gene-gene association represents a Differentially Associated Pair (DAP). Our tools include several choices of filtering methods, gene-gene association metrics, statistical testing methods, and multiple comparison adjustments. Network results are easily displayed in Cytoscape. Analyses of glioma experiments and microarray simulations demonstrate the utility of these tools. DAPfinder is a new user-friendly tool for the reconstruction and comparison of biological networks.

  9. Time comparison via OTS-2

    NASA Technical Reports Server (NTRS)

    Dejong, G.; Kaarls, R.; Kirchner, D.; Ressler, H.

    1982-01-01

    The time comparisons carried out via OTS-2 between the Technical University Graz (Austria) and the Van Swinden Laboratory Delft (Netherlands) are discussed. The method is based on the use of the synchronization pulse in the TV-frame of the daily evening broadcasting of a French TV-program to Northern Africa. Corrections, as a consequence of changes in the position of the satellite coordinates are applied weekly after reception of satellite coordinates. A description of the method is given as well as some of the particular techniques used in both the participating laboratories. Preliminary results are presented.

  10. A Comparison of Some Difference Schemes for a Parabolic Problem of Zero-Coupon Bond Pricing

    NASA Astrophysics Data System (ADS)

    Chernogorova, Tatiana; Vulkov, Lubin

    2009-11-01

    This paper describes a comparison of numerical methods for solving a convection-diffusion equation subject to dynamical boundary conditions, which arises in zero-coupon bond pricing. The one-dimensional convection-diffusion equation is solved using difference schemes with weights, including standard schemes such as the monotone Samarskii scheme and the FTCS and Crank-Nicolson methods. The schemes are free of spurious oscillations and satisfy the positivity and maximum principles, as required for a financial and diffusive solution. Numerical results are compared with analytical solutions.
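    As a simplified illustration of the Crank-Nicolson member of that family, here is the scheme applied to the pure-diffusion limit u_t = a·u_xx with homogeneous Dirichlet boundaries, checked against the exact decaying sine mode. The convection term and the dynamical boundary conditions of the paper's bond-pricing problem are omitted in this sketch.

```python
import numpy as np

def crank_nicolson_heat(a, length, n_cells, dt, steps, u0):
    """Crank-Nicolson time stepping for u_t = a u_xx, u = 0 at both ends."""
    dx = length / n_cells
    x = np.linspace(0.0, length, n_cells + 1)
    u = u0(x)
    r = a * dt / (2.0 * dx ** 2)
    n = n_cells - 1                          # number of interior unknowns
    main = np.ones(n)
    # Implicit (A) and explicit (B) halves of the trapezoidal time step.
    A = (np.diag((1 + 2 * r) * main)
         + np.diag(-r * main[1:], 1) + np.diag(-r * main[1:], -1))
    B = (np.diag((1 - 2 * r) * main)
         + np.diag(r * main[1:], 1) + np.diag(r * main[1:], -1))
    for _ in range(steps):
        u[1:-1] = np.linalg.solve(A, B @ u[1:-1])
    return x, u
```

    The scheme is second-order in both time and space and unconditionally stable, which is why it is a standard reference point in such comparisons; a production version would use a tridiagonal solver rather than dense `np.linalg.solve`.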

  11. Final report on COOMET.AUV.A-S1: Technical report on supplementary comparison 'Comparison of national standards of the sound pressure unit in air through calibration of working reference microphones'

    NASA Astrophysics Data System (ADS)

    Pozdeeva, Valentina; Chalyy, Vladimir

    2014-01-01

    The supplementary comparison COOMET.AUV.A-S1 for secondary calibration methods using WS1 and WS2 measurement microphones was carried out from 2009 to 2010. The results were submitted to and approved by CCAUV in April 2014. Four National Metrology Institutes took part in this comparison and are as follows: BelGIM (Belarus), VNIIFTRI (Russia), SMU (Slovakia) and DP NDI 'Sistema' (Ukraine). Three of the above NMIs (VNIIFTRI, SMU and DP NDI 'Sistema') had earlier participated in COOMET key comparisons and one NMI (VNIIFTRI) had also participated in CCAUV key comparisons. The Comparison Reference Values were calculated as the weighted mean values from results obtained by three institutes. The comparison results show agreement for all participants in the frequency range from 20 Hz to 12.5 kHz for WS1 microphones, and in the frequency range from 20 Hz to 16 kHz for WS2 microphones. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCAUV, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  12. Dichotomisation using a distributional approach when the outcome is skewed.

    PubMed

    Sauzet, Odile; Ofuya, Mercy; Peacock, Janet L

    2015-04-24

    Dichotomisation of continuous outcomes has rightly been criticised by statisticians because of the loss of information incurred, but dichotomised outcomes may be necessary to communicate a comparison of risks. Peacock et al. developed a distributional approach to the dichotomisation of normally distributed outcomes, allowing a comparison of proportions to be presented with a measure of precision that reflects the comparison of means. Many common health outcomes are skewed, however, so the distributional method for the dichotomisation of continuous outcomes may not apply. We present a methodology for obtaining dichotomised outcomes for skewed variables, illustrated with data from several observational studies. We also report the results of a simulation study that tests the robustness of the method to deviations from normality and assesses the validity of the newly developed method. The review showed that the pattern of dichotomisation varied between outcomes. Birthweight, blood pressure and BMI can either be transformed to normal, so that normal distributional estimates for a comparison of proportions can be obtained, or, better, handled with the skew-normal method. For gestational age, no satisfactory transformation is available and only the skew-normal method is reliable. The normal distributional method is also reliable when there are small deviations from normality. The distributional method, with its applicability to common skewed data, allows researchers to provide both continuous and dichotomised estimates without losing information or precision. This will have the effect of providing a practical understanding of the difference in means in terms of proportions.
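    For the normal case that the skew-normal method extends, the distributional estimate of the proportion below a clinical cutpoint is simply the normal CDF evaluated at the standardized cutpoint. A minimal sketch of that normal case (the skew-normal version, which additionally needs a shape parameter, is not shown):

```python
import math

def prop_below(mean, sd, cutpoint):
    """Distributional estimate of P(X < cutpoint) for X ~ N(mean, sd^2),
    using the error-function form of the normal CDF."""
    z = (cutpoint - mean) / sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# E.g. low birthweight (< 2500 g) from a sample mean of 3400 g, SD 500 g.
low_bw = prop_below(3400.0, 500.0, 2500.0)   # about 3.6%
```

    The point of the approach is that this proportion inherits its precision from the estimates of the mean and SD, so the comparison of proportions stays consistent with the comparison of means rather than discarding information at the cutpoint.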

  13. Clustering Scientific Publications Based on Citation Relations: A Systematic Comparison of Different Methods.

    PubMed

    Šubelj, Lovro; van Eck, Nees Jan; Waltman, Ludo

    2016-01-01

    Clustering methods are applied regularly in the bibliometric literature to identify research areas or scientific fields. These methods are for instance used to group publications into clusters based on their relations in a citation network. In the network science literature, many clustering methods, often referred to as graph partitioning or community detection techniques, have been developed. Focusing on the problem of clustering the publications in a citation network, we present a systematic comparison of the performance of a large number of these clustering methods. Using a number of different citation networks, some of them relatively small and others very large, we extensively study the statistical properties of the results provided by different methods. In addition, we also carry out an expert-based assessment of the results produced by different methods. The expert-based assessment focuses on publications in the field of scientometrics. Our findings seem to indicate that there is a trade-off between different properties that may be considered desirable for a good clustering of publications. Overall, map equation methods appear to perform best in our analysis, suggesting that these methods deserve more attention from the bibliometric community.
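
    As a self-contained illustration of how a candidate clustering of a citation network can be scored (a generic sketch, not the paper's map equation), here is Newman's modularity Q, the simplest of the quality functions that community detection methods optimise:

```python
import numpy as np

def modularity(adj, labels):
    """Newman modularity Q of a partition of an undirected graph,
    given its adjacency matrix and a community label per node."""
    adj = np.asarray(adj, dtype=float)
    m2 = adj.sum()                        # 2m: twice the number of edges
    k = adj.sum(axis=1)                   # node degrees
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    return ((adj - np.outer(k, k) / m2) * same).sum() / m2

# Two triangles joined by a single edge: nodes 0-2 and 3-5.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

q_good = modularity(A, [0, 0, 0, 1, 1, 1])   # natural split into the triangles
q_bad  = modularity(A, [0, 1, 0, 1, 0, 1])   # mixed split
```

    The map equation methods favoured by the paper replace Q with a description-length objective, but they are evaluated against candidate partitions in the same way.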

  14. Clustering Scientific Publications Based on Citation Relations: A Systematic Comparison of Different Methods

    PubMed Central

    Šubelj, Lovro; van Eck, Nees Jan; Waltman, Ludo

    2016-01-01

    Clustering methods are applied regularly in the bibliometric literature to identify research areas or scientific fields. These methods are for instance used to group publications into clusters based on their relations in a citation network. In the network science literature, many clustering methods, often referred to as graph partitioning or community detection techniques, have been developed. Focusing on the problem of clustering the publications in a citation network, we present a systematic comparison of the performance of a large number of these clustering methods. Using a number of different citation networks, some of them relatively small and others very large, we extensively study the statistical properties of the results provided by different methods. In addition, we also carry out an expert-based assessment of the results produced by different methods. The expert-based assessment focuses on publications in the field of scientometrics. Our findings seem to indicate that there is a trade-off between different properties that may be considered desirable for a good clustering of publications. Overall, map equation methods appear to perform best in our analysis, suggesting that these methods deserve more attention from the bibliometric community. PMID:27124610

  15. Radioanalytical Chemistry for Automated Nuclear Waste Process Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devol, Timothy A.

    2005-06-01

    Comparison of different pulse shape discrimination methods was performed under two different experimental conditions and the best method was identified. Beta/gamma discrimination of 90Sr/90Y and 137Cs was performed using a phoswich detector made of BC400 (2.5 cm OD x 1.2 cm) and BGO (2.5 cm OD x 2.5 cm) scintillators. Alpha/gamma discrimination of 210Po and 137Cs was performed using a CsI:Tl (2.8 x 1.4 x 1.4 cm3) scintillation crystal. The pulse waveforms were digitized with a DGF-4C (X-Ray Instrumentation Associates) and analyzed offline with IGOR Pro software (Wavemetrics, Inc.). The four pulse shape discrimination methods that were compared are rise time discrimination, digital constant fraction discrimination, charge ratio (charge comparison), and constant time discrimination (CTD). The CTD parameter is the ratio of the pulse height at a particular time after the beginning of the pulse to the maximum pulse height. The charge comparison method resulted in a Figure of Merit (FoM) of 3.3 (9.9 % spillover) and 3.7 (0.033 % spillover) for the phoswich and the CsI:Tl scintillator setups, respectively. The CTD method resulted in a FoM of 3.9 (9.2 % spillover) and 3.2 (0.25 % spillover), respectively. Inverting the pulse shape data typically resulted in a significantly higher FoM than conventional methods, but there was no reduction in % spillover values. This outcome illustrates that the FoM may not be a good metric for quantifying a system's ability to perform pulse shape discrimination. Comparison of several pulse shape discrimination (PSD) methods was performed as a means to compare traditional analog and digital PSD methods on the same scintillation pulses. The X-Ray Instrumentation Associates DGF-4C (40 Msps, 14-bit) was used to digitize waveforms from a CsI:Tl crystal and a BC400/BGO phoswich detector.
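
    The charge-comparison parameter and the figure of merit can be sketched as follows. The pulses are synthetic two-exponential waveforms differing only in tail fraction, and the FoM assumed here is the usual peak separation over the summed FWHMs (Gaussian approximation); neither is taken from the report:

```python
import numpy as np

def charge_ratio(pulse, split):
    """Charge-comparison PSD parameter: tail charge over total charge.
    `split` is the sample index where the tail window begins."""
    return pulse[split:].sum() / pulse.sum()

def figure_of_merit(p, q):
    """FoM = peak separation / sum of FWHMs of the two PSD distributions
    (Gaussian approximation: FWHM ~ 2.355 sigma)."""
    return abs(p.mean() - q.mean()) / (2.355 * (p.std() + q.std()))

# Two synthetic pulse classes that differ only in slow-tail fraction.
t = np.arange(64)
fast = np.exp(-t / 4.0)                                   # prompt decay only
slow = 0.6 * np.exp(-t / 4.0) + 0.4 * np.exp(-t / 20.0)   # extra slow tail

rng = np.random.default_rng(0)
pop_fast = np.array([charge_ratio(fast + rng.normal(0, 0.01, 64), 10)
                     for _ in range(200)])
pop_slow = np.array([charge_ratio(slow + rng.normal(0, 0.01, 64), 10)
                     for _ in range(200)])

fom = figure_of_merit(pop_fast, pop_slow)
```

    The report's caveat shows up here too: the FoM rewards narrow, well-separated peaks, but the spillover (misclassified fraction at a chosen cut) is what matters for counting, and the two need not track each other.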

  16. Methodological integrative review of the work sampling technique used in nursing workload research.

    PubMed

    Blay, Nicole; Duffield, Christine M; Gallagher, Robyn; Roche, Michael

    2014-11-01

    To critically review the work sampling technique used in nursing workload research. Work sampling is a technique frequently used by researchers and managers to explore and measure nursing activities. However, the work sampling methods used are diverse, making comparisons of results between studies difficult. Methodological integrative review. Four electronic databases were systematically searched for peer-reviewed articles published between 2002 and 2012. Manual scanning of reference lists and Rich Site Summary feeds from contemporary nursing journals were other sources of data. Articles published in the English language between 2002 and 2012 reporting on research which used work sampling to examine nursing workload. Eighteen articles were reviewed. The review identified that the work sampling technique lacks a standardized approach, which may have an impact on the sharing or comparison of results. Specific areas needing a shared understanding included the training of observers and subjects who self-report, standardization of the techniques used to assess observer inter-rater reliability, sampling methods and reporting of outcomes. Work sampling is a technique that can be used to explore the many facets of nursing work. Standardized reporting measures would enable greater comparison between studies and contribute to knowledge more effectively. The authors' suggestions for the reporting of results may act as guidelines for researchers considering work sampling as a research method. © 2014 John Wiley & Sons Ltd.

  17. A Comparison of Three Methods for the Analysis of Skin Flap Viability: Reliability and Validity.

    PubMed

    Tim, Carla Roberta; Martignago, Cintia Cristina Santi; da Silva, Viviane Ribeiro; Dos Santos, Estefany Camila Bonfim; Vieira, Fabiana Nascimento; Parizotto, Nivaldo Antonio; Liebano, Richard Eloin

    2018-05-01

    Objective: Technological advances have provided new alternatives to the analysis of skin flap viability in animal models; however, the interrater validity and reliability of these techniques have yet to be analyzed. The present study aimed to evaluate the interrater validity and reliability of three different methods: weight of paper template (WPT), paper template area (PTA), and photographic analysis. Approach: Sixteen male Wistar rats had their cranially based dorsal skin flap elevated. On the seventh postoperative day, the viable tissue area and the necrotic area of the skin flap were recorded using the paper template method and photo image. The evaluation of the percentage of viable tissue was performed using the three methods, simultaneously and independently, by two raters. The analysis of interrater reliability and validity was performed using the intraclass correlation coefficient, and Bland-Altman plot analysis was used to visualize the presence or absence of systematic bias in the evaluations of data validity. Results: The results showed that interrater reliability for WPT, measurement of PTA, and photographic analysis was 0.995, 0.990, and 0.982, respectively. For data validity, a correlation >0.90 was observed for all comparisons made between the three methods. In addition, Bland-Altman plot analysis showed agreement between the comparisons of the methods and no systematic bias was observed. Innovation: Digital methods are an excellent choice for assessing skin flap viability; moreover, they make data use and storage easier. Conclusion: Independently of the method used, the interrater reliability and validity proved to be excellent for the analysis of skin flap viability.
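
    The Bland-Altman analysis used here amounts to the mean inter-rater difference (bias) and its 95% limits of agreement. A minimal sketch with made-up percent-viable-tissue readings:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement between two raters/methods:
    mean bias and 95% limits of agreement (bias +/- 1.96 SD of differences)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

# Illustrative percent-viable-tissue readings from two raters (made-up data).
rater1 = [62.1, 55.4, 70.2, 48.9, 66.5, 59.8]
rater2 = [61.5, 56.0, 69.4, 49.7, 65.9, 60.1]

bias, lower, upper = bland_altman(rater1, rater2)
```

    Systematic bias would show up as a bias far from zero relative to the limits of agreement; the plot version additionally reveals whether the difference depends on the magnitude of the measurement.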

  18. Comparison of epifluorescent viable bacterial count methods

    NASA Technical Reports Server (NTRS)

    Rodgers, E. B.; Huff, T. L.

    1992-01-01

    Two methods, the 2-(4-iodophenyl)-3-(4-nitrophenyl)-5-phenyltetrazolium chloride (INT) method and the direct viable count (DVC) method, were tested and compared for their efficiency in determining the viability of bacterial populations. Use of the INT method results in the formation of a dark spot within each respiring cell. The DVC method results in elongation or swelling of growing cells that are rendered incapable of cell division. Although both methods are subjective and can produce false positives, the DVC method is best suited to analysis of waters in which the number of different types of organisms present in the same sample is assumed to be small, such as processed waters. The advantages and disadvantages of each method are discussed.

  19. Time Series Imputation via L1 Norm-Based Singular Spectrum Analysis

    NASA Astrophysics Data System (ADS)

    Kalantari, Mahdi; Yarmohammadi, Masoud; Hassani, Hossein; Silva, Emmanuel Sirimal

    Missing values in time series data are a well-known and important problem which many researchers have studied extensively in various fields. In this paper, a new nonparametric approach for missing value imputation in time series is proposed. The main novelty of this research is applying the L1 norm-based version of Singular Spectrum Analysis (SSA), namely L1-SSA, which is robust against outliers. The performance of the new imputation method has been compared with many other established methods. The comparison is done by applying them to various real and simulated time series. The obtained results confirm that the SSA-based methods, especially L1-SSA, can provide better imputation than other methods.
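
    For orientation, classical (L2) SSA, the method that L1-SSA makes outlier-robust, can be sketched in a few lines: embed the series into a trajectory matrix, truncate its SVD, and diagonally average back. The L1 variant replaces the SVD truncation with a robust L1 low-rank fit; this sketch shows only the standard version, applied here to smoothing rather than imputation:

```python
import numpy as np

def ssa_reconstruct(series, window, rank):
    """Classical (L2) SSA: embed, truncate the SVD, diagonally average."""
    x = np.asarray(series, float)
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix: lagged windows as columns.
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]      # rank-r approximation
    # Diagonal (Hankel) averaging back to a length-n series.
    out = np.zeros(n)
    cnt = np.zeros(n)
    for j in range(k):
        out[j:j + window] += Xr[:, j]
        cnt[j:j + window] += 1
    return out / cnt

t = np.arange(200)
clean = np.sin(2 * np.pi * t / 25)                 # a sinusoid has rank-2 structure
noisy = clean + np.random.default_rng(1).normal(0, 0.3, 200)
smooth = ssa_reconstruct(noisy, window=40, rank=2)
```

    SSA-based imputation iterates this reconstruction with the missing entries repeatedly refilled from the low-rank estimate until convergence.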

  20. A Comparison of Signal Enhancement Methods for Extracting Tonal Acoustic Signals

    NASA Technical Reports Server (NTRS)

    Jones, Michael G.

    1998-01-01

    The measurement of pure tone acoustic pressure signals in the presence of masking noise, often generated by mean flow, is a continual problem in the field of passive liner duct acoustics research. In support of the Advanced Subsonic Technology Noise Reduction Program, methods were investigated for conducting measurements of advanced duct liner concepts in harsh, aeroacoustic environments. This report presents the results of a comparison study of three signal extraction methods for acquiring quality acoustic pressure measurements in the presence of broadband noise (used to simulate the effects of mean flow). The performance of each method was compared to a baseline measurement of a pure tone acoustic pressure 3 dB above a uniform, broadband noise background.
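
    One generic way to extract a known tone from broadband masking noise (a sketch of the idea, not necessarily one of the three methods compared in the report) is synchronous detection: project the record onto quadrature references at the tone frequency, which is equivalent to evaluating a single DFT bin and rejects noise in proportion to record length:

```python
import numpy as np

def tone_amplitude(signal, fs, f0):
    """Estimate the amplitude of a known pure tone buried in broadband
    noise by projecting onto in-phase and quadrature references
    (equivalent to evaluating one DFT bin)."""
    n = len(signal)
    t = np.arange(n) / fs
    i = 2.0 / n * np.sum(signal * np.cos(2 * np.pi * f0 * t))
    q = 2.0 / n * np.sum(signal * np.sin(2 * np.pi * f0 * t))
    return np.hypot(i, q)

fs, f0, n = 8000.0, 1000.0, 8000      # a whole number of cycles in the record
t = np.arange(n) / fs
rng = np.random.default_rng(2)
# 0.5-amplitude tone with noise of comparable level (SNR near 0 dB per sample).
x = 0.5 * np.sin(2 * np.pi * f0 * t + 0.3) + rng.normal(0, 0.5, n)
amp = tone_amplitude(x, fs, f0)
```

    Choosing the record length to contain a whole number of tone cycles avoids leakage; the residual error then scales as the noise level divided by the square root of the record length.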

  1. Comparison of methods of evaluating hearing benefit of middle ear surgery.

    PubMed

    Toner, J G; Smyth, G D

    1993-01-01

    The objective of this paper is to compare two methods of predicting the level of subjective patient benefit following reconstructive middle ear surgery. This should always have been an important consideration in advising patients regarding surgery, but it assumes even more relevance in these days of clinical audit and cost-benefit analysis. The two methods studied were the '15/30 dB rule of thumb' (Smyth and Patterson, 1985) and the 'Glasgow plot' (Browning et al., 1991). The predictions of benefit from each of the two methods were compared to the patients' own post-operative assessments of benefit. The results of this comparison in 153 patients were analysed; the rule of thumb was found to be somewhat more sensitive in predicting patient benefit.

  2. Comparison between Two Linear Supervised Learning Machines' Methods with Principle Component Based Methods for the Spectrofluorimetric Determination of Agomelatine and Its Degradants.

    PubMed

    Elkhoudary, Mahmoud M; Naguib, Ibrahim A; Abdel Salam, Randa A; Hadad, Ghada M

    2017-05-01

    Four accurate, sensitive and reliable stability-indicating chemometric methods were developed for the quantitative determination of Agomelatine (AGM), whether in pure form or in pharmaceutical formulations. Two supervised learning machine methods, linear artificial neural networks preceded by principal component analysis (PC-linANN) and linear support vector regression (linSVR), were compared with two principal-component-based methods, principal component regression (PCR) and partial least squares (PLS), for the spectrofluorimetric determination of AGM and its degradants. The results showed the benefits of using linear learning machine methods and the inherent merits of their algorithms in handling overlapped noisy spectral data, especially during the challenging determination of AGM alkaline and acidic degradants (DG1 and DG2). Root mean squared errors of prediction (RMSEP) for the proposed models in the determination of AGM were 1.68, 1.72, 0.68 and 0.22 for PCR, PLS, linSVR and PC-linANN, respectively. The results showed the superiority of supervised learning machine methods over principal-component-based methods. Besides, the results suggested that linANN is the method of choice for determination of components in low amounts with similar overlapped spectra and a narrow linearity range. Comparison between the proposed chemometric models and a reported HPLC method revealed comparable performance and quantification power of the proposed models.
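
    The RMSEP criterion used to rank the calibration models is straightforward to compute on an independent prediction set; a minimal sketch with made-up concentrations and two hypothetical models:

```python
import numpy as np

def rmsep(y_true, y_pred):
    """Root mean squared error of prediction on an independent test set,
    the usual criterion for ranking calibration models."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_pred - y_true) ** 2))

# Illustrative predictions from two hypothetical calibration models
# against the same reference concentrations (made-up numbers).
ref = [2.0, 4.0, 6.0, 8.0, 10.0]
model_a = [2.3, 3.8, 6.4, 7.7, 10.2]
model_b = [2.1, 4.1, 5.9, 8.1, 9.9]

better = min(("model_a", rmsep(ref, model_a)),
             ("model_b", rmsep(ref, model_b)),
             key=lambda kv: kv[1])
```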

  3. A global optimization algorithm for protein surface alignment

    PubMed Central

    2010-01-01

    Background A relevant problem in drug design is the comparison and recognition of protein binding sites. Binding site recognition is generally based on geometry, often combined with physico-chemical properties of the site, since the conformation, size and chemical composition of the protein surface are all relevant for the interaction with a specific ligand. Several matching strategies have been designed for the recognition of protein-ligand binding sites and of protein-protein interfaces, but the problem cannot be considered solved. Results In this paper we propose a new method for local structural alignment of protein surfaces based on continuous global optimization techniques. Given the three-dimensional structures of two proteins, the method finds the isometric transformation (rotation plus translation) that best superimposes active regions of the two structures. We draw our inspiration from the well-known Iterative Closest Point (ICP) method for three-dimensional (3D) shape registration. Our main contribution is the adoption of a controlled random search as a more efficient global optimization approach, along with a new dissimilarity measure. The reported computational experience and comparisons show the viability of the proposed approach. Conclusions Our method performs well in detecting similarity in binding sites when it in fact exists. In the future we plan to do a more comprehensive evaluation of the method by considering large datasets of non-redundant proteins and applying a clustering technique to the results of all comparisons to classify binding sites. PMID:20920230
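
    The paper builds on ICP by adding a global search layer; bare-bones ICP itself alternates nearest-neighbour matching with a closed-form rigid fit (Kabsch/Procrustes via SVD). A minimal sketch on synthetic point clouds, not the paper's algorithm or dissimilarity measure:

```python
import numpy as np

def best_rigid(P, Q):
    """Least-squares rotation + translation mapping points P onto
    their correspondences Q (Kabsch algorithm via SVD)."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=20):
    """Bare-bones ICP: alternate nearest-neighbour matching and the
    closed-form rigid fit. This local loop can stall in poor minima,
    which is what a global optimisation layer is meant to escape."""
    P = P.copy()
    for _ in range(iters):
        d = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)   # brute-force NN
        R, t = best_rigid(P, Q[d.argmin(1)])
        P = P @ R.T + t
    return P

rng = np.random.default_rng(3)
Q = rng.normal(size=(60, 3))                     # target cloud
a = np.deg2rad(15.0)                             # rotate 15 degrees about z, shift
Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
P0 = Q @ Rz.T + np.array([0.3, -0.2, 0.1])       # misaligned source cloud
aligned = icp(P0, Q)
```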

  4. APMP Scale Comparison with Three Radiation Thermometers and Six Fixed-Point Blackbodies

    NASA Astrophysics Data System (ADS)

    Yamada, Y.; Shimizu, Y.; Ishii, J.

    2015-08-01

    New Asia Pacific Metrology Programme (APMP) comparisons of radiation thermometry standards, APMP TS-11 and TS-12, have recently been initiated. These new APMP comparisons cover the temperature range from to . Three radiation thermometers with central wavelengths of 1.6 µm, 0.9 µm, and 0.65 µm are the transfer devices for the radiation thermometer scale comparison, conducted in the so-called star configuration. In parallel, a compact fixed-point blackbody furnace that houses six types of fixed-point cells, of In, Sn, Zn, Al, Ag, and Cu, is circulated, again in a star-type comparison, to substantiate fixed-point calibration capabilities. Twelve APMP national metrology institutes are taking part in this endeavor, in which the National Metrology Institute of Japan acts as the pilot. In this article, the comparison scheme is described with emphasis on the features of the transfer devices, i.e., the radiation thermometers and the fixed-point blackbodies. Results of preliminary evaluations of the performance and characteristics of these instruments, as well as the evaluation method for the comparison results, are presented.

  5. Comparison of Satellite Surveying to Traditional Surveying Methods for the Resources Industry

    NASA Astrophysics Data System (ADS)

    Osborne, B. P.; Osborne, V. J.; Kruger, M. L.

    Modern ground-based survey methods involve detailed survey, which provides three-space co-ordinates for surveyed points to a high level of accuracy. The instruments are operated by surveyors, who process the raw results to create survey location maps for the subject of the survey. Such surveys are conducted for a location or region and referenced to the earth global co-ordinate system with global positioning system (GPS) positioning. Due to this referencing, the survey is only as accurate as the GPS reference system. Satellite survey remote sensing utilises satellite imagery that has been processed using commercial geographic information system software. Three-space co-ordinate maps are generated, with an accuracy determined by the datum position accuracy and optical resolution of the satellite platform. This paper presents a case study, which compares topographic surveying undertaken by traditional survey methods with satellite surveying for the same location. The purpose of this study is to assess the viability of satellite remote sensing for surveying in the resources industry. The case study involves a topographic survey of a dune field for a prospective mining project area in Pakistan. This site has been surveyed using modern surveying techniques and the results are compared to a satellite survey performed on the same area. Analysis of the results from the traditional survey and from the satellite survey involved a comparison of the derived spatial co-ordinates from each method. In addition, comparisons have been made of costs and turnaround time for both methods. The results of this application of remote sensing are of particular interest for survey in areas with remote and extreme environments, weather extremes, political unrest, or poor travel links, which are commonly associated with mining projects. Such areas frequently suffer language barriers, poor onsite technical support and limited resources.

  6. Development of an analytical microbial consortia method for enhancing performance monitoring at aerobic wastewater treatment plants.

    PubMed

    Razban, Behrooz; Nelson, Kristina Y; McMartin, Dena W; Cullimore, D Roy; Wall, Michelle; Wang, Dunling

    2012-01-01

    An analytical method to produce profiles of bacterial biomass fatty acid methyl esters (FAME) was developed employing rapid agitation followed by static incubation (RASI) using selective media of wastewater microbial communities. The results were compiled to produce a unique library for comparison and performance analysis at a wastewater treatment plant (WWTP). A total of 146 samples from the aerated WWTP, comprising 73 samples each of secondary and tertiary effluent, were analyzed. For comparison purposes, all samples were evaluated via a similarity index (SI), with secondary effluents producing an SI of 0.88 with 2.7% variation and tertiary samples producing an SI of 0.86 with 5.0% variation. The results also highlighted significant differences between the fatty acid profiles of the tertiary and secondary effluents, indicating considerable shifts in the bacterial community profile between these treatment phases. The WWTP performance results using this method were highly replicable and reproducible, indicating that the protocol has potential as a performance-monitoring tool for aerated WWTPs. The results quickly and accurately reflect shifts in dominant bacterial communities that result when process operations and performance change.

  7. Direct risk standardisation: a new method for comparing casemix adjusted event rates using complex models

    PubMed Central

    2013-01-01

    Background Comparison of outcomes between populations or centres may be confounded by any casemix differences and standardisation is carried out to avoid this. However, when the casemix adjustment models are large and complex, direct standardisation has been described as “practically impossible”, and indirect standardisation may lead to unfair comparisons. We propose a new method of directly standardising for risk rather than standardising for casemix which overcomes these problems. Methods Using a casemix model which is the same model as would be used in indirect standardisation, the risk in individuals is estimated. Risk categories are defined, and event rates in each category for each centre to be compared are calculated. A weighted sum of the risk category specific event rates is then calculated. We have illustrated this method using data on 6 million admissions to 146 hospitals in England in 2007/8 and an existing model with over 5000 casemix combinations, and a second dataset of 18,668 adult emergency admissions to 9 centres in the UK and overseas and a published model with over 20,000 casemix combinations and a continuous covariate. Results Substantial differences between conventional directly casemix standardised rates and rates from direct risk standardisation (DRS) were found. Results based on DRS were very similar to Standardised Mortality Ratios (SMRs) obtained from indirect standardisation, with similar standard errors. Conclusions Direct risk standardisation using our proposed method is as straightforward as using conventional direct or indirect standardisation, always enables fair comparisons of performance to be made, can use continuous casemix covariates, and was found in our examples to have similar standard errors to the SMR. It should be preferred when there is a risk that conventional direct or indirect standardisation will lead to unfair comparisons. PMID:24168424
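
    The procedure described, estimating a risk per individual, bucketing into risk categories, and taking a weighted sum of category-specific event rates, can be sketched as follows (synthetic data; in practice the model-predicted risks come from the casemix model and the weights from a reference population):

```python
import numpy as np

def direct_risk_standardised_rate(risk, event, weights_by_cat, bins):
    """Direct risk standardisation: bucket individuals by model-predicted
    risk, take each bucket's observed event rate, and combine the rates
    with a fixed set of reference weights so that centres are compared
    on the same risk mix."""
    cat = np.digitize(risk, bins)                 # category index per individual
    rates = np.array([event[cat == c].mean() for c in range(len(bins) + 1)])
    return np.sum(weights_by_cat * rates)

rng = np.random.default_rng(4)
n = 5000
risk = rng.uniform(0.0, 0.5, n)                       # model-predicted risk
event = (rng.uniform(0, 1, n) < risk).astype(float)   # outcomes consistent with risk

bins = np.array([0.1, 0.3])                # 3 risk categories: <0.1, 0.1-0.3, >=0.3
weights = np.array([0.5, 0.3, 0.2])        # reference population's category shares

drs = direct_risk_standardised_rate(risk, event, weights, bins)
```

    Because the same weights are applied to every centre, the resulting rates differ only through category-specific performance, not through casemix, regardless of how many casemix combinations went into the risk model.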

  8. Investigating Test Equating Methods in Small Samples through Various Factors

    ERIC Educational Resources Information Center

    Asiret, Semih; Sünbül, Seçil Ömür

    2016-01-01

    This study aimed to compare equating methods for the random groups design in small samples across factors such as sample size, difference in difficulty between forms, and guessing parameter. It also investigated which method gives better results under which conditions. In this study, 5,000 dichotomous simulated data…

  9. A Mixed-Methods Comparison of Classroom Context during Food, Health & Choices, a Childhood Obesity Prevention Intervention

    ERIC Educational Resources Information Center

    Burgermaster, Marissa; Koroly, Jenna; Contento, Isobel; Koch, Pamela; Gray, Heewon L.

    2017-01-01

    Background: Schools are frequent settings for childhood obesity prevention; however, intervention results are mixed. Classroom context may hold important clues to improving these interventions. Methods: We used mixed methods to examine classroom context during a curriculum intervention taught by trained instructors in fifth grade classrooms. We…

  10. Comparison of various methods to determine bulk specific gravity of cores : an investigation of high values using AASHTO T275 - paraffin-coated method.

    DOT National Transportation Integrated Search

    2012-07-01

    A MoDOT asphalt paving project reported unexpected results when adhering to the standard for determination of bulk specific gravity of compacted asphalt mixture (Gmb) specimens, AASHTO T 166. The test method requires speci...

  11. Evaluation of Lightning Induced Effects in a Graphite Composite Fairing Structure. Parts 1 and 2

    NASA Technical Reports Server (NTRS)

    Trout, Dawn H.; Stanley, James E.; Wahid, Parveen F.

    2011-01-01

    Defining the electromagnetic environment inside a graphite composite fairing due to lightning is of interest to spacecraft developers. This paper is the first in a two-part series and studies the shielding effectiveness of a graphite composite model fairing using derived equivalent properties. A frequency-domain Method of Moments (MoM) model is developed and comparisons are made with shielding test results obtained using a vehicle-like composite fairing. The comparison results show that the analytical models can adequately predict the test results. Both measured and model data indicate that graphite composite fairings provide significant attenuation of magnetic fields as frequency increases. Diffusion effects are also discussed. Part 2 examines time-domain effects through the development of loop-based induced-field testing, and a Transmission-Line-Matrix (TLM) model is developed in the time domain to study how the composite fairing affects lightning-induced magnetic fields. Comparisons are made with time-domain shielding test results obtained using a vehicle-like composite fairing. The comparison results show that the analytical models can adequately predict the test and industry results.

  12. Application of solid/liquid extraction for the gravimetric determination of lipids in royal jelly.

    PubMed

    Antinelli, Jean-François; Davico, Renée; Rognone, Catherine; Faucon, Jean-Paul; Lizzani-Cuvelier, Louisette

    2002-04-10

    Gravimetric lipid determination is a major parameter for characterizing and authenticating royal jelly quality. A solid/liquid extraction was compared to the reference method, which is based on liquid/liquid extraction. The amount of royal jelly and the extraction time were optimized against the reference method. The boiling/rinsing ratio and the spread of royal jelly onto the extraction thimble were identified as critical parameters; controlling them resulted in good accuracy and precision for the alternative method. Comparison of the reproducibility and repeatability of both methods, together with gas chromatographic analysis of the composition of the extracted lipids, showed no differences between the two methods. As the intra-laboratory validation tests were comparable to the reference method, while offering greater speed and a reduction in the amount of solvent used, it was concluded that the proposed method should be used with no modification of the quality criteria and norms established for royal jelly characterization.

  13. EURAMET.M.P-S9: comparison in the negative gauge pressure range -950 to 0 hPa

    NASA Astrophysics Data System (ADS)

    Saxholm, S.; Otal, P.; Altintas, A.; Bermanec, L. G.; Durgut, Y.; Hanrahan, R.; Kocas, I.; Lefkopoulos, A.; Pražák, D.; Sandu, I.; Šetina, J.; Spohr, I.; Steindl, D.; Tammik, K.; Testa, N.

    2016-01-01

    A comparison in the negative gauge pressure range was arranged in the period 2011 - 2012. A total of 14 laboratories participated in this comparison: BEV (Austria), CMI (Czech Republic), DANIAmet-FORCE (Denmark), EIM (Greece), HMI/FSB-LPM (Croatia), INM (Romania), IPQ (Portugal), LNE (France), MCCAA (Malta), METROSERT (Estonia), MIKES (Finland), MIRS/IMT/LMT (Slovenia), NSAI (Ireland) and UME (Turkey). The project was divided into two loops: Loop1, piloted by MIKES, and Loop2, piloted by LNE. The results of the two loops are reported separately: Loop1 results are presented in this paper. The transfer standard was Beamex MC5 no. 25516865 with internal pressure module INT1C, resolution 0.01 hPa. The nominal pressure range of the INT1C is -1000 hPa to +1000 hPa. The nominal pressure points for the comparison were 0 hPa, -200 hPa, -400 hPa, -600 hPa, -800 hPa and -950 hPa. The reference values and their uncertainties, as well as the difference uncertainty between the laboratory results and the reference values, were determined from the measurement data by Monte Carlo simulations. Stability uncertainty of the transfer standard was included in the final difference uncertainty. Degrees of equivalence and mutual equivalences between the laboratories were calculated. Each laboratory reported results for all twelve measurement points, which means that there were 168 reported values in total. Some 163 of the 168 values (97 %) agree with the reference values within the expanded uncertainties, with a coverage factor k = 2. Among the laboratories, four different methods were used to determine negative gauge pressure. It is concluded that special attention must be paid to the measurements and methods when measuring negative gauge pressures. There might be a need for a technical guide or a workshop that provides information about details and practices related to the measurement of negative gauge pressure, as well as differences between the different methods.
The comparison is registered as EURAMET project no. 1170 and as a supplementary comparison EURAMET.M.P-S9 in the BIPM key comparison database. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
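
    Agreement statements of the form "163 of the 168 values agree with the reference values within the expanded uncertainties" are conventionally checked with E_n numbers. A minimal sketch with made-up values (not the comparison's data; it also assumes the laboratory and reference uncertainties are independent, which does not hold exactly when a laboratory contributed to the reference value):

```python
import numpy as np

def en_number(lab, U_lab, ref, U_ref):
    """E_n = (lab - ref) / sqrt(U_lab^2 + U_ref^2) with expanded (k=2)
    uncertainties; |E_n| <= 1 is the usual agreement criterion."""
    return (lab - ref) / np.hypot(U_lab, U_ref)

# Illustrative laboratory results at one nominal pressure point (hPa, made up).
labs = np.array([-799.97, -800.05, -800.02, -799.90])
U = np.array([0.06, 0.05, 0.08, 0.04])      # expanded uncertainties, k=2
ref, U_ref = -800.00, 0.03                  # reference value and its uncertainty

en = en_number(labs, U, ref, U_ref)
agree = np.abs(en) <= 1.0
```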

  14. Comparison of RCS prediction techniques, computations and measurements

    NASA Astrophysics Data System (ADS)

    Brand, M. G. E.; Vanewijk, L. J.; Klinker, F.; Schippers, H.

    1992-07-01

    Three calculation methods to predict radar cross sections (RCS) of three dimensional objects are evaluated by computing the radar cross sections of a generic wing inlet configuration. The following methods are applied: a three dimensional high frequency method, a three dimensional boundary element method, and a two dimensional finite difference time domain method. The results of the computations are compared with the data of measurements.

  15. Faecal indicator bacteria enumeration in beach sand: A comparison study of extraction methods in medium to coarse sands

    USGS Publications Warehouse

    Boehm, A.B.; Griffith, J.; McGee, C.; Edge, T.A.; Solo-Gabriele, H. M.; Whitman, R.; Cao, Y.; Getrich, M.; Jay, J.A.; Ferguson, D.; Goodwin, K.D.; Lee, C.M.; Madison, M.; Weisberg, S.B.

    2009-01-01

    Aims: The absence of standardized methods for quantifying faecal indicator bacteria (FIB) in sand hinders comparison of results across studies. The purpose of the study was to compare methods for extraction of faecal bacteria from sands and recommend a standardized extraction technique. Methods and Results: Twenty-two methods of extracting enterococci and Escherichia coli from sand were evaluated, including multiple permutations of hand shaking, mechanical shaking, blending, sonication, number of rinses, settling time, eluant-to-sand ratio, eluant composition, prefiltration and type of decantation. Tests were performed on sands from California, Florida and Lake Michigan. Most extraction parameters did not significantly affect bacterial enumeration. ANOVA revealed significant effects of eluant composition and blending, with both sodium metaphosphate buffer and blending producing reduced counts. Conclusions: The simplest extraction method that produced the highest FIB recoveries consisted of 2 min of hand shaking in phosphate-buffered saline or deionized water, a 30-s settling time, a one-rinse step and a 10:1 eluant volume to sand weight ratio. This result was consistent across the sand compositions tested in this study but could vary for other sand types. Significance and Impact of the Study: Method standardization will improve the understanding of how sands affect surface water quality. © 2009 The Society for Applied Microbiology.

  16. An interlaboratory comparison of sediment elutriate preparation and toxicity test methods

    EPA Science Inventory

    Elutriate bioassays are among numerous methods that exist for assessing the potential toxicity of sediments in aquatic systems. In this study, interlaboratory results were compared from 96-hour Ceriodaphnia dubia and Pimephales promelas static-renewal acute toxicity tests conduct...

  17. An Experimental Comparison of Similarity Assessment Measures for 3D Models on Constrained Surface Deformation

    NASA Astrophysics Data System (ADS)

    Quan, Lulin; Yang, Zhixin

    2010-05-01

    To address issues in design customization, this paper presents the specification and application of constrained surface deformation and reports an experimental performance comparison of three prevalent similarity assessment algorithms in the constrained surface deformation domain. Constrained surface deformation is a promising method that supports various downstream applications of customized design. Similarity assessment is regarded as the key technology for inspecting the success of a new design: it measures the difference between the deformed new design and the initial sample model and indicates whether that difference is within the allowed limit. According to our theoretical analysis and pre-experiments, three similarity assessment algorithms are suitable for this domain: the shape-histogram-based method, the skeleton-based method, and the U-system-moment-based method. We analyze their basic functions and implementation methodologies in detail and run a series of experiments on various situations to test their accuracy and efficiency using precision-recall diagrams. A shoe model is chosen as the industrial example for the experiments. The shape-histogram-based method achieved the best performance in this comparison. Based on this result, we propose a novel approach that integrates surface constraints with the shape-histogram description using an adaptive weighting method, which emphasizes the role of the constraints during assessment. Limited initial experimental results demonstrate that our algorithm outperforms the other three. A direction for future development is drawn at the end of the paper.
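
    As a rough illustration of the shape-histogram idea discussed above (a sketch of the general technique, not the authors' implementation; the point sampling, bin count and L1 metric are all assumptions), a descriptor can be built from the distribution of point distances to the model's centroid, and two models compared by histogram distance:

```python
import numpy as np

def shape_histogram(points, bins=16):
    """Histogram of point distances from the centroid, normalized to sum to 1."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    hist, _ = np.histogram(d, bins=bins, range=(0.0, d.max() + 1e-9))
    return hist / hist.sum()

def histogram_distance(h1, h2):
    """L1 distance between two normalized shape histograms (0 = identical)."""
    return np.abs(h1 - h2).sum()
```

    Because each histogram is normalized against its own maximum distance, the descriptor is invariant to uniform scaling but sensitive to non-uniform deformations such as stretching along one axis.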

  18. A comparison of fitness-case sampling methods for genetic programming

    NASA Astrophysics Data System (ADS)

    Martínez, Yuliana; Naredo, Enrique; Trujillo, Leonardo; Legrand, Pierrick; López, Uriel

    2017-11-01

    Genetic programming (GP) is an evolutionary computation paradigm for automatic program induction. GP has produced impressive results but it still needs to overcome some practical limitations, particularly its high computational cost, overfitting and excessive code growth. Recently, many researchers have proposed fitness-case sampling methods to overcome some of these problems, with mixed results in several limited tests. This paper presents an extensive comparative study of four fitness-case sampling methods, namely: Interleaved Sampling, Random Interleaved Sampling, Lexicase Selection and Keep-Worst Interleaved Sampling. The algorithms are compared on 11 symbolic regression problems and 11 supervised classification problems, using 10 synthetic benchmarks and 12 real-world data-sets. They are evaluated based on test performance, overfitting and average program size, comparing them with a standard GP search. Comparisons are carried out using non-parametric multigroup tests and post hoc pairwise statistical tests. The experimental results suggest that fitness-case sampling methods are particularly useful for difficult real-world symbolic regression problems, improving performance, reducing overfitting and limiting code growth. On the other hand, it seems that fitness-case sampling cannot improve upon GP performance when considering supervised binary classification.
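
    Of the four sampling methods compared above, Lexicase Selection is perhaps the easiest to sketch. The following is a minimal, generic version of lexicase parent selection (an illustration of the standard algorithm, not the authors' experimental code):

```python
import random

def lexicase_select(population, errors, rng=random):
    """Select one parent by lexicase selection.

    errors[i][j] is individual i's error on fitness case j. Candidates are
    filtered case by case, in random order, keeping only those with the best
    error on each case, until a single candidate survives (or cases run out,
    in which case a random survivor is returned).
    """
    candidates = list(range(len(population)))
    cases = list(range(len(errors[0])))
    rng.shuffle(cases)
    for case in cases:
        best = min(errors[i][case] for i in candidates)
        candidates = [i for i in candidates if errors[i][case] == best]
        if len(candidates) == 1:
            break
    return population[rng.choice(candidates)]
```

    Unlike tournament selection, no aggregate fitness is ever computed: an individual can win by being elite on the particular cases that happen to come first in the shuffle.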

  19. Spectral analysis comparisons of Fourier-theory-based methods and minimum variance (Capon) methods

    NASA Astrophysics Data System (ADS)

    Garbanzo-Salas, Marcial; Hocking, Wayne. K.

    2015-09-01

    In recent years, adaptive (data-dependent) methods have been introduced into many areas where Fourier spectral analysis has traditionally been used. Although the data-dependent methods are often advanced as being superior to Fourier methods, they do require some finesse in choosing the order of the relevant filters. In performing comparisons, we have found some concerns about the mappings, particularly in cases involving many spectral lines or continuous spectral signals. Using numerical simulations, several comparisons between Fourier transform procedures and the minimum variance method (MVM) have been performed. For multiple-frequency signals, the MVM resolves most of the frequency content only for filters that have more degrees of freedom than the number of distinct spectral lines in the signal. In the case of Gaussian spectral approximation, the MVM always underestimates the width, and can misplace the spectral line in some circumstances. Large filters can be used to improve results with multiple-frequency signals, but are computationally inefficient. Significant biases can therefore occur when using the MVM to study spectral information or echo power from the atmosphere; artificial narrowing of turbulent layers is one such artifact.
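
    For readers unfamiliar with the minimum variance method, the Capon estimate evaluates P(f) = 1 / (e^H R^-1 e), where R is a sample covariance matrix whose dimension is the filter order and e is the steering vector at frequency f. A minimal numerical sketch (the snapshot covariance estimate and the small diagonal ridge are implementation assumptions):

```python
import numpy as np

def capon_spectrum(x, order, freqs):
    """Minimum variance (Capon) spectral estimate P(f) = 1 / (e^H R^-1 e)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Sample covariance from overlapping data snapshots of length `order`.
    X = np.array([x[i:i + order] for i in range(n - order + 1)])
    R = X.T @ X / X.shape[0]
    Rinv = np.linalg.inv(R + 1e-9 * np.eye(order))  # tiny ridge for stability
    k = np.arange(order)
    power = []
    for f in freqs:
        e = np.exp(2j * np.pi * f * k)  # steering vector at frequency f
        power.append(1.0 / np.real(np.conj(e) @ Rinv @ e))
    return np.array(power)
```

    The role of the filter order is visible directly in the code: `order` fixes the dimension of R, and hence the number of degrees of freedom available to resolve distinct spectral lines.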

  20. Comparison of PDF and Moment Closure Methods in the Modeling of Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Norris, Andrew T.; Hsu, Andrew T.

    1994-01-01

    In modeling turbulent reactive flows, Probability Density Function (PDF) methods have an advantage over the more traditional moment closure schemes in that the PDF formulation treats the chemical reaction source terms exactly, while moment closure methods are required to model the mean reaction rate. The common model used is the laminar chemistry approximation, where the effects of turbulence on the reaction are assumed negligible. For flows with low turbulence levels and fast chemistry, the difference between the two methods can be expected to be small. However for flows with finite rate chemistry and high turbulence levels, significant errors can be expected in the moment closure method. In this paper, the ability of the PDF method and the moment closure scheme to accurately model a turbulent reacting flow is tested. To accomplish this, both schemes were used to model a CO/H2/N2- air piloted diffusion flame near extinction. Identical thermochemistry, turbulence models, initial conditions and boundary conditions are employed to ensure a consistent comparison can be made. The results of the two methods are compared to experimental data as well as to each other. The comparison reveals that the PDF method provides good agreement with the experimental data, while the moment closure scheme incorrectly shows a broad, laminar-like flame structure.
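
    The closure problem described above can be illustrated with a small Monte Carlo experiment: for a nonlinear (Arrhenius) rate, the mean of the rate over temperature fluctuations differs from the rate evaluated at the mean temperature, which is exactly what the laminar chemistry approximation assumes away. The constants below are illustrative only, not taken from the flame studied in the paper:

```python
import numpy as np

# Arrhenius rate k(T) = A * exp(-Ta / T); A and Ta are illustrative values.
A, Ta = 1.0, 15000.0
def k(T):
    return A * np.exp(-Ta / T)

rng = np.random.default_rng(0)
T_mean, T_fluct = 1500.0, 300.0
# Sample turbulent temperature fluctuations (uniform, for simplicity).
T = rng.uniform(T_mean - T_fluct, T_mean + T_fluct, size=200_000)

mean_rate = k(T).mean()  # "exact" mean reaction rate (PDF-style average)
laminar = k(T_mean)      # laminar chemistry approximation: k evaluated at <T>
print(mean_rate / laminar)
```

    With these numbers the true mean rate exceeds the laminar-chemistry value by roughly half, because exp(-Ta/T) is convex over the sampled range; the gap grows with turbulence intensity, consistent with the paper's observation that moment closure errs most at high turbulence levels and finite-rate chemistry.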

  1. Comparison of Methods for Determining the Mechanical Properties of Semiconducting Polymer Films for Stretchable Electronics.

    PubMed

    Rodriquez, Daniel; Kim, Jae-Han; Root, Samuel E; Fei, Zhuping; Boufflet, Pierre; Heeney, Martin; Kim, Taek-Soo; Lipomi, Darren J

    2017-03-15

    This paper describes a comparison of two characterization techniques for determining the mechanical properties of thin-film organic semiconductors for applications in soft electronics. In the first method, the film is supported by water (film-on-water, FOW), and a stress-strain curve is obtained using a direct tensile test. In the second method, the film is supported by an elastomer (film-on-elastomer, FOE), and is subjected to three tests to reconstruct the key features of the stress-strain curve: the buckling test (tensile modulus), the onset of buckling (yield point), and the crack-onset strain (strain at fracture). The specimens used for the comparison are four poly(3-hexylthiophene) (P3HT) samples of increasing molecular weight (M n = 15, 40, 63, and 80 kDa). The methods produced qualitatively similar results for mechanical properties including the tensile modulus, the yield point, and the strain at fracture. The agreement was not quantitative because of differences in mode of loading (tension vs compression), strain rate, and processing between the two methods. Experimental results are corroborated by coarse-grained molecular dynamics simulations, which lead to the conclusion that in low molecular weight samples (M n = 15 kDa), fracture occurs by chain pullout. Conversely, in high molecular weight samples (M n > 25 kDa), entanglements concentrate the stress to few chains; this concentration is consistent with chain scission as the dominant mode of fracture. Our results provide a basis for comparing mechanical properties that have been measured by these two techniques, and provide mechanistic insight into fracture modes in this class of materials.
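
    The buckling test mentioned above extracts the film modulus from the measured buckling wavelength via the standard relation E_f_bar = 3 * E_s_bar * (lambda / (2*pi*d_f))**3, where E_bar = E / (1 - nu**2) is the plane-strain modulus, lambda the wavelength and d_f the film thickness. A small helper is sketched below; any numerical inputs would be measurement-specific and none here are taken from the paper:

```python
import math

def film_modulus_from_buckling(E_s, nu_s, nu_f, wavelength, thickness):
    """Film tensile modulus from the buckling wavelength (standard relation):
    E_f_bar = 3 * E_s_bar * (wavelength / (2*pi*thickness))**3,
    where E_bar = E / (1 - nu**2) is the plane-strain modulus."""
    E_s_bar = E_s / (1 - nu_s**2)
    factor = (wavelength / (2 * math.pi * thickness)) ** 3
    return 3 * E_s_bar * factor * (1 - nu_f**2)
```

    The cubic dependence on wavelength is worth noting: a small error in measuring lambda propagates as a threefold relative error in the inferred modulus.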

  2. CLT and AE methods of in-situ load testing : comparison and development of evaluation criteria : in-situ evaluation of post-tensioned parking garage, Kansas City, Missouri

    DOT National Transportation Integrated Search

    2008-02-01

    The objective of the proposed research project is to compare the results of two recently introduced nondestructive load test methods to the existing 24-hour load test method described in Chapter 20 of ACI 318-05. The two new methods of nondestructive...

  3. On the equivalence of generalized least-squares approaches to the evaluation of measurement comparisons

    NASA Astrophysics Data System (ADS)

    Koo, A.; Clare, J. F.

    2012-06-01

    Analysis of CIPM international comparisons is increasingly being carried out using a model-based approach that leads naturally to a generalized least-squares (GLS) solution. While this method offers the advantages of being easier to audit and having general applicability to any form of comparison protocol, there is a lack of consensus over aspects of its implementation. Two significant results are presented that show the equivalence of three differing approaches discussed by or applied in comparisons run by Consultative Committees of the CIPM. Both results depend on a mathematical condition equivalent to the requirement that any two artefacts in the comparison are linked through a sequence of measurements of overlapping pairs of artefacts. The first result is that a GLS estimator excluding all sources of error common to all measurements of a participant is equal to the GLS estimator incorporating all sources of error, including those associated with any bias in the standards or procedures of the measuring laboratory. The second result identifies the component of uncertainty in the estimate of bias that arises from possible systematic effects in the participants' measurement standards and procedures. The expression so obtained is a generalization of an expression previously published for a one-artefact comparison with no inter-participant correlations, to one for a comparison comprising any number of repeat measurements of multiple artefacts and allowing for inter-laboratory correlations.
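
    The GLS estimator underlying this kind of model-based comparison analysis is beta = (X^T V^-1 X)^-1 X^T V^-1 y, where X encodes which participant measured which artefact and V is the covariance of the measurements (including any inter-laboratory correlations). A minimal sketch follows; the design matrix is a toy example, not a CIPM comparison protocol:

```python
import numpy as np

def gls(X, y, V):
    """Generalized least squares: beta = (X^T V^-1 X)^-1 X^T V^-1 y."""
    Vinv = np.linalg.inv(V)
    A = X.T @ Vinv @ X
    return np.linalg.solve(A, X.T @ Vinv @ y)
```

    With V proportional to the identity this reduces to ordinary least squares; the off-diagonal entries of V are precisely where correlations between participants' measurements enter the analysis.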

  4. Performance Comparison of SDN Solutions for Switching Dedicated Long-Haul Connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S

    2016-01-01

    We consider scenarios with two sites connected over a dedicated, long-haul connection that must quickly fail over in response to degradations in host-to-host application performance. We present two methods for path fail-over using OpenFlow-enabled switches: (a) a light-weight method that uses host scripts to monitor application performance and the dpctl API for switching, and (b) a generic method that uses two OpenDaylight (ODL) controllers and REST interfaces. The restoration dynamics of the application contain significant statistical variations due to the controllers, northbound interfaces and switches; in addition, the variety of vendor implementations further complicates the choice between solutions. We present the impulse-response method to estimate the regressions of performance parameters, which enables a rigorous and objective comparison of different solutions. We describe testing results of the two methods, using TCP throughput and connection RTT as the main parameters, over a testbed consisting of HP and Cisco switches connected over long-haul connections emulated in hardware by ANUE devices. The combination of analytical and experimental results demonstrates that the dpctl method responds seconds faster than the ODL method on average, while both methods restore TCP throughput.

  5. Implementation and Validation of an Impedance Eduction Technique

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.; Jones, Michael G.; Gerhold, Carl H.

    2011-01-01

    Implementation of a pressure gradient method of impedance eduction in two NASA Langley flow ducts is described. The Grazing Flow Impedance Tube only supports plane-wave sources, while the Curved Duct Test Rig supports sources that contain higher-order modes. Multiple exercises are used to validate this new impedance eduction method. First, synthesized data for a hard wall insert and a conventional liner mounted in the Grazing Flow Impedance Tube are used as input to the two impedance eduction methods, the pressure gradient method and a previously validated wall pressure method. Comparisons between the two results are excellent. Next, data measured in the Grazing Flow Impedance Tube are used as input to both methods. Results from the two methods compare quite favorably for sufficiently low Mach numbers but this comparison degrades at Mach 0.5, especially when the hard wall insert is used. Finally, data measured with a hard wall insert mounted in the Curved Duct Test Rig are used as input to the pressure gradient method. Significant deviation from the known solution is observed, which is believed to be largely due to 3-D effects in this flow duct. Potential solutions to this issue are currently being explored.

  6. Final report on APMP.RF-S21.F

    NASA Astrophysics Data System (ADS)

    Ishii, Masanori; Kim, Jeong Hwan; Ji, Yu; Cho, Chi Hyun; Zhang, Tim

    2018-01-01

    The supplementary comparison report APMP.RF-S21.F describes the comparison of loop antennas conducted between April 2013 and January 2014. The two comparison artefacts were well-characterised active loop antennas of diameter 30 cm and 60 cm respectively, which typically operate in a frequency range from 9 kHz to 30 MHz. These antennas represent the main groups of antennas used around the world for EMC measurements below 30 MHz. There are several well-known methods for calibrating the antenna factor of these devices; the calibration systems used in this comparison employed the standard magnetic field method or the three-antenna method. Despite the limitations of the algorithm used to derive the reference value for each case (particularly for small samples), the calculated reference values appear reasonable. As a result, the agreement between the participants was very good in all cases. This text appears in Appendix B of the BIPM key comparison database (kcdb.bipm.org). The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  7. Comparison of transect sampling and object-oriented image classification methods of urbanizing catchments

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Tenenbaum, D. E.

    2009-12-01

    The process of urbanization has major effects on both human and natural systems. In order to monitor these changes and better understand how urban ecological systems work, urban spatial structure and its variation first need to be quantified at a fine scale. Because land use and land cover (LULC) in urbanizing areas are highly heterogeneous, classifying urbanizing environments is among the most challenging tasks in remote sensing. Although pixel-based classification is common, its results are not accurate enough for many research objectives that require fine-scale classification data. Transect sampling and object-oriented classification methods are more appropriate for urbanizing areas. Tenenbaum applied a transect sampling method, implemented within a widely available commercial GIS, in the Glyndon and Upper Baismans Run catchments, Baltimore, Maryland. It was a two-tiered classification system, with a primary level (7 classes) and a secondary level (37 categories), and statistical information on LULC was collected. W. Zhou applied an object-oriented method at the parcel level in the Gwynn’s Falls Watershed, which includes the two previously mentioned catchments, and extracted six classes. The two urbanizing catchments are located in greater Baltimore, Maryland and drain into Chesapeake Bay. In this research, the two methods are compared for six classes (woody, herbaceous, water, ground, pavement and structure). The comparison uses the segments from the transect method to extract LULC information from the results of the object-oriented method, and the classification results are compared to evaluate the differences between the two methods. The overall proportions of LULC classes from the two studies show that the object-oriented method overestimates structures. For the other five classes, the results from the two methods are similar, except for a difference in the proportions of the woody class. The segment-to-segment comparison shows that the resolution of the light detection and ranging (LIDAR) data used in the object-oriented method does affect classification accuracy. Shadows of trees and structures remain a significant problem for the object-oriented method. For classes that make up a small proportion of the catchments, such as water, neither method was capable of detecting them.

  8. Light-transmittance predictions under multiple-light-scattering conditions. I. Direct problem: hybrid-method approximation.

    PubMed

    Czerwiński, M; Mroczka, J; Girasole, T; Gouesbet, G; Gréhan, G

    2001-03-20

    Our aim is to present a method of predicting light transmittances through dense three-dimensional layered media. A hybrid method is introduced as a combination of the four-flux method with coefficients predicted from a Monte Carlo statistical model, to take into account the actual three-dimensional geometry of the problem under study. We present the principles of the hybrid method, illustrative results of numerical simulations, and their comparison with results obtained from the Bouguer-Lambert-Beer law and from Monte Carlo simulations.
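
    As a point of reference for the comparison mentioned above, the Bouguer-Lambert-Beer law for a purely absorbing slab, T = exp(-mu*L), can itself be reproduced by a trivial Monte Carlo free-path simulation. The sketch below covers only the absorbing case, not the multiple-scattering regime the paper addresses:

```python
import math
import random

def beer_lambert_transmittance(mu, L):
    """Collimated transmittance through a purely absorbing slab: T = exp(-mu*L)."""
    return math.exp(-mu * L)

def monte_carlo_transmittance(mu, L, n=200_000, seed=0):
    """Fraction of photons whose sampled free path -ln(U)/mu exceeds the slab.

    Path lengths are drawn from the exponential distribution with mean 1/mu,
    so the surviving fraction converges to exp(-mu*L).
    """
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if -math.log(1.0 - rng.random()) / mu > L)
    return hits / n
```

    Once scattering is added, each photon's trajectory becomes a random walk and no closed form exists in general, which is exactly why hybrid approximations such as the four-flux method with Monte Carlo-derived coefficients are attractive.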

  9. Comparison of Control Group Generating Methods.

    PubMed

    Szekér, Szabolcs; Fogarassy, György; Vathy-Fogarassy, Ágnes

    2017-01-01

    Retrospective studies suffer from drawbacks such as selection bias. As the selection of the control group has a significant impact on the evaluation of results, it is very important to find the proper method of generating the most appropriate control group. In this paper we suggest two nearest-neighbor-based control group selection methods that aim to achieve good matching between the individuals of the case and control groups. The effectiveness of the proposed methods is evaluated by runtime and accuracy tests, and the results are compared to the classical stratified sampling method.
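
    A minimal sketch of greedy nearest-neighbor control selection without replacement is given below. This is a generic illustration on scalar covariates (such as a propensity score), not a reproduction of the paper's two proposed methods:

```python
def greedy_nn_controls(cases, pool):
    """Greedy 1:1 nearest-neighbor control selection without replacement.

    `cases` and `pool` are lists of scalar covariate values (e.g. propensity
    scores); returns, for each case in order, the index of the closest
    still-unmatched control in `pool`.
    """
    available = set(range(len(pool)))
    chosen = []
    for c in cases:
        best = min(available, key=lambda j: abs(pool[j] - c))
        available.remove(best)
        chosen.append(best)
    return chosen
```

    Because matching is without replacement, the result depends on the order in which cases are processed, which is one reason alternative selection strategies (such as the stratified sampling baseline) are worth comparing.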

  10. Reflotron cholesterol measurement in general practice: accuracy and detection of errors.

    PubMed

    Ball, M J; Robertson, I K; Woods, M

    1994-11-01

    Comparison of cholesterol determinations by nurses using a Reflotron analyser in a general practice setting showed a good correlation with plasma cholesterol determinations by wet chemistry in a clinical biochemistry laboratory. A limited number of comparisons did, however, give a much lower result on the Reflotron. In an experimental situation, small sample volumes (which could result from poor technique) were shown to produce falsely low readings. A simple method which may immediately detect falsely low Reflotron readings is discussed.

  11. Systematic Assessment of Seven Solvent and Solid-Phase Extraction Methods for Metabolomics Analysis of Human Plasma by LC-MS

    NASA Astrophysics Data System (ADS)

    Sitnikov, Dmitri G.; Monnin, Cian S.; Vuckovic, Dajana

    2016-12-01

    The comparison of extraction methods for global metabolomics is usually executed in biofluids only and focuses on metabolite coverage and method repeatability. This limits our detailed understanding of extraction parameters such as recovery and matrix effects and prevents side-by-side comparison of different sample preparation strategies. To address this gap in knowledge, seven solvent-based and solid-phase extraction methods were systematically evaluated using standard analytes spiked into both buffer and human plasma. We compared recovery, coverage, repeatability, matrix effects, selectivity and orthogonality of all methods tested for the non-lipid metabolome, in combination with reversed-phase and mixed-mode liquid chromatography-mass spectrometry (LC-MS) analysis. Our results confirmed the wide selectivity and excellent precision of solvent precipitations, but revealed their high susceptibility to matrix effects. Using all seven methods showed high overlap and redundancy, yielding metabolite coverage increases of 34-80% (depending on the LC-MS method employed) over the best single extraction protocol (methanol/ethanol precipitation), despite a sevenfold increase in MS analysis time and sample consumption. The methods most orthogonal to methanol-based precipitation were ion-exchange solid-phase extraction and liquid-liquid extraction using methyl tert-butyl ether. Our results help facilitate the rational design and selection of sample preparation methods and internal standards for global metabolomics.

  12. Systematic Assessment of Seven Solvent and Solid-Phase Extraction Methods for Metabolomics Analysis of Human Plasma by LC-MS

    PubMed Central

    Sitnikov, Dmitri G.; Monnin, Cian S.; Vuckovic, Dajana

    2016-01-01

    The comparison of extraction methods for global metabolomics is usually executed in biofluids only and focuses on metabolite coverage and method repeatability. This limits our detailed understanding of extraction parameters such as recovery and matrix effects and prevents side-by-side comparison of different sample preparation strategies. To address this gap in knowledge, seven solvent-based and solid-phase extraction methods were systematically evaluated using standard analytes spiked into both buffer and human plasma. We compared recovery, coverage, repeatability, matrix effects, selectivity and orthogonality of all methods tested for the non-lipid metabolome, in combination with reversed-phase and mixed-mode liquid chromatography-mass spectrometry (LC-MS) analysis. Our results confirmed the wide selectivity and excellent precision of solvent precipitations, but revealed their high susceptibility to matrix effects. Using all seven methods showed high overlap and redundancy, yielding metabolite coverage increases of 34–80% (depending on the LC-MS method employed) over the best single extraction protocol (methanol/ethanol precipitation), despite a sevenfold increase in MS analysis time and sample consumption. The methods most orthogonal to methanol-based precipitation were ion-exchange solid-phase extraction and liquid-liquid extraction using methyl tert-butyl ether. Our results help facilitate the rational design and selection of sample preparation methods and internal standards for global metabolomics. PMID:28000704

  13. AUPress: A Comparison of an Open Access University Press with Traditional Presses

    ERIC Educational Resources Information Center

    McGreal, Rory; Chen, Nian-Shing

    2011-01-01

    This study is a comparison of AUPress with three other traditional (non-open access) Canadian university presses. The analysis is based on the rankings that are correlated with book sales on Amazon.com and Amazon.ca. Statistical methods include the sampling of the sales ranking of randomly selected books from each press. The results of one-way…

  14. A Method for the Comparison of Item Selection Rules in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Barrada, Juan Ramon; Olea, Julio; Ponsoda, Vicente; Abad, Francisco Jose

    2010-01-01

    In a typical study comparing the relative efficiency of two item selection rules in computerized adaptive testing, the common result is that they simultaneously differ in accuracy and security, making it difficult to reach a conclusion on which is the more appropriate rule. This study proposes a strategy to conduct a global comparison of two or…

  15. Comparison of heat and mass transfer of different microwave-assisted extraction methods of essential oil from Citrus limon (Lisbon variety) peel.

    PubMed

    Golmakani, Mohammad-Taghi; Moayyedi, Mahsa

    2015-11-01

    Dried and fresh peels of Citrus limon were subjected to microwave-assisted hydrodistillation (MAHD) and solvent-free microwave extraction (SFME), respectively. MAHD and SFME were compared with the conventional hydrodistillation (HD) method in terms of extraction kinetics, chemical composition, and antioxidant activity. The higher yields result from the higher extraction rates achieved by microwaves and may be due to a synergy of the two transfer phenomena, mass and heat, acting in the same direction. Gas chromatography/mass spectrometry (GC/MS) analysis did not indicate any noticeable differences between the constituents of the essential oils obtained by MAHD and SFME and those obtained by HD. Antioxidant analysis indicated that microwave irradiation did not adversely affect the radical scavenging activity of the extracted essential oils. The results of this study suggest that MAHD and SFME can be termed green technologies because of their lower energy requirements per ml of essential oil extracted.

  16. An Investigation into the Application of Generalized Differential Quadrature Method to Bending Analysis of Composite Sandwich Plates

    NASA Astrophysics Data System (ADS)

    Ghassemi, Aazam; Yazdani, Mostafa; Hedayati, Mohamad

    2017-12-01

    In this work, based on the First Order Shear Deformation Theory (FSDT), an attempt is made to explore the applicability and accuracy of the Generalized Differential Quadrature Method (GDQM) for the bending analysis of composite sandwich plates under static loading. Comparative studies of the bending behavior of composite sandwich plates are made between two types of boundary conditions for different cases. The effects of fiber orientation, the ratio of plate thickness to length, and the ratio of core thickness to face-sheet thickness on the transverse displacement and moment resultants are studied. As shown in this study, the role of the core thickness in the deformation of these plates can be reversed by the stiffness of the core relative to the face sheets. The resulting graphs are useful for the optimal design of sandwich plates. In comparison with existing solutions, the GDQ method achieves fast convergence rates and highly accurate results.

  17. Molecular structure, FT-IR, FT-Raman, NMR studies and first order molecular hyperpolarizabilities by the DFT method of mirtazapine and its comparison with mianserin

    NASA Astrophysics Data System (ADS)

    Sagdinc, Seda G.; Sahinturk, Ayse Erbay

    2013-03-01

    Mirtazapine, (±)-1,2,3,4,10,14b-hexahydro-2-methylpyrazino(2,1-a)pyrido(2,3-c)(2)benzazepine, is a compound with antidepressant therapeutic effects. It is the 6-aza derivative of the tetracyclic antidepressant mianserin, (±)-2-methyl-1,2,3,4,10,14b-hexahydrodibenzo[c,f]pyrazino[1,2-a]azepine. The FT-IR and FT-Raman spectra of mirtazapine have been recorded in the 4000-400 cm-1 and 3500-10 cm-1 regions, respectively. The optimized geometry, energies, nonlinear optical properties, vibrational frequencies, and 13C, 1H and 15N NMR chemical shift values of mirtazapine have been determined using the density functional theory (DFT/B3LYP) method. A comparison of the experimental and theoretical results indicates that the B3LYP density-functional method provides satisfactory results for predicting the vibrational and NMR properties of mirtazapine. The experimental and calculated results for mirtazapine have also been compared with those for mianserin.

  18. Analysis of Wien filter spectra from Hall thruster plumes.

    PubMed

    Huang, Wensheng; Shastry, Rohit

    2015-07-01

    A method for analyzing the Wien filter spectra obtained from the plumes of Hall thrusters is derived and presented. The new method extends upon prior work by deriving the integration equations for the current and species fractions. Wien filter spectra from the plume of the NASA-300M Hall thruster are analyzed with the presented method and the results are used to examine key trends. The new integration method is found to produce results slightly different from the traditional area-under-the-curve method. The use of different velocity distribution forms when performing curve-fits to the peaks in the spectra is compared. Additional comparison is made with the scenario where the current fractions are assumed to be proportional to the heights of peaks. The comparison suggests that the calculated current fractions are not sensitive to the choice of form as long as both the height and width of the peaks are accounted for. Conversely, forms that only account for the height of the peaks produce inaccurate results. Also presented are the equations for estimating the uncertainty associated with applying curve fits and charge-exchange corrections. These uncertainty equations can be used to plan the geometry of the experimental setup.
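
    The sensitivity described above, fractions computed from peak heights alone versus from both height and width, can be illustrated with Gaussian peak fits, for which the area is h * sigma * sqrt(2*pi). This is a generic sketch of the height-versus-area issue, not the paper's integration equations:

```python
import math

def fractions_from_areas(peaks):
    """Fractions from Gaussian fits: area_i = h_i * sigma_i * sqrt(2*pi).

    `peaks` is a list of (height, sigma) pairs, one per species peak.
    """
    areas = [h * s * math.sqrt(2 * math.pi) for h, s in peaks]
    total = sum(areas)
    return [a / total for a in areas]

def fractions_from_heights(peaks):
    """Naive fractions using peak heights only (ignores peak width)."""
    total = sum(h for h, _ in peaks)
    return [h / total for h, _ in peaks]
```

    Whenever the peaks have unequal widths the two answers differ, which mirrors the paper's finding that forms accounting for both height and width agree with each other while height-only forms do not.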

  19. Application of CFD to a generic hypersonic flight research study

    NASA Technical Reports Server (NTRS)

    Green, Michael J.; Lawrence, Scott L.; Dilley, Arthur D.; Hawkins, Richard W.; Walker, Mary M.; Oberkampf, William L.

    1993-01-01

    Computational analyses have been performed for the initial assessment of flight research vehicle concepts that satisfy requirements for potential hypersonic experiments. Results were obtained from independent analyses at NASA Ames, NASA Langley, and Sandia National Labs, using sophisticated time-dependent Navier-Stokes and parabolized Navier-Stokes methods. Careful study of a common problem consisting of hypersonic flow past a slightly blunted conical forebody was undertaken to estimate the level of uncertainty in the computed results, and to assess the capabilities of current computational methods for predicting boundary-layer transition onset. Results of this study in terms of surface pressure and heat transfer comparisons, as well as comparisons of boundary-layer edge quantities and flow-field profiles are presented here. Sensitivities to grid and gas model are discussed. Finally, representative results are presented relating to the use of Computational Fluid Dynamics in the vehicle design and the integration/support of potential experiments.

  20. 3D documentation of footwear impressions and tyre tracks in snow with high resolution optical surface scanning.

    PubMed

    Buck, Ursula; Albertini, Nicola; Naether, Silvio; Thali, Michael J

    2007-09-13

    The three-dimensional documentation of footwear and tyre impressions in snow offers an opportunity to capture finer detail for identification than photographs alone. Until now, various casting methods have been used for this purpose, and casting footwear impressions in snow has always been a difficult assignment. This work demonstrates that the non-destructive method of 3D optical surface scanning is suitable for the three-dimensional documentation of impressions in snow. The new method delivers more detailed results of higher accuracy than the conventional casting techniques. The results of this easy-to-use, mobile 3D optical surface scanner were very satisfactory under different meteorological and snow conditions. The method is also suitable for impressions in soil, sand or other materials. In addition to side-by-side comparison, automatic comparison of the 3D models and computation of deviations and accuracy simplify the examination and deliver objective and reliable results, which can be visualized efficiently. Data exchange between investigating authorities at a national or international level can be achieved easily with electronic data carriers.

  1. Characterization of toners and inkjets by laser ablation spectrochemical methods and Scanning Electron Microscopy-Energy Dispersive X-ray Spectroscopy

    NASA Astrophysics Data System (ADS)

    Trejos, Tatiana; Corzo, Ruthmara; Subedi, Kiran; Almirall, José

    2014-02-01

    Detection and sourcing of counterfeit currency, examination of counterfeit security documents and determination of authenticity of medical records are examples of common forensic document investigations. In these cases, the physical and chemical composition of the ink entries can provide important information for the assessment of the authenticity of the document or for making inferences about common source. Previous results reported by our group have demonstrated that elemental analysis, using either Laser Ablation-Inductively Coupled Plasma-Mass Spectrometry (LA-ICP-MS) or Laser Ablation Induced Breakdown Spectroscopy (LIBS), provides an effective, practical and robust technique for the discrimination of document substrates and writing inks with minimal damage to the document. In this study, laser-based methods and Scanning Electron Microscopy-Energy Dispersive X-Ray Spectroscopy (SEM-EDS) methods were developed, optimized and validated for the forensic analysis of more complex inks such as toners and inkjets, to determine if their elemental composition can differentiate documents printed from different sources and to associate documents that originated from the same printing source. Comparison of the performance of each of these methods is presented, including the analytical figures of merit, discrimination capability and error rates. Different calibration strategies resulting in semi-quantitative and qualitative analysis, comparison methods (match criteria) and data analysis and interpretation tools were also developed. A total of 27 black laser toners originating from different manufacturing sources and/or batches were examined to evaluate the discrimination capability of each method. The results suggest that SEM-EDS offers relatively poor discrimination capability for this set (~ 70.7% discrimination of all the possible comparison pairs or a 29.3% type II error rate). 
Nonetheless, SEM-EDS can still be used as a complementary method of analysis since it has the advantage of being non-destructive to the sample in addition to providing imaging capabilities to further characterize toner samples by their particle morphology. Laser sampling methods resulted in an improvement of the discrimination between different sources with LIBS producing 89% discrimination and LA-ICP-MS resulting in 100% discrimination. In addition, a set of 21 black inkjet samples was examined by each method. The results show that SEM-EDS is not appropriate for inkjet examinations since their elemental composition is typically below the detection capabilities with only sulfur detected in this set, providing only 47.4% discrimination between possible comparison pairs. Laser sampling methods were shown to provide discrimination greater than 94% for this same inkjet set with false exclusion and false inclusion rates lower than 4.1% and 5.7%, for LA-ICP-MS and LIBS respectively. Overall these results confirmed the utility of the examination of printed documents by laser-based micro-spectrochemical methods. SEM-EDS analysis of toners produced a limited utility for discrimination within sources but was not an effective tool for inkjet ink discrimination. Both LA-ICP-MS and LIBS can be used in forensic laboratories to chemically characterize inks on documents and to complement the information obtained by conventional methods and enhance their evidential value.

  2. Inertial Response of Wind Power Plants: A Comparison of Frequency-Based Inertial Control and Stepwise Inertial Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xiao; Gao, Wenzhong; Wang, Jianhui

    The frequency regulation capability of a wind power plant plays an important role in enhancing frequency reliability especially in an isolated power system with high wind power penetration levels. A comparison of two types of inertial control methods, namely frequency-based inertial control (FBIC) and stepwise inertial control (SIC), is presented in this paper. Comprehensive case studies are carried out to reveal features of the different inertial control methods, simulated in a modified Western System Coordination Council (WSCC) nine-bus power grid using real-time digital simulator (RTDS) platform. The simulation results provide an insight into the inertial control methods under various scenarios.

  3. Steady and unsteady aerodynamic forces from the SOUSSA surface-panel method for a fighter wing with tip missile and comparison with experiment and PANAIR

    NASA Technical Reports Server (NTRS)

    Cunningham, Herbert J.

    1987-01-01

    The body surface-panel method SOUSSA is applied to calculate steady and unsteady lift and pitching moment coefficients on a thin fighter-type wing model with and without a tip-mounted missile. Comparisons are presented with experimental results and with PANAIR and PANAIR-related calculations for Mach numbers from 0.6 to 0.9. In general the SOUSSA program, the experiments, and the PANAIR (and related) programs give lift and pitching-moment results which agree at least fairly well, except for the unsteady clean-wing experimental moment and the unsteady moment on the wing tip body calculated by a PANAIR-predecessor program at a Mach number of 0.8.

  4. Reflectivity of 1D photonic crystals: A comparison of computational schemes with experimental results

    NASA Astrophysics Data System (ADS)

    Pérez-Huerta, J. S.; Ariza-Flores, D.; Castro-García, R.; Mochán, W. L.; Ortiz, G. P.; Agarwal, V.

    2018-04-01

    We report the reflectivity of one-dimensional finite and semi-infinite photonic crystals, computed through the coupling to Bloch modes (BM) and through a transfer matrix method (TMM), and their comparison to the experimental spectral line shapes of porous silicon (PS) multilayer structures. Both methods reproduce a forbidden photonic bandgap (PBG), but slowly-converging oscillations are observed in the TMM as the number of layers increases to infinity, while a smooth converged behavior is presented with BM. The experimental reflectivity spectra is in good agreement with the TMM results for multilayer structures with a small number of periods. However, for structures with large amount of periods, the measured spectral line shapes exhibit better agreement with the smooth behavior predicted by BM.

  5. Blind shear-wave velocity comparison of ReMi and MASW results with boreholes to 200 m in Santa Clara Valley: Implications for earthquake ground-motion assessment

    USGS Publications Warehouse

    Stephenson, W.J.; Louie, J.N.; Pullammanappallil, S.; Williams, R.A.; Odum, J.K.

    2005-01-01

    Multichannel analysis of surface waves (MASW) and refraction microtremor (ReMi) are two of the most recently developed surface acquisition techniques for determining shallow shear-wave velocity. We conducted a blind comparison of MASW and ReMi results with four boreholes logged to at least 260 m for shear velocity in Santa Clara Valley, California, to determine how closely these surface methods match the downhole measurements. Average shear-wave velocity estimates to depths of 30, 50, and 100 m demonstrate that the surface methods as implemented in this study can generally match borehole results to within 15% to these depths. At two of the boreholes, the average to 100 m depth was within 3%. Spectral amplifications predicted from the respective borehole velocity profiles similarly compare to within 15 % or better from 1 to 10 Hz with both the MASW and ReMi surface-method velocity profiles. Overall, neither surface method was consistently better at matching the borehole velocity profiles or amplifications. Our results suggest MASW and ReMi surface acquisition methods can both be appropriate choices for estimating shearwave velocity and can be complementary to each other in urban settings for hazards assessment.

  6. Advanced Image Processing for Defect Visualization in Infrared Thermography

    NASA Technical Reports Server (NTRS)

    Plotnikov, Yuri A.; Winfree, William P.

    1997-01-01

    Results of a defect visualization process based on pulse infrared thermography are presented. Algorithms have been developed to reduce the amount of operator participation required in the process of interpreting thermographic images. The algorithms determine the defect's depth and size from the temporal and spatial thermal distributions that exist on the surface of the investigated object following thermal excitation. A comparison of the results from thermal contrast, time derivative, and phase analysis methods for defect visualization are presented. These comparisons are based on three dimensional simulations of a test case representing a plate with multiple delaminations. Comparisons are also based on experimental data obtained from a specimen with flat bottom holes and a composite panel with delaminations.

  7. Comparing physiographic maps with different categorisations

    NASA Astrophysics Data System (ADS)

    Zawadzka, J.; Mayr, T.; Bellamy, P.; Corstanje, R.

    2015-02-01

    This paper addresses the need for a robust map comparison method suitable for finding similarities between thematic maps with different forms of categorisations. In our case, the requirement was to establish the information content of newly derived physiographic maps with regards to set of reference maps for a study area in England and Wales. Physiographic maps were derived from the 90 m resolution SRTM DEM, using a suite of existing and new digital landform mapping methods with the overarching purpose of enhancing the physiographic unit component of the Soil and Terrain database (SOTER). Reference maps were seven soil and landscape datasets mapped at scales ranging from 1:200,000 to 1:5,000,000. A review of commonly used statistical methods for categorical comparisons was performed and of these, the Cramer's V statistic was identified as the most appropriate for comparison of maps with different legends. Interpretation of multiple Cramer's V values resulting from one-by-one comparisons of the physiographic and baseline maps was facilitated by multi-dimensional scaling and calculation of average distances between the maps. The method allowed for finding similarities and dissimilarities amongst physiographic maps and baseline maps and informed the recommendation of the most suitable methodology for terrain analysis in the context of soil mapping.

  8. Calibration and comparison of the acoustic location methods used during the spring migration of the bowhead whale, Balaena mysticetus, off Pt. Barrow, Alaska, 1984-1993.

    PubMed

    Clark, C W; Ellison, W T

    2000-06-01

    Between 1984 and 1993, visual and acoustic methods were combined to census the Bering-Chukchi-Beaufort bowhead whale, Balaena mysticetus, population. Passive acoustic location was based on arrival-time differences of transient bowhead sounds detected on sparse arrays of three to five hydrophones distributed over distances of 1.5-4.5 km along the ice edge. Arrival-time differences were calculated from either digital cross correlation of spectrograms (old method), or digital cross correlation of time waveforms (new method). Acoustic calibration was conducted in situ in 1985 at five sites with visual site position determined by triangulation using two theodolites. The discrepancy between visual and acoustic locations was <1%-5% of visual range and less than 0.7 degrees of visual bearing for either method. Comparison of calibration results indicates that the new method yielded slightly more precise and accurate positions than the old method. Comparison of 217 bowhead whale call locations from both acoustic methods showed that the new method was more precise, with location errors 3-4 times smaller than the old method. Overall, low-frequency bowhead transients were reliably located out to ranges of 3-4 times array size. At these ranges in shallow water, signal propagation appears to be dominated by the fundamental mode and is not corrupted by multipath.

  9. COMPARISON OF LARGE RIVER SAMPLING METHOD USING DIATOM METRICS

    EPA Science Inventory

    We compared the results of four methods used to assess the algal communities at 60 sites distributed among four rivers. Based on Principle Component Analysis of physical habitat data collected concomitantly with the algal data, sites were separated into those with a mean thalweg...

  10. COMPARISON OF LARGE RIVER SAMPLING METHODS ON ALGAL METRICS

    EPA Science Inventory

    We compared the results of four methods used to assess the algal communities at 60 sites distributed among four rivers. Based on Principle Component Analysis of physical habitat data collected concomitantly with the algal data, sites were separated into those with a mean thalweg...

  11. Energetics of protein-DNA interactions.

    PubMed

    Donald, Jason E; Chen, William W; Shakhnovich, Eugene I

    2007-01-01

    Protein-DNA interactions are vital for many processes in living cells, especially transcriptional regulation and DNA modification. To further our understanding of these important processes on the microscopic level, it is necessary that theoretical models describe the macromolecular interaction energetics accurately. While several methods have been proposed, there has not been a careful comparison of how well the different methods are able to predict biologically important quantities such as the correct DNA binding sequence, total binding free energy and free energy changes caused by DNA mutation. In addition to carrying out the comparison, we present two important theoretical models developed initially in protein folding that have not yet been tried on protein-DNA interactions. In the process, we find that the results of these knowledge-based potentials show a strong dependence on the interaction distance and the derivation method. Finally, we present a knowledge-based potential that gives comparable or superior results to the best of the other methods, including the molecular mechanics force field AMBER99.

  12. A comparison of plan-based and abstract MDP reward shaping

    NASA Astrophysics Data System (ADS)

    Efthymiadis, Kyriakos; Kudenko, Daniel

    2014-01-01

    Reward shaping has been shown to significantly improve an agent's performance in reinforcement learning. As attention is shifting away from tabula-rasa approaches many different reward shaping methods have been developed. In this paper, we compare two different methods for reward shaping; plan-based, in which an agent is provided with a plan and extra rewards are given according to the steps of the plan the agent satisfies, and reward shaping via abstract Markov decision process (MDPs), in which an abstract high-level MDP of the environment is solved and the resulting value function is used to shape the agent. The comparison is conducted in terms of total reward, convergence speed and scaling up to more complex environments. Empirical results demonstrate the need to correctly select and set up reward shaping methods according to the needs of the environment the agents are acting in. This leads to the more interesting question, is there a reward shaping method which is universally better than all other approaches regardless of the environment dynamics?

  13. Feature Selection in Order to Extract Multiple Sclerosis Lesions Automatically in 3D Brain Magnetic Resonance Images Using Combination of Support Vector Machine and Genetic Algorithm.

    PubMed

    Khotanlou, Hassan; Afrasiabi, Mahlagha

    2012-10-01

    This paper presents a new feature selection approach for automatically extracting multiple sclerosis (MS) lesions in three-dimensional (3D) magnetic resonance (MR) images. Presented method is applicable to different types of MS lesions. In this method, T1, T2, and fluid attenuated inversion recovery (FLAIR) images are firstly preprocessed. In the next phase, effective features to extract MS lesions are selected by using a genetic algorithm (GA). The fitness function of the GA is the Similarity Index (SI) of a support vector machine (SVM) classifier. The results obtained on different types of lesions have been evaluated by comparison with manual segmentations. This algorithm is evaluated on 15 real 3D MR images using several measures. As a result, the SI between MS regions determined by the proposed method and radiologists was 87% on average. Experiments and comparisons with other methods show the effectiveness and the efficiency of the proposed approach.

  14. Comparison of two stand-alone CADe systems at multiple operating points

    NASA Astrophysics Data System (ADS)

    Sahiner, Berkman; Chen, Weijie; Pezeshk, Aria; Petrick, Nicholas

    2015-03-01

    Computer-aided detection (CADe) systems are typically designed to work at a given operating point: The device displays a mark if and only if the level of suspiciousness of a region of interest is above a fixed threshold. To compare the standalone performances of two systems, one approach is to select the parameters of the systems to yield a target false-positive rate that defines the operating point, and to compare the sensitivities at that operating point. Increasingly, CADe developers offer multiple operating points, which necessitates the comparison of two CADe systems involving multiple comparisons. To control the Type I error, multiple-comparison correction is needed for keeping the family-wise error rate (FWER) less than a given alpha-level. The sensitivities of a single modality at different operating points are correlated. In addition, the sensitivities of the two modalities at the same or different operating points are also likely to be correlated. It has been shown in the literature that when test statistics are correlated, well-known methods for controlling the FWER are conservative. In this study, we compared the FWER and power of three methods, namely the Bonferroni, step-up, and adjusted step-up methods in comparing the sensitivities of two CADe systems at multiple operating points, where the adjusted step-up method uses the estimated correlations. Our results indicate that the adjusted step-up method has a substantial advantage over other the two methods both in terms of the FWER and power.

  15. A multi-modal solution for the transmission of sound in nonuniform ducts

    NASA Technical Reports Server (NTRS)

    Eversman, W.

    1976-01-01

    The method of weighted residuals in the form of a modified Galerkin method with boundary residuals is developed for the study of the transmission of sound in nonuniform ducts carrying a steady compressible flow. In this formulation the steady flow is modeled as essentially one-dimensional but with a kinematic modification to force tangency of the flow at the duct walls. Good agreement has been obtained with known results for transmission and reflection coefficients in hard walled ducts up to near sonic velocities. In ducts with acoustically compliant boundaries good comparisons have been more difficult to achieve except at low Mach numbers. The problem of transmission in a straight, acoustically treated duct with a uniform flow has been formulated and the Galerkin method used with basis functions derived from the case when flow is absent. Results indicate that favorable comparisons with exact computations can be obtained if care is taken in choosing the basis functions.

  16. Full non-linear treatment of the global thermospheric wind system. I - Mathematical method and analysis of forces. II - Results and comparison with observations

    NASA Technical Reports Server (NTRS)

    Blum, P. W.; Harris, I.

    1975-01-01

    The equations of horizontal motion of the neutral atmosphere between 120 and 500 km are integrated with the inclusion of all nonlinear terms of the convective derivative and the viscous forces due to vertical and horizontal velocity gradients. Empirical models of the distribution of neutral and charged particles are assumed to be known. The model of velocities developed is a steady state model. In Part I the mathematical method used in the integration of the Navier-Stokes equations is described and the various forces are analyzed. Results of the method given in Part I are presented with comparison with previous calculations and observations of upper atmospheric winds. Conclusions are that nonlinear effects are only significant in the equatorial region, especially at solstice conditions and that nonlinear effects do not produce any superrotation.

  17. Automated UMLS-Based Comparison of Medical Forms

    PubMed Central

    Dugas, Martin; Fritz, Fleur; Krumm, Rainer; Breil, Bernhard

    2013-01-01

    Medical forms are very heterogeneous: on a European scale there are thousands of data items in several hundred different systems. To enable data exchange for clinical care and research purposes there is a need to develop interoperable documentation systems with harmonized forms for data capture. A prerequisite in this harmonization process is comparison of forms. So far – to our knowledge – an automated method for comparison of medical forms is not available. A form contains a list of data items with corresponding medical concepts. An automatic comparison needs data types, item names and especially item with these unique concept codes from medical terminologies. The scope of the proposed method is a comparison of these items by comparing their concept codes (coded in UMLS). Each data item is represented by item name, concept code and value domain. Two items are called identical, if item name, concept code and value domain are the same. Two items are called matching, if only concept code and value domain are the same. Two items are called similar, if their concept codes are the same, but the value domains are different. Based on these definitions an open-source implementation for automated comparison of medical forms in ODM format with UMLS-based semantic annotations was developed. It is available as package compareODM from http://cran.r-project.org. To evaluate this method, it was applied to a set of 7 real medical forms with 285 data items from a large public ODM repository with forms for different medical purposes (research, quality management, routine care). Comparison results were visualized with grid images and dendrograms. Automated comparison of semantically annotated medical forms is feasible. Dendrograms allow a view on clustered similar forms. The approach is scalable for a large set of real medical forms. PMID:23861827

  18. Summary of Work for Joint Research Interchanges with DARWIN Integrated Product Team 1998

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus

    1999-01-01

    The intent of Stanford University's SciVis group is to develop technologies that enabled comparative analysis and visualization techniques for simulated and experimental flow fields. These techniques would then be made available under the Joint Research Interchange for potential injection into the DARWIN Workspace Environment (DWE). In the past, we have focused on techniques that exploited feature based comparisons such as shock and vortex extractions. Our current research effort focuses on finding a quantitative comparison of general vector fields based on topological features. Since the method relies on topological information, grid matching and vector alignment is not needed in the comparison. This is often a problem with many data comparison techniques. In addition, since only topology based information is stored and compared for each field, there is a significant compression of information that enables large databases to be quickly searched. This report will briefly (1) describe current technologies in the area of comparison techniques, (2) will describe the theory of our new method and finally (3) summarize a few of the results.

  19. Summary of Work for Joint Research Interchanges with DARWIN Integrated Product Team

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus

    1999-01-01

    The intent of Stanford University's SciVis group is to develop technologies that enabled comparative analysis and visualization techniques for simulated and experimental flow fields. These techniques would then be made available un- der the Joint Research Interchange for potential injection into the DARWIN Workspace Environment (DWE). In the past, we have focused on techniques that exploited feature based comparisons such as shock and vortex extractions. Our current research effort focuses on finding a quantitative comparison of general vector fields based on topological features. Since the method relies on topological information, grid matching an@ vector alignment is not needed in the comparison. This is often a problem with many data comparison techniques. In addition, since only topology based information is stored and compared for each field, there is a significant compression of information that enables large databases to be quickly searched. This report will briefly (1) describe current technologies in the area of comparison techniques, (2) will describe the theory of our new method and finally (3) summarize a few of the results.

  20. A symmetrical subtraction combined with interpolated values for eliminating scattering from fluorescence EEM data.

    PubMed

    Xu, Jing; Liu, Xiaofei; Wang, Yutian

    2016-08-05

    Parallel factor analysis is a widely used method to extract qualitative and quantitative information of the analyte of interest from fluorescence emission-excitation matrix containing unknown components. Big amplitude of scattering will influence the results of parallel factor analysis. Many methods of eliminating scattering have been proposed. Each of these methods has its advantages and disadvantages. The combination of symmetrical subtraction and interpolated values has been discussed. The combination refers to both the combination of results and the combination of methods. Nine methods were used for comparison. The results show the combination of results can make a better concentration prediction for all the components. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Models of convection-driven tectonic plates - A comparison of methods and results

    NASA Technical Reports Server (NTRS)

    King, Scott D.; Gable, Carl W.; Weinstein, Stuart A.

    1992-01-01

    Recent numerical studies of convection in the earth's mantle have included various features of plate tectonics. This paper describes three methods of modeling plates: through material properties, through force balance, and through a thin power-law sheet approximation. The results obtained are compared using each method on a series of simple calculations. From these results, scaling relations between the different parameterizations are developed. While each method produces different degrees of deformation within the surface plate, the surface heat flux and average plate velocity agree to within a few percent. The main results are not dependent upon the plate modeling method and herefore are representative of the physical system modeled.

  2. Comparison of Three Different Methods for Pile Integrity Testing on a Cylindrical Homogeneous Polyamide Specimen

    NASA Astrophysics Data System (ADS)

    Lugovtsova, Y. D.; Soldatov, A. I.

    2016-01-01

    Three different methods for pile integrity testing are proposed to compare on a cylindrical homogeneous polyamide specimen. The methods are low strain pile integrity testing, multichannel pile integrity testing and testing with a shaker system. Since the low strain pile integrity testing is well-established and standardized method, the results from it are used as a reference for other two methods.

  3. Validation of a simple, manual urinary iodine method for estimating the prevalence of iodine-deficiency disorders, and interlaboratory comparison with other methods.

    PubMed

    May, S L; May, W A; Bourdoux, P P; Pino, S; Sullivan, K M; Maberly, G F

    1997-05-01

    The measurement of urinary iodine in population-based surveys provides a biological indicator of the severity of iodine-deficiency disorders. We describe the steps performed to validate a simple, inexpensive, manual urinary iodine acid digestion method, and compare the results using this method with those of other urinary iodine methods. Initially, basic performance characteristics were evaluated: the average recovery of added iodine was 100.4 +/- 8.7% (mean +/- SD), within-assay precision (CV) over the assay range 0-0.95 mumol/L (0-12 micrograms/dL) was < 6%, between-assay precision over the same range was < 12%, and assay sensitivity was 0.05 mumol/L (0.6 microgram/dL). There were no apparent effects on the method by thiocyanate, a known interfering substance. In a comparison with five other methods performed in four different laboratories, samples were collected to test the method performance over a wide range of urinary iodine values (0.04-3.7 mumol/L, or 0.5-47 micrograms/dL). There was a high correlation between all methods and the interpretation of the results was consistent. We conclude that the simple, manual acid digestion method is suitable for urinary iodine analysis.

  4. Three Methods of Estimating a Model of Group Effects: A Comparison with Reference to School Effect Studies.

    ERIC Educational Resources Information Center

    Igra, Amnon

    1980-01-01

    Three methods of estimating a model of school effects are compared: ordinary least squares; an approach based on the analysis of covariance; and, a residualized input-output approach. Results are presented using a matrix algebra formulation, and advantages of the first two methods are considered. (Author/GK)

  5. Hesitant Fuzzy Linguistic Preference Utility Set and Its Application in Selection of Fire Rescue Plans

    PubMed Central

    Si, Guangsen; Xu, Zeshui

    2018-01-01

    Hesitant fuzzy linguistic term set provides an effective tool to represent uncertain decision information. However, the semantics corresponding to the linguistic terms in it cannot accurately reflect the decision-makers’ subjective cognition. In general, different decision-makers’ sensitivities towards the semantics are different. Such sensitivities can be represented by the cumulative prospect theory value function. Inspired by this, we propose a linguistic scale function to transform the semantics corresponding to linguistic terms into the linguistic preference values. Furthermore, we propose the hesitant fuzzy linguistic preference utility set, based on which, the decision-makers can flexibly express their distinct semantics and obtain the decision results that are consistent with their cognition. For calculations and comparisons over the hesitant fuzzy linguistic preference utility sets, we introduce some distance measures and comparison laws. Afterwards, to apply the hesitant fuzzy linguistic preference utility sets in emergency management, we develop a method to obtain objective weights of attributes and then propose a hesitant fuzzy linguistic preference utility-TOPSIS method to select the best fire rescue plan. Finally, the validity of the proposed method is verified by some comparisons of the method with other two representative methods including the hesitant fuzzy linguistic-TOPSIS method and the hesitant fuzzy linguistic-VIKOR method. PMID:29614019

  6. Hesitant Fuzzy Linguistic Preference Utility Set and Its Application in Selection of Fire Rescue Plans.

    PubMed

    Liao, Huchang; Si, Guangsen; Xu, Zeshui; Fujita, Hamido

    2018-04-03

    Hesitant fuzzy linguistic term set provides an effective tool to represent uncertain decision information. However, the semantics corresponding to the linguistic terms in it cannot accurately reflect the decision-makers' subjective cognition. In general, different decision-makers' sensitivities towards the semantics are different. Such sensitivities can be represented by the cumulative prospect theory value function. Inspired by this, we propose a linguistic scale function to transform the semantics corresponding to linguistic terms into the linguistic preference values. Furthermore, we propose the hesitant fuzzy linguistic preference utility set, based on which, the decision-makers can flexibly express their distinct semantics and obtain the decision results that are consistent with their cognition. For calculations and comparisons over the hesitant fuzzy linguistic preference utility sets, we introduce some distance measures and comparison laws. Afterwards, to apply the hesitant fuzzy linguistic preference utility sets in emergency management, we develop a method to obtain objective weights of attributes and then propose a hesitant fuzzy linguistic preference utility-TOPSIS method to select the best fire rescue plan. Finally, the validity of the proposed method is verified by some comparisons of the method with other two representative methods including the hesitant fuzzy linguistic-TOPSIS method and the hesitant fuzzy linguistic-VIKOR method.

  7. Characterization and taste-masking evaluation of acetaminophen granules: comparison between different preparation methods in a high-shear mixer.

    PubMed

    Albertini, Beatrice; Cavallari, Cristina; Passerini, Nadia; Voinovich, Dario; González-Rodríguez, Marisa L; Magarotto, Lorenzo; Rodriguez, Lorenzo

    2004-02-01

    The aim of this study was to prepare and to investigate acetaminophen taste-masked granules obtained in a high-shear mixer using three different wet granulation methods (method A: water granulation, method B: granulation with a polyvinylpyrrolidone (PVP) binding solution and method C: steam granulation). The studied formulation was: acetaminophen 15%, alpha-lactose monohydrate 30%, cornstarch 45%, polyvinylpyrrolidone K30 5% and orange flavour 5% (w/w). In vitro dissolution studies, performed at pH 6.8, showed that the steam granules had the lowest dissolution rate compared with the water and binding-solution granules; these results were then confirmed by their lower surface reactivity (D(R)) during the dissolution process. Moreover, the results of the gustatory sensation test performed by six volunteers confirmed the taste-masking effects of the granules, especially the steam granules (P<0.001). Morphological, fractal and porosity analyses were then performed to explain the dissolution profiles and the results of the gustatory sensation test. Scanning electron microscopy (SEM) analysis revealed that the steam granules had a smoother and more regular surface than the samples obtained using methods A and B; these results were also confirmed by their lower fractal dimension (D(s)) and porosity values. Finally, differential scanning calorimetry (DSC) results showed a shift of the melting point of the drug, which was due to the simple mixing of the components and not to the granulation processes. In conclusion, the steam granulation technique proved to be a suitable method for the purpose of this work, without modifying the availability of the drug.

  8. Comparison between stochastic and machine learning methods for hydrological multi-step ahead forecasting: All forecasts are wrong!

    NASA Astrophysics Data System (ADS)

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2017-04-01

    Machine learning (ML) is considered to be a promising approach to hydrological processes forecasting. We conduct a comparison between several stochastic and ML point estimation methods by performing large-scale computational experiments based on simulations. The purpose is to provide generalized results, whereas the respective comparisons in the literature are usually based on case studies. The stochastic methods used include simple methods as well as models from the frequently used families of Autoregressive Moving Average (ARMA), Autoregressive Fractionally Integrated Moving Average (ARFIMA) and Exponential Smoothing models. The ML methods used are Random Forests (RF), Support Vector Machines (SVM) and Neural Networks (NN). The comparison refers to the multi-step ahead forecasting properties of the methods. A total of 20 methods are used, 9 of which are ML methods. 12 simulation experiments are performed, each using 2000 simulated time series of 310 observations. The time series are simulated using stochastic processes from the families of ARMA and ARFIMA models. Each time series is split into a fitting set (first 300 observations) and a testing set (last 10 observations). The comparative assessment of the methods is based on 18 metrics that quantify the methods' performance according to several criteria related to the accurate forecasting of the testing set, the capturing of its variation and the correlation between the testing and forecasted values. The most important outcome of this study is that no method is uniformly better or worse than the others. However, there are methods that are regularly better or worse than others with respect to specific metrics. It appears that, although a general ranking of the methods is not possible, their classification based on their similar or contrasting performance in the various metrics is possible to some extent. Another important conclusion is that more sophisticated methods do not necessarily provide better forecasts than simpler methods. It is pointed out that the ML methods do not differ dramatically from the stochastic methods, and it is interesting that the NN, RF and SVM algorithms used in this study offer potentially very good performance in terms of accuracy. It should be noted that, although this study focuses on hydrological processes, the results are of general scientific interest. Another important point of this study is the use of several methods and metrics. Using fewer methods and fewer metrics would have led to a very different overall picture, particularly if those fewer metrics corresponded to fewer criteria. For this reason, we consider the proposed methodology appropriate for the evaluation of forecasting methods.
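The fitting/testing split described above can be sketched minimally as follows; the AR(1) simulator, the least-squares fit, and the naive last-value benchmark are illustrative stand-ins, not the study's actual 20 methods or 18 metrics:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(phi, n):
    """Simulate a zero-mean AR(1) process x_t = phi * x_{t-1} + e_t."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

series = simulate_ar1(0.8, 310)
fit, test = series[:300], series[300:]   # fitting and testing sets, as in the study

# Naive method: repeat the last observed value for every lead time.
naive_fc = np.full(10, fit[-1])

# AR(1) fitted by least squares, then iterated for a 10-step-ahead forecast.
phi_hat = np.dot(fit[:-1], fit[1:]) / np.dot(fit[:-1], fit[:-1])
ar_fc = fit[-1] * phi_hat ** np.arange(1, 11)

rmse = lambda fc: float(np.sqrt(np.mean((fc - test) ** 2)))
```

Repeating this over many simulated series and several error metrics, rather than a single realization, is what allows the generalized ranking statements made in the abstract.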

  9. Investigation of Multiphase Flow in a Packed Bed Reactor Under Microgravity Conditions

    NASA Technical Reports Server (NTRS)

    Lian, Yongsheng; Motil, Brian; Rame, Enrique

    2016-01-01

    In this paper we study two-phase flow phenomena in a packed bed reactor using an integrated experimental and numerical method. The cylindrical bed is filled with uniformly sized spheres. In the experiment, water and air are injected into the bed simultaneously, and the pressure distribution along the bed will be measured. The numerical simulation is based on a two-phase flow solver which solves the Navier-Stokes equations on Cartesian grids. A novel coupled level set and moment-of-fluid method is used to construct the interface. A sequential method is used to position the spheres in the cylinder. Preliminary experimental results showed that the tested flow rates resulted in pulse flow. The numerical simulation revealed that air bubbles could merge into larger bubbles and also break up into smaller bubbles to pass through the pores in the bed. Preliminary results showed that the flow preferentially passed through regions where the porosity is high. Comparison between the experimental and numerical results in terms of pressure distributions at different flow injection rates will be conducted, and a comparison of the flow phenomena under terrestrial gravity and microgravity will be made.

  10. A comparison of statistical methods for evaluating matching performance of a biometric identification device: a preliminary report

    NASA Astrophysics Data System (ADS)

    Schuckers, Michael E.; Hawley, Anne; Livingstone, Katie; Mramba, Nona

    2004-08-01

    Confidence intervals are an important way to assess and estimate a parameter. In the case of biometric identification devices, several approaches to confidence intervals for an error rate have been proposed. Here we evaluate six of these methods. To complete this evaluation, we simulate data from a wide variety of parameter values. These data are simulated via a correlated binary distribution. We then determine how well these methods do at what they say they do: capturing the parameter inside the confidence interval. In addition, the average widths of the various confidence intervals are recorded for each set of parameters. The complete results of this simulation are presented graphically for easy comparison. We conclude by making a recommendation regarding which method performs best.
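The coverage-and-width evaluation described above can be sketched as follows. For simplicity this illustration uses independent (not correlated) binary data and compares just two textbook interval methods, Wald and Wilson, rather than the six methods of the paper:

```python
import numpy as np

def wald_ci(k, n, z=1.96):
    """Textbook Wald (normal-approximation) interval for a binomial proportion."""
    p = k / n
    half = z * np.sqrt(p * (1 - p) / n)
    return p - half, p + half

def wilson_ci(k, n, z=1.96):
    """Wilson score interval, known to have better coverage for small error rates."""
    p = k / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = (z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))) / (1 + z**2 / n)
    return centre - half, centre + half

def evaluate(ci_fn, p_true=0.05, n=200, trials=2000, seed=1):
    """Empirical coverage and mean width of a CI method over simulated datasets."""
    rng = np.random.default_rng(seed)   # same seed -> same draws for each method
    ks = rng.binomial(n, p_true, size=trials)
    lows, highs = np.array([ci_fn(k, n) for k in ks]).T
    coverage = float(np.mean((lows <= p_true) & (p_true <= highs)))
    width = float(np.mean(highs - lows))
    return coverage, width
```

With a small true error rate such as 0.05 and n = 200, the Wald interval noticeably undercovers while Wilson stays near the nominal 95%, which is exactly the kind of difference such a simulation study is designed to expose.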

  11. Initial test results with single cylinder rhombic drive Stirling engine

    NASA Technical Reports Server (NTRS)

    Cairelli, J. E.

    1978-01-01

    A brief description is given of the GPU 3-2 hardware, the test methods used, and the results of these tests. Comparison is made to unpublished data from similar hydrogen tests performed by the U.S. Army.

  12. International time and frequency comparison using very long baseline interferometer

    NASA Astrophysics Data System (ADS)

    Hama, Shinichi; Yoshino, Taizoh; Kiuchi, Hitoshi; Morikawa, Takao; Sato, Tokuo

    VLBI time comparison experiments using the Kashima station of the Radio Research Laboratory and the Richmond and Maryland Point stations of the U.S. Naval Observatory have been performed since April 1985. Precisions of 0.2 ns for the clock offset and 0.2 ps/s for the clock rate have been achieved, and good agreement has been found with GPS results for the clock offset. VLBI time and frequency comparison has proved much more precise than is possible with conventional portable-clock or Loran-C methods.

  13. Comparison of ALE and SPH Methods for Simulating Mine Blast Effects on Structures

    DTIC Science & Technology

    2010-12-01

    Comparison of ALE and SPH methods for simulating mine blast effects on structures. Geneviève Toussaint and Amal Bouamoul, Defence R&D Canada – Valcartier, Technical Report DRDC Valcartier TR 2010-326, December 2010.

  14. Comparison of microwave hydrodistillation and solvent-free microwave extraction of essential oil from Melaleuca leucadendra Linn

    NASA Astrophysics Data System (ADS)

    Ismanto, A. W.; Kusuma, H. S.; Mahfud, M.

    2017-12-01

    The comparison of solvent-free microwave extraction (SFME) and microwave hydrodistillation (MHD) for the extraction of essential oil from Melaleuca leucadendra Linn. was examined. Dry cajuput leaves were used in this study. A further purpose of this study was to determine the optimal operating condition (microwave power). The relative electric consumption of the SFME and MHD methods was 0.1627 kWh/g and 0.3279 kWh/g, respectively. The results showed that the solvent-free microwave extraction method is able to reduce energy consumption and can be regarded as a green technique for the extraction of cajuput oil.

  15. An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1991-01-01

    The three-dimensional quasi-analytical sensitivity analysis and the ancillary driver programs needed to carry out the studies and perform comparisons were developed. The code is essentially contained in one unified package which includes the following: (1) a three-dimensional transonic wing analysis program (ZEBRA); (2) a quasi-analytical portion which determines the matrix elements in the quasi-analytical equations; (3) a method for computing the sensitivity coefficients from the resulting quasi-analytical equations; (4) a package to determine sensitivity coefficients via the finite-difference approach for comparison purposes; and (5) a graphics package.

  16. A PLS-based extractive spectrophotometric method for simultaneous determination of carbamazepine and carbamazepine-10,11-epoxide in plasma and comparison with HPLC

    NASA Astrophysics Data System (ADS)

    Hemmateenejad, Bahram; Rezaei, Zahra; Khabnadideh, Soghra; Saffari, Maryam

    2007-11-01

    Carbamazepine (CBZ) undergoes enzymatic biotransformation through epoxidation with the formation of its metabolite, carbamazepine-10,11-epoxide (CBZE). A simple chemometrics-assisted spectrophotometric method has been proposed for the simultaneous determination of CBZ and CBZE in plasma. A liquid extraction procedure was used to separate the analytes from plasma, and the UV absorbance spectra of the resulting solutions were subjected to partial least squares (PLS) regression. The optimum number of PLS latent variables was selected according to the PRESS values of leave-one-out cross-validation. An HPLC method was also employed for comparison. The respective mean recoveries for the analysis of CBZ and CBZE in synthetic mixtures were 102.57 (±0.25)% and 103.00 (±0.09)% for PLS, and 99.40 (±0.15)% and 102.20 (±0.02)% for HPLC. The concentrations of CBZ and CBZE were also determined in five patients using the PLS and HPLC methods. The results showed that the data obtained by PLS were comparable with those obtained by the HPLC method.

  17. Histological Methods for ex vivo Axon Tracing: A Systematic Review

    PubMed Central

    Heilingoetter, Cassandra L.; Jensen, Matthew B.

    2016-01-01

    Objectives Axon tracers provide crucial insight into the development, connectivity, and function of neural pathways. A tracer can be characterized as a substance that allows for the visualization of a neuronal pathway. Axon tracers have previously been used exclusively in in vivo studies; however, newer methods of axon tracing can be applied to ex vivo studies. Ex vivo studies involve the examination of cells or tissues retrieved from an organism. These post mortem methods of axon tracing offer several advantages, such as reaching inaccessible tissues and avoiding survival surgeries. Methods In order to evaluate the quality of the ex vivo tracing methods, we performed a systematic review of various experimental and comparison studies to discern the optimal method of axon tracing. Results The most prominent methods for ex vivo tracing involve enzymatic techniques or various dyes. A variety of techniques and conditions tend to give better fluorescent character, clarity, and distance traveled in the neuronal pathway. We found direct comparison studies that looked at variables such as the type of tracer, time required, effect of temperature, and presence of calcium; however, other variables have not been compared directly. Discussion We conclude that a variety of promising tracing methods are available depending on the experimental goals of the researcher; however, more direct comparison studies are needed to affirm the optimal method. PMID:27098542

  18. Modification of a successive corrections objective analysis for improved higher order calculations

    NASA Technical Reports Server (NTRS)

    Achtemeier, Gary L.

    1988-01-01

    The use of objectively analyzed fields of meteorological data for the initialization of numerical prediction models and for complex diagnostic studies places the requirements upon the objective method that derivatives of the gridded fields be accurate and free from interpolation error. A modification was proposed for an objective analysis developed by Barnes that provides improvements in analysis of both the field and its derivatives. Theoretical comparisons, comparisons between analyses of analytical monochromatic waves, and comparisons between analyses of actual weather data are used to show the potential of the new method. The new method restores more of the amplitudes of desired wavelengths while simultaneously filtering more of the amplitudes of undesired wavelengths. These results also hold for the first and second derivatives calculated from the gridded fields. Greatest improvements were for the Laplacian of the height field; the new method reduced the variance of undesirable very short wavelengths by 72 percent. Other improvements were found in the divergence of the gridded wind field and near the boundaries of the field of data.
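A minimal one-dimensional sketch of the two-pass successive-corrections idea behind a Barnes-type analysis follows; the Gaussian weight form and the kappa and gamma values are illustrative assumptions, not Achtemeier's exact modification:

```python
import numpy as np

def barnes_pass(x_obs, f_vals, x_grid, kappa):
    """Single Barnes pass: Gaussian-weighted average of station values onto a grid."""
    w = np.exp(-((x_grid[:, None] - x_obs[None, :]) ** 2) / kappa)
    return (w @ f_vals) / w.sum(axis=1)

def barnes_analysis(x_obs, f_obs, x_grid, kappa=0.5, gamma=0.3):
    """Two-pass Barnes objective analysis.

    The first pass smooths the observations onto the grid; the correction pass
    interpolates the observation-point residuals with a sharper weight
    (kappa * gamma), restoring amplitude at shorter wavelengths.
    """
    first = barnes_pass(x_obs, f_obs, x_grid, kappa)
    resid = f_obs - barnes_pass(x_obs, f_obs, x_obs, kappa)
    return first + barnes_pass(x_obs, resid, x_grid, kappa * gamma)

# Hypothetical example: recover a monochromatic wave from scattered samples.
x_obs = np.linspace(0, 2 * np.pi, 40)
f_obs = np.sin(3 * x_obs)
x_grid = np.linspace(0, 2 * np.pi, 100)
two_pass = barnes_analysis(x_obs, f_obs, x_grid)
one_pass = barnes_pass(x_obs, f_obs, x_grid, 0.5)
```

Comparing the one-pass and two-pass fields against the known wave shows the amplitude restoration at the analyzed wavelength that the abstract describes, and the same effect carries over to derivatives computed from the gridded field.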

  19. Indirect scaling methods for testing quantitative emotion theories.

    PubMed

    Junge, Martin; Reisenzein, Rainer

    2013-01-01

    Two studies investigated the utility of indirect scaling methods, based on graded pair comparisons, for the testing of quantitative emotion theories. In Study 1, we measured the intensity of relief and disappointment caused by lottery outcomes, and in Study 2, the intensity of disgust evoked by pictures, using both direct intensity ratings and graded pair comparisons. The stimuli were systematically constructed to reflect variables expected to influence the intensity of the emotions according to theoretical models of relief/disappointment and disgust, respectively. Two probabilistic scaling methods were used to estimate scale values from the pair comparison judgements: Additive functional measurement (AFM) and maximum likelihood difference scaling (MLDS). The emotion models were fitted to the direct and indirect intensity measurements using nonlinear regression (Study 1) and analysis of variance (Study 2). Both studies found substantially improved fits of the emotion models for the indirectly determined emotion intensities, with their advantage being evident particularly at the level of individual participants. The results suggest that indirect scaling methods yield more precise measurements of emotion intensity than rating scales and thereby provide stronger tests of emotion theories in general and quantitative emotion theories in particular.

  20. An unusual method of forensic human identification: use of selfie photographs.

    PubMed

    Miranda, Geraldo Elias; Freitas, Sílvia Guzella de; Maia, Luiza Valéria de Abreu; Melani, Rodolfo Francisco Haltenhoff

    2016-06-01

    As with other methods of identification, in forensic odontology, antemortem data are compared with postmortem findings. In the absence of dental documentation, photographs of the smile play an important role in this comparison. As yet, there are no reports of the use of selfie photographs for identification purposes. Owing to advancements in technology, electronic devices, and social networks, this type of photograph has become increasingly common. This paper describes a case in which selfie photographs were used to identify a carbonized body, by using the smile line and image superimposition. This low-cost, rapid, and easy-to-analyze technique provides highly reliable results. Nevertheless, there are disadvantages, such as the limited number of teeth visible in a photograph, low image quality, the possibility of morphological changes in the teeth after the antemortem image was taken, and the difficulty of making comparisons depending on the orientation of the photo. In forensic odontology, new methods of identification must be sought to accompany technological evolution, particularly when no traditional methods of comparison, such as clinical record charts or radiographs, are available.

  1. Top-of-atmosphere radiative fluxes - Validation of ERBE scanner inversion algorithm using Nimbus-7 ERB data

    NASA Technical Reports Server (NTRS)

    Suttles, John T.; Wielicki, Bruce A.; Vemury, Sastri

    1992-01-01

    The ERBE algorithm is applied to the Nimbus-7 earth radiation budget (ERB) scanner data for June 1979 to analyze the performance of an inversion method in deriving top-of-atmosphere albedos and longwave radiative fluxes. The performance is assessed by comparing ERBE algorithm results with appropriate results derived using the sorting-by-angular-bins (SAB) method, the ERB MATRIX algorithm, and the 'new-cloud ERB' (NCLE) algorithm. Comparisons are made for top-of-atmosphere albedos, longwave fluxes, viewing zenith-angle dependence of derived albedos and longwave fluxes, and cloud fractional coverage. Using the SAB method as a reference, the rms accuracy of the monthly average ERBE-derived results is estimated to be 0.0165 (5.6 W/sq m) for albedos (shortwave fluxes) and 3.0 W/sq m for longwave fluxes. The ERBE-derived results were found to depend systematically on the viewing zenith angle, varying by about 10 percent for albedos and by 6-7 percent for longwave fluxes from near nadir to near the limb. Analyses indicated that the ERBE angular models are the most likely source of the systematic angular dependences. Comparison of the ERBE-derived cloud fractions, based on a maximum-likelihood estimation method, with results from the NCLE showed agreement within about 10 percent.

  2. InSAR Tropospheric Correction Methods: A Statistical Comparison over Different Regions

    NASA Astrophysics Data System (ADS)

    Bekaert, D. P.; Walters, R. J.; Wright, T. J.; Hooper, A. J.; Parker, D. J.

    2015-12-01

    Observing small magnitude surface displacements through InSAR is highly challenging and requires advanced correction techniques to reduce noise. In fact, one of the largest obstacles facing the InSAR community is tropospheric noise correction. Spatial and temporal variations in temperature, pressure, and relative humidity result in a spatially variable InSAR tropospheric signal, which masks smaller surface displacements due to tectonic or volcanic deformation. Correction methods applied today include those relying on weather model data, GNSS and/or spectrometer data. Unfortunately, these methods are often limited by the spatial and temporal resolution of the auxiliary data. Alternatively, a correction can be estimated from the high-resolution interferometric phase by assuming a linear or a power-law relationship between the phase and topography. For these methods, the challenge lies in separating deformation from tropospheric signals. We will present results of a statistical comparison of the state-of-the-art tropospheric corrections estimated from spectrometer products (MERIS and MODIS), a low and a high spatial-resolution weather model (ERA-I and WRF), and both the conventional linear and power-law empirical methods. We evaluate the correction capability over Southern Mexico, Italy, and El Hierro, and investigate the impact of increasing cloud cover on the accuracy of the tropospheric delay estimation. We find that each method has its strengths and weaknesses, and suggest that further developments should aim to combine different correction methods. All the presented methods are included in our new open-source software package, TRAIN - Toolbox for Reducing Atmospheric InSAR Noise, which is available to the community (Bekaert, D., R. Walters, T. Wright, A. Hooper, and D. Parker, in review, Statistical comparison of InSAR tropospheric correction techniques, Remote Sensing of Environment).

  3. Comparison of three noninvasive methods for hemoglobin screening of blood donors.

    PubMed

    Ardin, Sergey; Störmer, Melanie; Radojska, Stela; Oustianskaia, Larissa; Hahn, Moritz; Gathof, Birgit S

    2015-02-01

    To prevent phlebotomy of anemic individuals and to ensure the hemoglobin (Hb) content of the blood units, Hb screening of blood donors before donation is essential. Hb values are mostly evaluated by measurement of capillary blood obtained from a fingerstick. Rapid noninvasive methods have recently become available and may be preferred by donors and staff. The aim of this study was to evaluate for the first time all the different noninvasive methods for Hb screening. Blood donors were screened for Hb levels in three different trials using three different noninvasive methods (Haemospect [MBR Optical Systems GmbH & Co. KG], NBM 200 [LMB Technology GmbH], Pronto-7 [Masimo Europe Ltd]) in comparison to the established fingerstick method (CompoLab Hb [Fresenius Kabi GmbH]) and to levels obtained from venous samples on a cell counter (Sysmex [Sysmex Europe GmbH]) as reference. The usability of the noninvasive methods was assessed with an especially developed survey. Technical failures occurred with the Pronto-7 due to nail polish, skin color, or ambient light. The NBM 200 also showed a high sensitivity to ambient light and noticeably lower Hb levels for women than obtained from the Sysmex. The statistical analysis showed the following bias and standard deviation of differences of all methods in comparison to the venous results: Haemospect, -0.22 ± 1.24; NBM 200, -0.12 ± 1.14; Pronto-7, -0.50 ± 0.99; and CompoLab Hb, -0.53 ± 0.81. Noninvasive Hb tests represent an attractive alternative by eliminating pain and reducing risks of blood contamination. The main problem for generating reliable results seems to be preanalytical variability in sampling. Despite the sensitivity to environmental stress, all methods are suitable for Hb measurement.

  4. A Study of Flow Separation in Transonic Flow Using Inviscid and Viscous Computational Fluid Dynamics (CFD) Schemes

    NASA Technical Reports Server (NTRS)

    Rhodes, J. A.; Tiwari, S. N.; Vonlavante, E.

    1988-01-01

    A comparison of flow separation in transonic flows is made using various computational schemes which solve the Euler and the Navier-Stokes equations of fluid mechanics. The flows examined are computed using several simple two-dimensional configurations, including a backward-facing step and a bump in a channel. A comparison of the results obtained using shock-fitting and flux-vector-splitting methods is presented, and the results obtained using the Euler codes are compared to results on the same configurations using a code which solves the Navier-Stokes equations.

  5. Ground shake test of the UH-60A helicopter airframe and comparison with NASTRAN finite element model predictions

    NASA Technical Reports Server (NTRS)

    Howland, G. R.; Durno, J. A.; Twomey, W. J.

    1990-01-01

    Sikorsky Aircraft, together with the other major helicopter airframe manufacturers, is engaged in a study to improve the use of finite element analysis to predict the dynamic behavior of helicopter airframes, under a rotorcraft structural dynamics program called DAMVIBS (Design Analysis Methods for VIBrationS), sponsored by NASA Langley. The test plan and test results are presented for a shake test of the UH-60A BLACK HAWK helicopter. A comparison is also presented of test results with results obtained from analysis using a NASTRAN finite element model.

  6. Sensitivity and comparison evaluation of Saturn 5 liquid penetrants

    NASA Technical Reports Server (NTRS)

    Jones, G. H.

    1973-01-01

    Results of a sensitivity and comparison evaluation performed on six liquid penetrants that were used on the Saturn 5 vehicle and other space hardware to detect surface discontinuities are described. The relationship between penetrant materials and crack definition capabilities, the optimum penetrant materials evaluation method, and the optimum measurement methods for crack dimensions were investigated. A unique method of precise developer thickness control was developed, utilizing clear radiographic film and a densitometer. The method of evaluation used five aluminum alloy (2219-T87) specimens that were heated and then quenched in cold water to produce cracks. The six penetrants were then applied, one at a time, and the crack indications were counted and recorded for each penetrant for comparison purposes. Measurements were made by determining the visual crack indications per linear inch and then sectioning the specimens for a metallographic count of the cracks present. This method provided a numerical approach for assigning a sensitivity index number to the penetrants. Of the six penetrants evaluated, two were not satisfactory (one was not sufficiently sensitive and the other was too sensitive, giving false indications). The other four were satisfactory, with approximately the same sensitivity, in the range of 78 to 80.5 percent of total cracks detected.

  7. Fetal growth and psychiatric and socioeconomic problems: population-based sibling comparison

    PubMed Central

    Class, Quetzal A.; Rickert, Martin E.; Larsson, Henrik; Lichtenstein, Paul; D’Onofrio, Brian M.

    2014-01-01

    Background It is unclear whether associations between fetal growth and psychiatric and socioeconomic problems are consistent with causal mechanisms. Aims To estimate the extent to which associations are a result of unmeasured confounding factors using a sibling-comparison approach. Method We predicted outcomes from continuously measured birth weight in a Swedish population cohort (n = 3 291 773), while controlling for measured and unmeasured confounding. Results In the population, lower birth weight (⩽2500 g) increased the risk of all outcomes. Sibling-comparison models indicated that lower birth weight independently predicted increased risk for autism spectrum disorder (hazard ratio for low birth weight = 2.44, 95% CI 1.99-2.97) and attention-deficit hyperactivity disorder. Although attenuated, associations remained for psychotic or bipolar disorder and educational problems. Associations with suicide attempt, substance use problems and social welfare receipt, however, were fully attenuated in sibling comparisons. Conclusions Results suggest that fetal growth, and factors that influence it, contribute to psychiatric and socioeconomic problems. PMID:25257067

  8. Time evolution of an SLR reference frame

    NASA Astrophysics Data System (ADS)

    Angermann, D.; Gerstl, M.; Kelm, R.; Müller, H.; Seemüller, W.; Vei, M.

    2002-07-01

    On the basis of LAGEOS-1 and LAGEOS-2 data we computed a 10-year (1990-2000) solution for SLR station positions and velocities. The paper describes the data processing with the DGFI software package DOGS. We present results for station coordinates and their time variation for 41 stations of the global SLR network, and discuss the stability and time evolution of the SLR reference frame established in this way. We applied different methods to assess the quality and consistency of the SLR results. The results presented in this paper include: (1) a time series of weekly estimated station coordinates; (2) a comparison of the 10-year LAGEOS-1 and LAGEOS-2 solutions; (3) a comparison of 2.5-year solutions with the combined 10-year solution to assess the internal stability and the time evolution of the SLR reference frame; (4) a comparison of the SLR reference frame with ITRF97; and (5) a comparison of SLR station velocities with those of ITRF97 and NNR NUVEL-1A.

  9. Unsupervised change detection in a particular vegetation land cover type using spectral angle mapper

    NASA Astrophysics Data System (ADS)

    Renza, Diego; Martinez, Estibaliz; Molina, Iñigo; Ballesteros L., Dora M.

    2017-04-01

    This paper presents a new unsupervised change detection methodology for multispectral images applied to specific land covers. The proposed method compares each image against a reference spectrum, obtained from the spectral signature of the land cover type to be detected. The method has been tested using multispectral images (SPOT5) of the community of Madrid (Spain) and multispectral images (Quickbird) of an area of Indonesia that was impacted by the December 26, 2004 tsunami; here, the tests have focused on the detection of changes in vegetation. The image comparison is obtained by applying the Spectral Angle Mapper between the reference spectrum and each multitemporal image. A threshold is then applied to produce a single change image corresponding to the vegetation zones. The results for each multitemporal image are combined through an exclusive-or (XOR) operation that selects vegetation zones that have changed over time. Finally, the derived results were compared against a supervised method based on classification with the Support Vector Machine. Furthermore, the NDVI-differencing and the Spectral Angle Mapper techniques were selected as unsupervised methods for comparison purposes. The main novelty of the method consists in the detection of changes in a specific land cover type (vegetation); therefore, the best scenario for comparison is against methods that also aim to detect changes in a specific land cover type. This is the main reason for selecting the NDVI-based method and the post-classification method (SVM implemented in a standard software tool). To evaluate the improvements from using a reference spectrum vector, the results are also compared with the basic SAM method. For the SPOT5 image, the overall accuracy was 99.36% and the κ index was 90.11%; for the Quickbird image, the overall accuracy was 97.5% and the κ index was 82.16%. Finally, the precision results of the method are comparable to those of a supervised method, supported by low detection of false positives and false negatives, along with a high overall accuracy and a high kappa index. On the other hand, the execution times were comparable to those of unsupervised methods of low computational load.
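The core comparison step, the spectral angle between each pixel and the reference spectrum, followed by thresholding and an XOR across dates, can be sketched as follows; the reference spectrum, threshold, and band count below are hypothetical, not the paper's SPOT5/Quickbird values:

```python
import numpy as np

def spectral_angle(image, ref):
    """Per-pixel spectral angle (radians) between an image and a reference spectrum.

    image: (rows, cols, bands) array; ref: (bands,) reference spectrum.
    """
    dot = np.tensordot(image, ref, axes=([2], [0]))
    norms = np.linalg.norm(image, axis=2) * np.linalg.norm(ref)
    return np.arccos(np.clip(dot / norms, -1.0, 1.0))

def vegetation_change(img_t1, img_t2, ref, threshold=0.2):
    """Pixels that match the reference cover at one date but not the other.

    A pixel is labelled vegetation when its angle to the reference spectrum is
    below the threshold; the XOR keeps pixels whose label changed over time.
    """
    veg1 = spectral_angle(img_t1, ref) < threshold
    veg2 = spectral_angle(img_t2, ref) < threshold
    return veg1 ^ veg2

ref = np.array([0.1, 0.2, 0.6])           # hypothetical vegetation signature
t1 = np.tile(ref * 2.0, (2, 2, 1))        # all pixels start as vegetation
t2 = t1.copy()
t2[0, :, :] = np.array([0.6, 0.2, 0.1])   # top row changes to a different cover
change = vegetation_change(t1, t2, ref)
```

Note that the angle is insensitive to overall brightness (t1 uses the reference scaled by 2), which is why SAM can flag a change of cover type rather than a mere illumination change.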

  10. The Isprs Benchmark on Indoor Modelling

    NASA Astrophysics Data System (ADS)

    Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.

    2017-09-01

    Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in literature, a comparison of the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.

  11. Ab initio analytical Raman intensities for periodic systems through a coupled perturbed Hartree-Fock/Kohn-Sham method in an atomic orbital basis. II. Validation and comparison with experiments

    NASA Astrophysics Data System (ADS)

    Maschio, Lorenzo; Kirtman, Bernard; Rérat, Michel; Orlando, Roberto; Dovesi, Roberto

    2013-10-01

    In this work, we validate a new, fully analytical method for calculating Raman intensities of periodic systems, developed and presented in Paper I [L. Maschio, B. Kirtman, M. Rérat, R. Orlando, and R. Dovesi, J. Chem. Phys. 139, 164101 (2013)]. Our validation of this method and its implementation in the CRYSTAL code is done through several internal checks as well as comparison with experiment. The internal checks include consistency of results when increasing the number of periodic directions (from 0D to 1D, 2D, 3D), comparison with numerical differentiation, and a test of the sum rule for derivatives of the polarizability tensor. The choice of basis set as well as the Hamiltonian is also studied. Simulated Raman spectra of α-quartz and of the UiO-66 Metal-Organic Framework are compared with the experimental data.

  12. [Digital vs. analog hearing aids for children. Is there a method for making an objective comparison possible?].

    PubMed

    Prinz, I; Nubel, K; Gross, M

    2002-09-01

    Until now, the assumed benefits of digital hearing aids have been reflected only in subjective reports by patients with hearing aids, and could not be documented adequately by routine diagnostic methods. Seventeen schoolchildren with moderately severe, bilateral, symmetrical sensorineural hearing loss were examined in a double-blinded crossover study. Differences in performance between a fully digital hearing aid (DigiFocus compact/Oticon) and an analog, digitally programmable two-channel hearing aid were evaluated. Of the 17 children, 13 chose the digital and 4 the analog hearing aid. In contrast to the clear subjective preference for the fully digital hearing aid, we could not obtain any significant results with routine diagnostic methods. Using the "virtual hearing aid," a subjective comparison and a speech recognition performance task yielded significant differences. The virtual hearing aid proved suitable for a direct comparison of different hearing aids and can be used for double-blind testing in a pediatric population.

  13. Batchwise dyeing of bamboo cellulose fabric with reactive dye using ultrasonic energy.

    PubMed

    Larik, Safdar Ali; Khatri, Awais; Ali, Shamshad; Kim, Seong Hun

    2015-05-01

    Bamboo is a regenerated cellulose fiber usually dyed with reactive dyes. This paper presents results of the batchwise dyeing of bamboo fabric with reactive dyes by ultrasonic (US) and conventional (CN) dyeing methods. The study focused on comparing the two methods for dyeing results, chemicals, temperature and time, and effluent quality. Two widely used dyes, CI Reactive Black 5 (bis-sulphatoethylsulphone) and CI Reactive Red 147 (difluorochloropyrimidine), were used in the study. The US dyeing method produced around 5-6% higher color yield (K/S) in comparison to the CN dyeing method. Significant savings were realized in fixation temperature (10°C) and time (15 min), and in the amounts of salt (10 g/L) and alkali (0.5-1% on mass of fiber). Moreover, the dyeing effluent showed considerable reductions in total dissolved solids content (minimum around 29%) and in chemical oxygen demand (minimum around 13%) for the US dyebath in comparison to the CN dyebath. Colorfastness tests demonstrated similar results for the US and CN dyeing methods. Examination by field-emission scanning electron microscopy revealed that the US energy did not alter the surface morphology of the bamboo fibers. It was concluded that US dyeing of bamboo fabric produces better dyeing results and is a more economical and environmentally sustainable method than the CN dyeing method. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Comparison of normalization methods for differential gene expression analysis in RNA-Seq experiments

    PubMed Central

    Maza, Elie; Frasse, Pierre; Senin, Pavel; Bouzayen, Mondher; Zouine, Mohamed

    2013-01-01

    In recent years, RNA-Seq technologies have become a powerful tool for transcriptome studies. However, computational methods dedicated to the analysis of high-throughput sequencing data have yet to be standardized. In particular, it is known that the choice of normalization procedure leads to great variability in the results of differential gene expression analysis. The present study compares the most widespread normalization procedures and proposes a novel one aimed at removing an inherent bias of the studied transcriptomes related to their relative size. Comparisons of the normalization procedures are performed on real and simulated data sets. Analyses of real RNA-Seq data sets with the different normalization methods show that only 50% of significantly differentially expressed genes are common across methods. This result highlights the influence of the normalization step on the differential expression analysis. Real and simulated data sets give similar results, showing three groups of procedures with the same behavior. The group including the novel method, named "Median Ratio Normalization" (MRN), yields the lowest number of false discoveries. Within this group, the MRN method is less sensitive to modification of parameters related to the relative size of transcriptomes, such as the number of down- and upregulated genes and the gene expression levels. The newly proposed MRN method efficiently deals with the intrinsic bias resulting from the relative size of the studied transcriptomes. Validation with real and simulated data sets confirmed that MRN is more consistent and robust than existing methods. PMID:26442135
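The median-of-ratios idea underlying this family of normalizations can be sketched as below. This follows the widely used DESeq-style estimator; the MRN variant proposed in the paper differs in detail, so treat this as a generic illustration rather than the authors' exact procedure.

```python
import numpy as np

def median_of_ratios_factors(counts):
    """Per-sample size factors via the median-of-ratios idea (DESeq-style).
    counts: (genes, samples) array of raw read counts."""
    counts = np.asarray(counts, dtype=float)
    with np.errstate(divide="ignore"):
        log_counts = np.log(counts)
    # Pseudo-reference: geometric mean per gene (genes with a zero are dropped).
    keep = np.all(np.isfinite(log_counts), axis=1)
    log_geo_mean = log_counts[keep].mean(axis=1)
    # Size factor: median ratio of each sample's counts to the reference.
    return np.exp(np.median(log_counts[keep] - log_geo_mean[:, None], axis=0))
```

Dividing each sample's counts by its size factor puts libraries of different depth on a common scale before differential expression testing.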

  15. Measurement Equivalence in ADL and IADL Difficulty Across International Surveys of Aging: Findings From the HRS, SHARE, and ELSA

    PubMed Central

    Kasper, Judith D.; Brandt, Jason; Pezzin, Liliana E.

    2012-01-01

    Objective. To examine the measurement equivalence of items on disability across three international surveys of aging. Method. Data for persons aged 65 and older were drawn from the Health and Retirement Survey (HRS, n = 10,905), English Longitudinal Study of Aging (ELSA, n = 5,437), and Survey of Health, Ageing and Retirement in Europe (SHARE, n = 13,408). Differential item functioning (DIF) was assessed using item response theory (IRT) methods for activities of daily living (ADL) and instrumental activities of daily living (IADL) items. Results. HRS and SHARE exhibited measurement equivalence, but 6 of 11 items in ELSA demonstrated meaningful DIF. At the scale level, this item-level DIF affected scores reflecting greater disability. IRT methods also spread out score distributions and shifted scores higher (toward greater disability). Results for mean disability differences by demographic characteristics, using original and DIF-adjusted scores, were the same overall but differed for some subgroup comparisons involving ELSA. Discussion. Testing and adjusting for DIF is one means of minimizing measurement error in cross-national survey comparisons. IRT methods were used to evaluate potential measurement bias in disability comparisons across three international surveys of aging. The analysis also suggested DIF was mitigated for scales including both ADL and IADL and that summary indexes (counts of limitations) likely underestimate mean disability in these international populations. PMID:22156662

  16. A comparison of food crispness based on the cloud model.

    PubMed

    Wang, Minghui; Sun, Yonghai; Hou, Jumin; Wang, Xia; Bai, Xue; Wu, Chunhui; Yu, Libo; Yang, Jie

    2018-02-01

    The cloud model is a typical model that transforms a qualitative concept into a quantitative description. It has rarely been applied in texture studies. The purpose of this study was to apply the cloud model to food crispness comparison. The acoustic signals of carrots, white radishes, potatoes, Fuji apples, and crystal pears were recorded during compression, and three time-domain signal characteristics were extracted: sound intensity, maximum short-time frame energy, and waveform index. These three characteristics and the cloud model were used to compare the crispness of the samples. The crispness based on the Ex value of the cloud model, in descending order, was carrot > potato > white radish > Fuji apple > crystal pear. To verify the acoustic results, mechanical measurement and sensory evaluation were conducted; both verification experiments confirmed the feasibility of the cloud model. The microstructures of the five samples were also analyzed, and the microstructure parameters were negatively correlated with crispness (p < .01). The cloud model method can be used for crispness comparison of different kinds of foods. It proved more accurate than traditional methods such as mechanical measurement and sensory evaluation, and it can also be applied broadly to other texture studies. © 2017 Wiley Periodicals, Inc.
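For readers unfamiliar with the cloud model, its three digital characteristics (expectation Ex, entropy En, hyper-entropy He) are commonly estimated from data with a "backward cloud generator". The sketch below uses the standard textbook estimator; the paper's exact estimator may differ, so this is an assumption-laden illustration only.

```python
import numpy as np

def backward_cloud(samples):
    """Estimate cloud-model characteristics (Ex, En, He) from sample values,
    using the commonly cited backward cloud generator without certainty
    degrees (an assumption; not necessarily the paper's estimator)."""
    x = np.asarray(samples, dtype=float)
    ex = x.mean()                                       # expectation Ex
    en = np.sqrt(np.pi / 2.0) * np.abs(x - ex).mean()   # entropy En
    he2 = x.var(ddof=1) - en ** 2                       # hyper-entropy He^2
    return ex, en, np.sqrt(max(he2, 0.0))               # He clipped at 0
```

Ranking foods by the Ex of a crispness-related signal characteristic, as the study does, then reduces to comparing the first returned value across samples.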

  17. Cost Comparison Model: Blended eLearning versus traditional training of community health workers

    PubMed Central

    Sissine, Mysha; Segan, Robert; Taylor, Mathew; Jefferson, Bobby; Borrelli, Alice; Koehler, Mohandas; Chelvayohan, Meena

    2014-01-01

    Objectives: An additional one million community health workers are needed to address the growing global population and increasing demand for health care services. This paper describes a cost comparison between two training approaches to better understand the cost implications of training community health workers (CHWs) in Sub-Saharan Africa. Methods: Our team created a prospective model to forecast and compare the costs of two training methods as described in the Dalberg Report: (1) a traditional didactic training approach ("baseline") and (2) a blended eLearning training approach ("blended"). After running the model for training 100,000 CHWs, we compared the results and scaled them up to one million CHWs. Results: A substantial difference exists in total costs between the baseline and blended training programs. Results indicate that using a blended eLearning approach for training community health workers could provide a total cost savings of 42%. Scaling the model to one million CHWs, the blended eLearning approach reduces total costs by 25%. Discussion: The blended eLearning savings result from decreased classroom time, which reduces the costs associated with travel, trainers, and classrooms, and from using a tablet with WiFi plus a feature phone rather than a smartphone with a data plan. Conclusion: The results of this cost analysis indicate significant savings, by as much as 67%, from using a blended eLearning approach in comparison to a traditional didactic method for CHW training. These results correspond to the Dalberg publication, which indicates that a blended eLearning approach is an opportunity for closing the gap in training community health workers. PMID:25598868
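The structure of such a cost comparison can be sketched with a simple parametric model. Every number below is a hypothetical placeholder, not a figure from the study; the point is only that fewer classroom days can outweigh an added device cost.

```python
def training_cost(n_workers, classroom_days, cost_per_classroom_day,
                  travel_cost, device_cost):
    """Total cost of one training approach. All parameters are hypothetical
    placeholders, not figures from the study."""
    return n_workers * (classroom_days * cost_per_classroom_day
                        + travel_cost + device_cost)

# Hypothetical illustration: the blended approach trades classroom and travel
# costs for a per-worker device cost.
baseline = training_cost(100_000, classroom_days=20, cost_per_classroom_day=15,
                         travel_cost=100, device_cost=0)
blended = training_cost(100_000, classroom_days=8, cost_per_classroom_day=15,
                        travel_cost=40, device_cost=80)
savings = 1 - blended / baseline   # 0.40 with these assumed numbers
```

Varying the per-worker parameters and the cohort size is what lets a model like this be "scaled up" from 100,000 to one million CHWs.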

  18. EVALUATION OF METHODS FOR THE DETERMINATION OF DIESEL-GENERATED FINE PARTICULATE MATTER: PHYSICAL CHARACTERIZATION OF RESULTS

    EPA Science Inventory

    A multi-phase instrument comparison study was conducted on two different diesel engines on a dynamometer to compare commonly used particulate matter (PM) measurement techniques while sampling the same diesel exhaust aerosol and to evaluate inter- and intra-method variability. In...

  19. Stored grain pack factors for wheat: comparison of three methods to field measurements

    USDA-ARS?s Scientific Manuscript database

    Storing grain in bulk storage units results in grain packing from overbearing pressure, which increases grain bulk density and storage-unit capacity. This study compared pack factors of hard red winter (HRW) wheat in vertical storage bins using different methods: the existing packing model (WPACKING...

  20. Comparisons and Analyses of Gifted Students' Characteristics and Learning Methods

    ERIC Educational Resources Information Center

    Lu, Jiamei; Li, Daqi; Stevens, Carla; Ye, Renmin

    2017-01-01

    Using PISA 2009, an international education database, this study compares gifted and talented (GT) students in three groups with normal (non-GT) students by examining student characteristics, reading, schooling, learning methods, and use of strategies for understanding and memorizing. Results indicate that the GT and non-GT gender distributions…

  1. (PRESENTED NAQC SAN FRANCISCO, CA) COARSE PM METHODS STUDY: STUDY DESIGN AND RESULTS

    EPA Science Inventory

    Comprehensive field studies were conducted to evaluate the performance of sampling methods for measuring the coarse fraction of PM10 in ambient air. Five separate sampling approaches were evaluated at each of three sampling sites. As the primary basis of comparison, a discrete ...

  2. Comparison of Basic and Ensemble Data Mining Methods in Predicting 5-Year Survival of Colorectal Cancer Patients.

    PubMed

    Pourhoseingholi, Mohamad Amin; Kheirian, Sedigheh; Zali, Mohammad Reza

    2017-12-01

    Colorectal cancer (CRC) is one of the most common malignancies and causes of cancer mortality worldwide. Given the importance of predicting the survival of CRC patients and the growing use of data mining methods, this study aims to compare the performance of models for predicting 5-year survival of CRC patients using a variety of basic and ensemble data mining methods. The CRC dataset from the Shahid Beheshti University of Medical Sciences Research Center for Gastroenterology and Liver Diseases was used for prediction and for a comparative study of base and ensemble data mining techniques. Feature selection methods were used to select predictor attributes for classification. The WEKA toolkit and MedCalc software were used for creating and comparing the models, respectively. The results showed that the predictive performance of the developed models was high overall (all greater than 90%). The performance of ensemble models was generally higher than that of basic classifiers, and the best result was achieved by the ensemble voting model in terms of area under the ROC curve (AUC = 0.96). Comparison of AUCs showed that the ensemble voting method significantly outperformed all models except Random Forest (RF) and Bayesian Network (BN), considering their overlapping 95% confidence intervals. This result may indicate the high predictive power of these two methods, along with ensemble voting, for predicting 5-year survival of CRC patients.
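The hard-voting ensemble that performed best in the study combines the predicted labels of several base classifiers by majority. A minimal sketch of that combination rule (not the WEKA models used in the study):

```python
import numpy as np

def majority_vote(predictions):
    """Hard-voting ensemble: predictions is an (n_classifiers, n_samples)
    array of class labels; returns the per-sample majority label."""
    preds = np.asarray(predictions)
    out = []
    for col in preds.T:
        labels, counts = np.unique(col, return_counts=True)
        out.append(labels[np.argmax(counts)])   # most frequent label wins
    return np.array(out)
```

In practice each row would come from a trained base model (e.g. a decision tree, a Bayesian network, a nearest-neighbour classifier) predicting survival versus non-survival for the same patients.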

  3. A Method for Determining the Rate of Heat Transfer from a Wing or Streamline Body

    NASA Technical Reports Server (NTRS)

    Frick, Charles W; Mccullough, George B

    1945-01-01

    A method for calculating the rate of heat transfer from the surface of an airfoil or streamline body is presented. A comparison with the results of an experimental investigation indicates that the accuracy of the method is good. The method may be used to calculate the heat supply necessary for heat de-icing or to ascertain the heat loss from the fuselage of an aircraft operating at high altitude. To illustrate the method, the total rate of heat transfer from an airfoil is calculated and compared with the experimental results.

  4. Robust and accurate vectorization of line drawings.

    PubMed

    Hilaire, Xavier; Tombre, Karl

    2006-06-01

    This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vector's parameters is enabled by explicitly computing their feasibility domains. Theoretical performance analysis and expression of the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.
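The random-sampling segmentation step above is in the spirit of RANSAC-style fitting: repeatedly sample a minimal subset, hypothesize a primitive, and keep the hypothesis with the most inliers. The sketch below fits a single line to 2-D points this way; it is a generic illustration under assumed parameters, not the authors' algorithm.

```python
import random
import numpy as np

def ransac_line(points, n_iter=200, tol=0.5, seed=0):
    """Find the largest set of points within tol of some line, by random
    sampling of point pairs (RANSAC-style). Returns a boolean inlier mask."""
    rng = random.Random(seed)
    pts = np.asarray(points, dtype=float)
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.sample(range(len(pts)), 2)
        p, q = pts[i], pts[j]
        ux, uy = q - p
        norm = np.hypot(ux, uy)
        if norm == 0:
            continue
        # Perpendicular distance of every point to the line through p and q.
        dist = np.abs(ux * (pts[:, 1] - p[1]) - uy * (pts[:, 0] - p[0])) / norm
        inliers = dist < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

A skeleton segmenter would run this repeatedly, removing each recovered primitive's inliers before searching for the next one.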

  5. A comprehensive comparison of RNA-Seq-based transcriptome analysis from reads to differential gene expression and cross-comparison with microarrays: a case study in Saccharomyces cerevisiae

    PubMed Central

    Nookaew, Intawat; Papini, Marta; Pornputtapong, Natapol; Scalcinati, Gionata; Fagerberg, Linn; Uhlén, Matthias; Nielsen, Jens

    2012-01-01

    RNA-seq has recently become an attractive method of choice in studies of transcriptomes, promising several advantages compared with microarrays. In this study, we sought to assess the contribution of the different analytical steps involved in the analysis of RNA-seq data generated with the Illumina platform, and to perform a cross-platform comparison with results obtained through Affymetrix microarrays. As a case study for our work, we used the Saccharomyces cerevisiae strain CEN.PK 113-7D, grown under two different conditions (batch and chemostat). Here, we assess the influence of genetic variation on the estimation of gene expression levels using three different aligners for read mapping (Gsnap, Stampy and TopHat) on the S288c genome, evaluate the capabilities of five different statistical methods to detect differential gene expression (baySeq, Cuffdiff, DESeq, edgeR and NOISeq), and explore the consistency between RNA-seq analysis using a reference genome and a de novo assembly approach. High reproducibility among biological replicates (correlation ≥0.99) and high consistency between the two platforms for analysis of gene expression levels (correlation ≥0.91) are reported. The results of differential gene expression identification derived from the different statistical methods, as well as their integrated analysis based on gene ontology annotation, are in good agreement. Overall, our study provides a useful and comprehensive comparison between the two platforms (RNA-seq and microarrays) for gene expression analysis and addresses the contribution of the different steps involved in the analysis of RNA-seq data. PMID:22965124

  6. Query3d: a new method for high-throughput analysis of functional residues in protein structures.

    PubMed

    Ausiello, Gabriele; Via, Allegra; Helmer-Citterich, Manuela

    2005-12-01

    The identification of local similarities between two protein structures can provide clues of a common function. Many different methods exist for searching for similar subsets of residues in proteins of known structure. However, the lack of functional and structural information on single residues, together with the low level of integration of this information in comparison methods, is a limitation that prevents these methods from being fully exploited in high-throughput analyses. Here we describe Query3d, a program that is both a structural DBMS (Database Management System) and a local comparison method. The method conserves a copy of all the residues of the Protein Data Bank annotated with a variety of functional and structural information. New annotations can be easily added from a variety of methods and known databases. The algorithm makes it possible to create complex queries based on the residues' function and then to compare only subsets of the selected residues. Functional information is also essential to speed up the comparison and the analysis of the results. With Query3d, users can easily obtain statistics on how many and which residues share certain properties in all proteins of known structure. At the same time, the method also finds their structural neighbours in the whole PDB. Programs and data can be accessed through the PdbFun web interface.

  7. A proposal to standardize reporting units for fecal immunochemical tests for hemoglobin.

    PubMed

    Fraser, Callum G; Allison, James E; Halloran, Stephen P; Young, Graeme P

    2012-06-06

    Fecal immunochemical tests for hemoglobin are replacing traditional guaiac fecal occult blood tests in population screening programs for many reasons. However, the many available fecal immunochemical test devices use a range of sampling methods, differ with regard to hemoglobin stability, and report hemoglobin concentrations in different ways. The methods for sampling, the mass of feces collected, and the volume and characteristics of the buffer used in the sampling device also vary among fecal immunochemical tests, making comparisons of test performance characteristics difficult. Fecal immunochemical test results may be expressed as the hemoglobin concentration in the sampling device buffer and, sometimes, albeit rarely, as the hemoglobin concentration per mass of feces. The current lack of consistency in units for reporting hemoglobin concentration is particularly problematic because apparently similar hemoglobin concentrations obtained with different devices can lead to very different clinical interpretations. Consistent adoption of an internationally accepted method for reporting results would facilitate comparisons of outcomes from these tests. We propose a simple strategy for reporting fecal hemoglobin concentration that will facilitate the comparison of results between fecal immunochemical test devices and across clinical studies. Such reporting is readily achieved by defining the mass of feces sampled and the volume of sample buffer (with confidence intervals) and expressing results as micrograms of hemoglobin per gram of feces. We propose that manufacturers of fecal immunochemical tests provide this information and that the authors of research articles, guidelines, and policy articles, as well as pathology services and regulatory bodies, adopt this metric when reporting fecal immunochemical test results.
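The proposed reporting unit amounts to a simple conversion from the device-buffer concentration to micrograms of hemoglobin per gram of feces. A minimal sketch, with illustrative parameter names (the actual buffer volume and sampled feces mass are device-specific values the authors ask manufacturers to publish):

```python
def fecal_hb_ug_per_g(device_ng_per_ml, buffer_volume_ml, feces_mass_mg):
    """Convert a FIT device reading (ng Hb per mL of buffer) into the
    proposed unit, µg Hb per g of feces. Parameter names are illustrative."""
    total_hb_ng = device_ng_per_ml * buffer_volume_ml   # Hb captured in device
    feces_mass_g = feces_mass_mg / 1000.0               # mg -> g
    return (total_hb_ng / 1000.0) / feces_mass_g        # ng -> µg, per gram
```

Two devices reporting the same ng/mL can thus correspond to quite different µg/g values if their buffer volumes or sampled feces masses differ, which is exactly the comparability problem the proposal addresses.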

  8. Path-Following Solutions Of Nonlinear Equations

    NASA Technical Reports Server (NTRS)

    Barger, Raymond L.; Walters, Robert W.

    1989-01-01

    Report describes path-following techniques for the solution of nonlinear equations and compares them with other methods. Emphasis in the investigation is on multipurpose techniques applicable at more than one stage of the path-following computation; their use results in a concise computer code that is relatively simple to understand, program, and use. Comparison with the method of parametric differentiation (MPD) reveals definite advantages for the path-following methods.

  9. An Extension of the Krieger-Li-Iafrate Approximation to the Optimized-Effective-Potential Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, B.G.

    1999-11-11

    The Krieger-Li-Iafrate approximation can be expressed as the zeroth-order result of an unstable iterative method for solving the integral equation form of the optimized-effective-potential method. By preconditioning the iterate, a first-order correction can be obtained which recovers the bulk of the quantal oscillations missing in the zeroth-order approximation. A comparison of calculated total energies is given with Krieger-Li-Iafrate, local density functional, and hyper-Hartree-Fock results for non-relativistic atoms and ions.

  10. Modelling crystal growth: Convection in an asymmetrically heated ampoule

    NASA Technical Reports Server (NTRS)

    Alexander, J. Iwan D.; Rosenberger, Franz; Pulicani, J. P.; Krukowski, S.; Ouazzani, Jalil

    1990-01-01

    The objective was to develop and implement a numerical method capable of solving the nonlinear partial differential equations governing heat, mass, and momentum transfer in a 3-D cylindrical geometry in order to examine the character of convection in an asymmetrically heated cylindrical ampoule. The details of the numerical method, including verification tests involving comparison with results obtained from other methods, are presented. The results of the study of 3-D convection in an asymmetrically heated cylinder are described.

  11. Analysis of drift correction in different simulated weighing schemes

    NASA Astrophysics Data System (ADS)

    Beatrici, A.; Rebelo, A.; Quintão, D.; Cacais, F. L.; Loayza, V. M.

    2015-10-01

    In the calibration of high-accuracy mass standards, weighing schemes are used to reduce or eliminate zero-drift effects in mass comparators. The drift has different sources, and there are different methods for treating it. Using numerical methods, drift functions were simulated and a random term was included in each function. A comparison between the results obtained from ABABAB and ABBA weighing series was carried out. The results show a better efficacy of the ABABAB method for drift with smooth variation and small randomness.
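To see why reading order matters, consider the ABBA scheme: with readings taken at equal time intervals, a linear zero drift cancels exactly in the symmetric combination. The sketch below demonstrates this with assumed values; the paper's analysis of random drift terms and of the ABABAB scheme is more involved.

```python
import numpy as np

def abba_difference(readings):
    """Estimate the mass difference A - B from an ABBA reading sequence taken
    at equal time intervals; any linear zero drift cancels exactly."""
    a1, b1, b2, a2 = readings
    return (a1 + a2) / 2.0 - (b1 + b2) / 2.0

# Hypothetical illustration: true difference 0.3, drift 0.05 per reading slot.
true_a, true_b, drift = 10.3, 10.0, 0.05
readings = np.array([true_a, true_b, true_b, true_a]) + drift * np.arange(4)
est = abba_difference(readings)   # recovers 0.3 despite the drift
```

The cancellation works because the A readings and the B readings have the same mean observation time; schemes without that symmetry need an explicit drift-correction term.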

  12. The ShakeOut earthquake scenario: Verification of three simulation sets

    USGS Publications Warehouse

    Bielak, J.; Graves, R.W.; Olsen, K.B.; Taborda, R.; Ramirez-Guzman, L.; Day, S.M.; Ely, G.P.; Roten, D.; Jordan, T.H.; Maechling, P.J.; Urbanic, J.; Cui, Y.; Juve, G.

    2010-01-01

    This paper presents a verification of three simulations of the ShakeOut scenario, an Mw 7.8 earthquake on a portion of the San Andreas fault in southern California, conducted by three different groups at the Southern California Earthquake Center using the SCEC Community Velocity Model for this region. We conducted two simulations using the finite difference method and one by the finite element method, and performed qualitative and quantitative comparisons between the corresponding results. The results are in good agreement with each other; only small differences occur in both amplitude and phase between the various synthetics at ten observation points located both near and far from the fault, as far as 150 km away. Using an available goodness-of-fit criterion, all the comparisons scored above 8, with most above 9.2. This score would be regarded as excellent if the measurements were between recorded and synthetic seismograms. We also report results of comparisons based on time-frequency misfit criteria. Results from these two criteria can be used for calibrating the two methods for comparing seismograms. In those cases in which noticeable discrepancies occurred between the seismograms generated by the three groups, we found that they were the product of inherent characteristics of the various numerical methods used and their implementations. In particular, we found that the major source of discrepancy lies in the difference between mesh and grid representations of the same material model. Overall, however, even the largest differences in the synthetic seismograms are small. Thus, given the complexity of the simulations used in this verification, it appears that the three schemes are consistent, reliable and sufficiently accurate and robust for use in future large-scale simulations. © 2009 The Authors. Journal compilation © 2009 RAS.

  13. Key comparison SIM.EM.RF-K5b.CL: scattering coefficients by broad-band methods, 2 GHz-18 GHz — type N connector

    NASA Astrophysics Data System (ADS)

    Silva, H.; Monasterios, G.

    2016-01-01

    The first key comparison in microwave frequencies within the SIM (Sistema Interamericano de Metrología) region has been carried out. The measurands were the S-parameters of 50 ohm coaxial devices with Type-N connectors and were measured at 2 GHz, 9 GHz and 18 GHz. SIM.EM.RF-K5b.CL was the identification assigned and it was based on a parent CCEM key comparison named CCEM.RF-K5b.CL. For this reason, the measurement standards and their nominal values were selected accordingly, i.e. two one-port devices (a matched and a mismatched load) to cover low and high reflection coefficients and two attenuators (3 dB and 20 dB) to cover low and high transmission coefficients. This key comparison has met the need for ensuring traceability in high-frequency measurements across America by linking SIM's results to CCEM. Six NMIs have participated in this comparison, which was piloted by the Instituto Nacional de Tecnología Industrial (Argentina). A linking method of multivariate values was proposed and implemented in order to allow the linking of 2-dimensional results. The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  14. Comparison of the quantitative dry culture methods with both conventional media and most probable number method for the enumeration of coliforms and Escherichia coli/coliforms in food.

    PubMed

    Teramura, H; Sota, K; Iwasaki, M; Ogihara, H

    2017-07-01

    Sanita-kun™ CC (coliform count) and EC (Escherichia coli/coliform count), sheet quantitative culture systems which can avoid chromogenic interference by lactase in food, were evaluated in comparison with conventional methods for these bacteria. Based on the results of inclusivity and exclusivity studies using 77 micro-organisms, the sensitivity and specificity of both Sanita-kun™ media met the criteria of ISO 16140. Both media were compared with deoxycholate agar, violet red bile agar, Merck Chromocult™ coliform agar (CCA), 3M Petrifilm™ CC and EC (PEC) and 3-tube MPN, as reference methods, in 100 naturally contaminated food samples. The correlation coefficients of both Sanita-kun™ media for coliform detection were more than 0·95 for all comparisons. For E. coli detection, Sanita-kun™ EC was compared with CCA, PEC and MPN in 100 artificially contaminated food samples; the correlation coefficients were again more than 0·95 for all comparisons. There were no significant differences in any comparison by one-way analysis of variance (ANOVA). Both Sanita-kun™ media significantly inhibited colour interference by lactase when inhibition of enzymatic staining was assessed using 40 natural cheese samples spiked with coliforms. Our results demonstrated that Sanita-kun™ CC and EC are suitable alternatives for the enumeration of coliforms and E. coli/coliforms, respectively, in a variety of foods, and specifically in fermented foods. Current chromogenic media for coliforms and Escherichia coli/coliforms suffer enzymatic coloration due to the breakdown of chromogenic substrates by food lactase. The novel sheet culture media, which have a film layer that avoids coloration by food lactase, have been developed for the enumeration of coliforms and E. coli/coliforms, respectively. In this study, we demonstrated that these media perform comparably to reference methods with less interference from food lactase. These media may serve not only as useful alternatives but may also contribute to accurate enumeration of these bacteria in a variety of foods, and specifically in fermented foods. © 2017 The Society for Applied Microbiology.

  15. Comparison of endotracheal aspirate and non-bronchoscopic bronchoalveolar lavage in the diagnosis of ventilator-associated pneumonia in a pediatric intensive care unit.

    PubMed

    Yıldız-Atıkan, Başak; Karapınar, Bülent; Aydemir, Şöhret; Vardar, Fadıl

    2015-01-01

Ventilator-associated pneumonia (VAP) is defined as pneumonia occurring during any period of mechanical ventilation. There is no optimal diagnostic method in current use, and in this study we aimed to compare two non-invasive diagnostic methods used in the diagnosis of VAP in children. This prospective study was conducted in the 8-bed Pediatric Intensive Care Unit at Ege University Children's Hospital. Endotracheal aspiration (ETA) and non-bronchoscopic bronchoalveolar lavage (BAL) were performed in cases of VAP developing after 48 hours of ventilation. Quantitative cultures were examined in the Ege University Department of Diagnostic Microbiology, Bacteriology Laboratory. Forty-one patients were enrolled in the study. The mean age of study subjects was 47.2±53.6 months. Of 82 specimens taken with both methods, 28 pairs were positive/positive, 28 had a positive result with ETA and a negative result with non-bronchoscopic BAL, and both results were negative in 26 pairs. There were no patients whose respiratory specimen culture was negative with ETA and positive with non-bronchoscopic BAL. These results imply that there is a significant difference between the two diagnostic methods (p < 0.001). Negative non-bronchoscopic BAL results were taken as absence of VAP; therefore, ETA results were compared against this method. ETA's sensitivity, specificity, negative and positive predictive values were 100%, 50%, 100% and 48%, respectively. The study revealed the ease of use and the sensitivity of non-bronchoscopic BAL in comparison with ETA.
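The predictive values reported above follow from a standard 2×2 comparison table. As a minimal sketch (the cell counts below are reconstructed from the abstract and treat non-bronchoscopic BAL as the reference standard):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard screening-test metrics from a 2x2 comparison table."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Reconstructed counts, with non-bronchoscopic BAL as the reference:
# 28 pairs positive by both methods, 28 positive by ETA only,
# 26 negative by both, and none positive by BAL alone.
m = diagnostic_metrics(tp=28, fp=28, fn=0, tn=26)
# sensitivity and NPV come out to 100%, as reported in the abstract
```

With zero false negatives the sensitivity and NPV are exactly 100%; specificity and PPV land near the reported 50%/48%.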

  16. Comparison of brown sugar, hot water, and salt methods for detecting western cherry fruit fly (Diptera: Tephritidae) larvae in sweet cherry

    USDA-ARS?s Scientific Manuscript database

    Brown sugar or hot water methods have been developed to detect larvae of tephritid fruit flies in post-harvest fruit in order to maintain quarantine security. It would be useful to determine if variations of these methods can yield better results and if less expensive alternatives exist. This stud...

  17. A Comparison of Neural Networks and Fuzzy Logic Methods for Process Modeling

    NASA Technical Reports Server (NTRS)

    Cios, Krzysztof J.; Sala, Dorel M.; Berke, Laszlo

    1996-01-01

    The goal of this work was to analyze the potential of neural networks and fuzzy logic methods to develop approximate response surfaces as process modeling, that is for mapping of input into output. Structural response was chosen as an example. Each of the many methods surveyed are explained and the results are presented. Future research directions are also discussed.

  18. Comparison of Transmission Line Methods for Surface Acoustic Wave Modeling

    NASA Technical Reports Server (NTRS)

    Wilson, William; Atkinson, Gary

    2009-01-01

Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, extremely low power and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on the comparison of three methods of modeling SAWs. The three models are the Impulse Response Method (a first-order model) and two second-order matrix methods: the conventional matrix approach, and a modified matrix approach that is extended to include internal finger reflections. The second-order models are based upon matrices that were originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented with measured data from devices. Keywords: Surface Acoustic Wave, SAW, transmission line models, Impulse Response Method.

  19. One-dimensional stitching interferometry assisted by a triple-beam interferometer

    DOE PAGES

    Xue, Junpeng; Huang, Lei; Gao, Bo; ...

    2017-04-13

In this work, we propose using a triple-beam interferometer in stitching interferometry to measure both the distance and the tilt for all sub-apertures before the stitching process. The relative piston between two neighboring sub-apertures is then calculated by using the data in the overlapping area. Comparisons are made between our method and the classical least-squares stitching method. Our method can improve the accuracy and repeatability of the classical stitching method when a large number of sub-aperture topographies are taken into account. Our simulations and experiments on flat and spherical mirrors indicate that the proposed method can decrease the influence of the interferometer error on the stitched result. The comparison of the stitching system with Fizeau interferometry data shows agreement of about 2 nm root mean square, and the repeatability is within ±2.5 nm peak-to-valley.
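In its simplest form, the overlap-based piston estimation described above reduces to averaging the height difference of two sub-aperture maps over their shared region. A minimal numpy sketch (the array shapes and the half-aperture overlap are invented for illustration):

```python
import numpy as np

def relative_piston(sub_a, sub_b, overlap_mask):
    """Estimate the piston offset between two neighboring sub-aperture
    height maps as the mean height difference over their overlap region."""
    diff = sub_a[overlap_mask] - sub_b[overlap_mask]
    return np.nanmean(diff)

# Toy example: sub_b is the same surface shifted by a constant 5 nm piston.
rng = np.random.default_rng(0)
surface = rng.normal(scale=1.0, size=(64, 64))
sub_a = surface
sub_b = surface + 5.0
mask = np.zeros((64, 64), dtype=bool)
mask[:, 32:] = True            # hypothetical overlap: right half
piston = relative_piston(sub_a, sub_b, mask)
```

In a real stitching pipeline the per-sub-aperture tilt would be removed first (here measured directly by the triple-beam interferometer) before the piston is averaged out.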

  20. A comparison between GO/aperture-field and physical-optics methods for offset reflectors

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Y.

    1984-01-01

    Both geometrical optics (GO)/aperture-field and physical-optics (PO) methods are used extensively in the diffraction analysis of offset parabolic and dual reflectors. An analytical/numerical comparative study is performed to demonstrate the limitations of the GO/aperture-field method for accurately predicting the sidelobe and null positions and levels. In particular, it is shown that for offset parabolic reflectors and for feeds located at the focal point, the predicted far-field patterns (amplitude) by the GO/aperture-field method will always be symmetric even in the offset plane. This, of course, is inaccurate for the general case and it is shown that the physical-optics method can result in asymmetric patterns for cases in which the feed is located at the focal point. Representative numerical data are presented and a comparison is made with available measured data.

  1. A Comparison of Methods Used to Estimate the Height of Sand Dunes on Mars

    NASA Technical Reports Server (NTRS)

    Bourke, M. C.; Balme, M.; Beyer, R. A.; Williams, K. K.; Zimbelman, J.

    2006-01-01

The collection of morphometric data on small-scale landforms from other planetary bodies is difficult. We assess four methods that can be used to estimate the height of aeolian dunes on Mars. These are (1) stereography, (2) slip face length, (3) profiling photoclinometry, and (4) Mars Orbiter Laser Altimeter (MOLA). Results show that there is good agreement among the methods when conditions are ideal. However, limitations inherent to each method inhibited their accurate application to all sites. Collectively, these techniques provide data on a range of morphometric parameters, some of which were not previously available for dunes on Mars. They include dune height, width, length, surface area, volume, and longitudinal and transverse profiles. The utilization of these methods will facilitate a more accurate analysis of aeolian dunes on Mars and enable comparison with dunes on other planetary surfaces.

  2. A novel method for the determination of chemical purity and assay of menaquinone-7. Comparison with the methods from the official USP monograph.

    PubMed

    Jedynak, Łukasz; Jedynak, Maria; Kossykowska, Magdalena; Zagrodzka, Joanna

    2017-02-20

    An HPLC method with UV detection and separation with the use of a C30 reversed phase analytical column for the determination of chemical purity and assay of menaquinone-7 (MK7) in one chromatographic run was developed. The method is superior to the methods published in the USP Monograph in terms of selectivity, sensitivity and accuracy, as well as time, solvent and sample consumption. The developed methodology was applied to MK7 samples of active pharmaceutical ingredient (API) purity, MK7 samples of lower quality and crude MK7 samples before purification. The comparison of the results revealed that the use of USP methodology could lead to serious overestimation (up to a few percent) of both purity and MK7 assay in menaquinone-7 samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. The composition of M-type asteroids: Synthesis of spectroscopic and radar observations

    NASA Astrophysics Data System (ADS)

    Neeley, J. R.; Ockert-Bell, M. E.; Clark, B. E.; Shepard, M. K.; Cloutis, E. A.; Fornasier, S.; Bus, S. J.

    2011-10-01

This work updates and expands our long-term radar-driven observational campaign of 27 main-belt asteroids (MBAs) focused on Bus-DeMeo Xc- and Xk-type objects (Tholen X and M class asteroids) using the Arecibo radar and NASA Infrared Telescope Facility (IRTF). Seventeen of our targets were near-simultaneously observed with radar, and those observations are described in a companion paper (Shepard et al., 2010). We utilized visible-wavelength observations for a more complete compositional analysis of our targets. Compositional evidence is derived from our target asteroid spectra using three different methods: 1) a χ2 search for spectral matches in the RELAB database, 2) parametric comparisons with meteorites and 3) linear discriminant analysis. This paper synthesizes the results of the RELAB search, parametric comparisons, and linear discriminant analysis with compositional suggestions based on radar observations. We find that for six of seventeen targets with radar data, our spectral results are consistent with their radar analog (16 Psyche, 21 Lutetia, 69 Hesperia, 135 Hertha, 216 Kleopatra, and 497 Iva). For twenty out of twenty-seven objects our statistical comparisons with RELAB meteorites result in consistent analog identification, providing a degree of confidence in our parametric methods.

  4. Faecal indicator bacteria enumeration in beach sand: a comparison study of extraction methods in medium to coarse sands.

    PubMed

    Boehm, A B; Griffith, J; McGee, C; Edge, T A; Solo-Gabriele, H M; Whitman, R; Cao, Y; Getrich, M; Jay, J A; Ferguson, D; Goodwin, K D; Lee, C M; Madison, M; Weisberg, S B

    2009-11-01

The absence of standardized methods for quantifying faecal indicator bacteria (FIB) in sand hinders comparison of results across studies. The purpose of the study was to compare methods for extraction of faecal bacteria from sands and recommend a standardized extraction technique. Twenty-two methods of extracting enterococci and Escherichia coli from sand were evaluated, including multiple permutations of hand shaking, mechanical shaking, blending, sonication, number of rinses, settling time, eluant-to-sand ratio, eluant composition, prefiltration and type of decantation. Tests were performed on sands from California, Florida and Lake Michigan. Most extraction parameters did not significantly affect bacterial enumeration. ANOVA revealed significant effects of eluant composition and blending, with both sodium metaphosphate buffer and blending producing reduced counts. The simplest extraction method that produced the highest FIB recoveries consisted of 2 min of hand shaking in phosphate-buffered saline or deionized water, a 30-s settling time, one rinse step and a 10:1 eluant volume to sand weight ratio. This result was consistent across the sand compositions tested in this study but could vary for other sand types. Method standardization will improve the understanding of how sands affect surface water quality.

  5. Antifungal Susceptibility Testing in HIV/AIDS Patients: a Comparison Between Automated Machine and Manual Method.

    PubMed

    Nelwan, Erni J; Indrasanti, Evi; Sinto, Robert; Nurchaida, Farida; Sosrosumihardjo, Rustadi

    2016-01-01

To evaluate the performance of the Vitek2 compact machine (Biomerieux Inc. ver 04.02, France) in reference to manual methods for susceptibility testing of Candida resistance among HIV/AIDS patients, a comparison study was done. Categorical agreement between manual disc diffusion and the Vitek2 machine was calculated using predefined criteria. Time to susceptibility result for the automated and manual methods was measured. There were 137 Candida isolates comprising eight Candida species, with C. albicans and C. glabrata as the first (56.2%) and second (15.3%) most common species, respectively. For fluconazole, among the C. albicans isolates, 2.6% were found resistant by the manual disc diffusion method and none were determined resistant by the Vitek2 machine, whereas 100% of C. krusei were identified as resistant by both methods. Resistance patterns for C. glabrata to fluconazole, voriconazole and amphotericin B were 52.4%, 23.8% and 23.8% vs. 9.5%, 9.5% and 4.8%, respectively, between the manual disc diffusion method and the Vitek2 machine. Time to susceptibility result was shorter with the automated method than with the manual method for all Candida species. There is good categorical agreement between manual disc diffusion and the Vitek2 machine, except for C. glabrata when measuring antifungal resistance. Time to susceptibility result for the automated method is shorter for all Candida species.

  6. Molecular characterization and comparison of shale oils generated by different pyrolysis methods using FT-ICR mass spectrometry

    USGS Publications Warehouse

    Jin, J.M.; Kim, S.; Birdwell, J.E.

    2011-01-01

    Fourier transform ion cyclotron resonance mass spectrometry (FT ICR-MS) was applied in the analysis of shale oils generated using two different pyrolysis systems under laboratory conditions meant to simulate surface and in situ oil shale retorting. Significant variations were observed in the shale oils, particularly the degree of conjugation of the constituent molecules. Comparison of FT ICR-MS results to standard oil characterization methods (API gravity, SARA fractionation, gas chromatography-flame ionization detection) indicated correspondence between the average Double Bond Equivalence (DBE) and asphaltene content. The results show that, based on the average DBE values and DBE distributions of the shale oils examined, highly conjugated species are enriched in samples produced under low pressure, high temperature conditions and in the presence of water.
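Double Bond Equivalence (DBE), the measure of conjugation used above, is computed directly from elemental composition; for a formula CcHhNn (oxygen and sulfur do not enter the formula), a quick sketch:

```python
def double_bond_equivalence(c, h, n=0):
    """Rings plus double bonds for a molecule C_c H_h N_n O_o S_s.

    Standard formula: DBE = C - H/2 + N/2 + 1 (O and S drop out)."""
    return c - h / 2 + n / 2 + 1

# benzene C6H6: one ring + three double bonds -> DBE = 4
dbe_benzene = double_bond_equivalence(6, 6)
# pyridine C5H5N: also DBE = 4
dbe_pyridine = double_bond_equivalence(5, 5, n=1)
# hexane C6H14 is fully saturated -> DBE = 0
dbe_hexane = double_bond_equivalence(6, 14)
```

Averaging DBE over all assigned formulas in an FT-ICR MS spectrum gives the sample-level conjugation measure compared against asphaltene content in the study.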

  7. A Generic multi-dimensional feature extraction method using multiobjective genetic programming.

    PubMed

    Zhang, Yang; Rockett, Peter I

    2009-01-01

    In this paper, we present a generic feature extraction method for pattern classification using multiobjective genetic programming. This not only evolves the (near-)optimal set of mappings from a pattern space to a multi-dimensional decision space, but also simultaneously optimizes the dimensionality of that decision space. The presented framework evolves vector-to-vector feature extractors that maximize class separability. We demonstrate the efficacy of our approach by making statistically-founded comparisons with a wide variety of established classifier paradigms over a range of datasets and find that for most of the pairwise comparisons, our evolutionary method delivers statistically smaller misclassification errors. At very worst, our method displays no statistical difference in a few pairwise comparisons with established classifier/dataset combinations; crucially, none of the misclassification results produced by our method is worse than any comparator classifier. Although principally focused on feature extraction, feature selection is also performed as an implicit side effect; we show that both feature extraction and selection are important to the success of our technique. The presented method has the practical consequence of obviating the need to exhaustively evaluate a large family of conventional classifiers when faced with a new pattern recognition problem in order to attain a good classification accuracy.

  8. AC Power at Power Frequencies: Bilateral comparison between SASO NMCC and TÜBİTAK UME

    NASA Astrophysics Data System (ADS)

    Çaycı, Hüseyin; Yılmaz, Özlem; AlRobaish, Abdullah M.; AlAnazi, Shafi S.; AlAyali, Ahmed R.; AlRumie, Rashed A.

    2018-01-01

A supplementary bilateral comparison measurement on AC power at 50/60 Hz between SASO NMCC (GULFMET) and TÜBİTAK UME (EURAMET) was performed with the primary power standards of each partner. The participants' measurement methods and setups, which are very similar, together with the measurement results, the calculation of differences in the results, and the evaluation of uncertainties, are given in this report. The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  9. Hand-held survey probe

    DOEpatents

    Young, Kevin L [Idaho Falls, ID; Hungate, Kevin E [Idaho Falls, ID

    2010-02-23

    A system for providing operational feedback to a user of a detection probe may include an optical sensor to generate data corresponding to a position of the detection probe with respect to a surface; a microprocessor to receive the data; a software medium having code to process the data with the microprocessor and pre-programmed parameters, and making a comparison of the data to the parameters; and an indicator device to indicate results of the comparison. A method of providing operational feedback to a user of a detection probe may include generating output data with an optical sensor corresponding to the relative position with respect to a surface; processing the output data, including comparing the output data to pre-programmed parameters; and indicating results of the comparison.

  10. Comparison of microstickies measurement methods. Part II, Results and discussion

    Treesearch

    Mahendra R. Doshi; Angeles Blanco; Carlos Negro; Concepcion Monte; Gilles M. Dorris; Carlos C. Castro; Axel Hamann; R. Daniel Haynes; Carl Houtman; Karen Scallon; Hans-Joachim Putz; Hans Johansson; R. A. Venditti; K. Copeland; H.-M. Chang

    2003-01-01

In Part I of the article we discussed the sample preparation procedure and described various methods used for the measurement of microstickies. Some of the important features of the different methods are highlighted in Table 1. Temperatures used in the measurement methods vary from room temperature in some cases to 45 °C–65 °C in others. Sample size ranges from as low as...

  11. A Comparison of PSD Enveloping Methods for Nonstationary Vibration

    NASA Technical Reports Server (NTRS)

    Irvine, Tom

    2015-01-01

There is a need to derive a power spectral density (PSD) envelope for nonstationary acceleration time histories, including launch vehicle data, so that components can be designed and tested accordingly. This paper presents the results of three enveloping methods for an actual flight accelerometer record. Guidelines are given for the application of each method to nonstationary data. The methods can be extended to other scenarios, including transportation vibration.
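One common enveloping strategy for nonstationary records (not necessarily one of the paper's specific methods) is to compute a PSD per time segment and take the per-frequency maximum. A numpy-only sketch:

```python
import numpy as np

def segment_psd_envelope(x, fs, seg_len):
    """Split a time history into non-overlapping segments, compute a
    Hann-windowed periodogram per segment, and envelope by taking the
    maximum PSD value at each frequency bin."""
    n_seg = len(x) // seg_len
    win = np.hanning(seg_len)
    scale = fs * (win ** 2).sum()        # periodogram normalization
    psds = []
    for i in range(n_seg):
        seg = x[i * seg_len:(i + 1) * seg_len] * win
        spec = np.abs(np.fft.rfft(seg)) ** 2 / scale
        spec[1:-1] *= 2.0                # fold to a one-sided spectrum
        psds.append(spec)
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    return freqs, np.max(psds, axis=0)

# demo: quiet first half, then a 100 Hz sine burst in the second half
fs = 1024.0
x = np.concatenate([np.zeros(1024),
                    np.sin(2 * np.pi * 100.0 * np.arange(1024) / fs)])
freqs, env = segment_psd_envelope(x, fs, seg_len=256)
# the envelope peaks at the 100 Hz bin despite the quiet segments
```

Because the maximum is taken per bin, a burst that is loud only briefly still sets the envelope at its frequency, which is the conservative behavior wanted for test specifications.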

  12. Application of qualitative biospeckle methods for the identification of scar region in a green orange

    NASA Astrophysics Data System (ADS)

    Retheesh, R.; Ansari, Md. Zaheer; Radhakrishnan, P.; Mujeeb, A.

    2018-03-01

This study demonstrates the feasibility of a view-based method, the motion history image (MHI), to map biospeckle activity around the scar region in a green orange fruit. The comparison of MHI with the routine intensity-based methods validated the effectiveness of the proposed method. The results show that MHI can be implemented as an alternative online image processing tool in biospeckle analysis.

  13. Comparison of Methods for Adjusting Incorrect Assignments of Items to Subtests: Oblique Multiple Group Method versus Confirmatory Common Factor Method

    ERIC Educational Resources Information Center

    Stuive, Ilse; Kiers, Henk A. L.; Timmerman, Marieke E.

    2009-01-01

    A common question in test evaluation is whether an a priori assignment of items to subtests is supported by empirical data. If the analysis results indicate the assignment of items to subtests under study is not supported by data, the assignment is often adjusted. In this study the authors compare two methods on the quality of their suggestions to…

  14. Comparison of the Multiple-sample means with composite sample results for fecal indicator bacteria by quantitative PCR and culture

    EPA Science Inventory

    ABSTRACT: Few studies have addressed the efficacy of composite sampling for measurement of indicator bacteria by QPCR. In this study, composite results were compared to single sample results for culture- and QPCR-based water quality monitoring. Composite results for both methods ...

  15. Accounting for the Multiple Natures of Missing Values in Label-Free Quantitative Proteomics Data Sets to Compare Imputation Strategies.

    PubMed

    Lazar, Cosmin; Gatto, Laurent; Ferro, Myriam; Bruley, Christophe; Burger, Thomas

    2016-04-01

Missing values are a genuine issue in label-free quantitative proteomics. Recent works have surveyed the different statistical methods to conduct imputation, have compared them on real or simulated data sets, and have recommended a list of missing value imputation methods for proteomics application. Although insightful, these comparisons do not account for two important facts: (i) depending on the proteomics data set, the missingness mechanism may be of different natures and (ii) each imputation method is devoted to a specific type of missingness mechanism. As a result, we believe that the question at stake is not to find the most accurate imputation method in general but instead the most appropriate one. We describe a series of comparisons that support our views: For instance, we show that a supposedly "under-performing" method (i.e., giving baseline average results), if applied at the "appropriate" time in the data-processing pipeline (before or after peptide aggregation) on a data set with the "appropriate" nature of missing values, can outperform a blindly applied, supposedly "better-performing" method (i.e., the reference method from the state-of-the-art). This leads us to formulate a few practical guidelines regarding the choice and the application of an imputation method in a proteomics context.
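The abstract's central point, that the imputation strategy should match the missingness mechanism, can be caricatured in a few lines (the mechanism labels and fill strategies below are illustrative, not the paper's recommended methods):

```python
import numpy as np

def impute(x, mechanism):
    """Toy illustration: the 'right' fill depends on WHY values are missing."""
    x = x.copy()
    nan = np.isnan(x)
    if mechanism == "MCAR":
        # missing completely at random: an average-based fill is sensible
        x[nan] = np.nanmean(x)
    elif mechanism == "MNAR":
        # left-censored (e.g. below detection limit): an average would be
        # biased upward, so fill with a deliberately small value instead
        x[nan] = np.nanmin(x) / 2.0
    else:
        raise ValueError(f"unknown mechanism: {mechanism}")
    return x

intensities = np.array([1.0, 2.0, np.nan, 4.0])
mcar_filled = impute(intensities, "MCAR")   # gap becomes the mean
mnar_filled = impute(intensities, "MNAR")   # gap becomes half the minimum
```

Applying the MCAR-style fill to MNAR data (or vice versa) yields very different values for the same gap, which is exactly why a blindly applied "better" method can lose to an "appropriate" simple one.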

  16. Analytical and Experimental Vibration Analysis of a Faulty Gear System.

    DTIC Science & Technology

    1994-10-01

The Wigner-Ville Distribution (WVD) was used to give a comprehensive comparison of the predicted and...experimental results. The WVD method applied to the experimental results was also compared to other fault detection techniques to verify the WVD's ability to...of the damaged test gear and the predicted vibration from the model with simulated gear tooth pitting damage. Results also verified that the WVD method can successfully detect and locate gear tooth wear and pitting damage.

  17. An Analysis of the Impact of Valve Closure Time on the Course of Water Hammer

    NASA Astrophysics Data System (ADS)

    Kodura, Apoloniusz

    2016-06-01

The knowledge of transient flow in pressure pipelines is very important for the designing and describing of pressure networks. The water hammer is the most common example of transient flow in pressure pipelines. During this phenomenon, the transformation of kinetic energy into pressure energy causes significant changes in pressure, which can lead to serious problems in the management of pressure networks. The phenomenon is very complex, and a large number of different factors influence its course. In the case of a water hammer caused by valve closing, the characteristic of gate closure is one of the most important factors. However, this factor is rarely investigated. In this paper, the results of physical experiments with water hammer in steel and PE pipelines are described and analyzed. For each water hammer, characteristics of pressure change and valve closing were recorded. The measurements were compared with the results of calculations performed by common methods used by engineers: Michaud's equation and Wood and Jones's method. The comparison revealed very significant differences between the results of calculations and the results of experiments. In addition, it was shown that the characteristic of butterfly valve closure has a significant influence on water hammer, which should be taken into account in analyzing this phenomenon. Comparison of the results of experiments with the results of calculations may lead to new, improved calculation methods and to new methods to describe transient flow.
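Michaud's equation, one of the engineering formulas compared above, approximates the surge pressure for slow valve closure; the classical Joukowsky relation (shown for contrast, though not named in the abstract) is the rapid-closure limit. A sketch with invented pipe parameters:

```python
def joukowsky_surge(rho, c, dv):
    """Joukowsky pressure rise for rapid closure (T <= 2L/c): dp = rho*c*dv."""
    return rho * c * dv

def michaud_surge(rho, length, v, t_close):
    """Michaud's approximation for slow closure (T > 2L/c):
    dp = 2*rho*L*v / T."""
    return 2.0 * rho * length * v / t_close

# hypothetical case: water in a 500 m pipe, wave speed ~1200 m/s, v = 1.5 m/s
rho, c, L, v = 1000.0, 1200.0, 500.0, 1.5
dp_fast = joukowsky_surge(rho, c, v)          # 1.8 MPa for instant closure
dp_slow = michaud_surge(rho, L, v, t_close=5.0)   # 0.3 MPa for a 5 s closure
```

At the critical closure time T = 2L/c the two formulas coincide, which is why the closure characteristic (how the valve actually shuts within T) matters so much in the experiments above: both formulas ignore it entirely.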

  18. Comparison of four statistical and machine learning methods for crash severity prediction.

    PubMed

    Iranitalab, Amirfarrokh; Khattak, Aemal

    2017-11-01

    Crash severity prediction models enable different agencies to predict the severity of a reported crash with unknown severity or the severity of crashes that may be expected to occur sometime in the future. This paper had three main objectives: comparison of the performance of four statistical and machine learning methods including Multinomial Logit (MNL), Nearest Neighbor Classification (NNC), Support Vector Machines (SVM) and Random Forests (RF), in predicting traffic crash severity; developing a crash costs-based approach for comparison of crash severity prediction methods; and investigating the effects of data clustering methods comprising K-means Clustering (KC) and Latent Class Clustering (LCC), on the performance of crash severity prediction models. The 2012-2015 reported crash data from Nebraska, United States was obtained and two-vehicle crashes were extracted as the analysis data. The dataset was split into training/estimation (2012-2014) and validation (2015) subsets. The four prediction methods were trained/estimated using the training/estimation dataset and the correct prediction rates for each crash severity level, overall correct prediction rate and a proposed crash costs-based accuracy measure were obtained for the validation dataset. The correct prediction rates and the proposed approach showed NNC had the best prediction performance in overall and in more severe crashes. RF and SVM had the next two sufficient performances and MNL was the weakest method. Data clustering did not affect the prediction results of SVM, but KC improved the prediction performance of MNL, NNC and RF, while LCC caused improvement in MNL and RF but weakened the performance of NNC. Overall correct prediction rate had almost the exact opposite results compared to the proposed approach, showing that neglecting the crash costs can lead to misjudgment in choosing the right prediction method. Copyright © 2017 Elsevier Ltd. All rights reserved.
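The crash-costs-based idea above, weighting prediction errors by the economic cost of each severity level rather than counting all mispredictions equally, can be sketched as follows (the cost vector and the exact normalization are illustrative assumptions, not the paper's measure):

```python
import numpy as np

def cost_weighted_error(y_true, y_pred, unit_cost):
    """Total absolute cost gap between predicted and true severity,
    normalized by the total true crash cost (illustrative measure)."""
    c_true = unit_cost[np.asarray(y_true)]
    c_pred = unit_cost[np.asarray(y_pred)]
    return np.abs(c_pred - c_true).sum() / c_true.sum()

# hypothetical per-severity costs: property-damage-only, injury, fatal
cost = np.array([4_000.0, 80_000.0, 1_500_000.0])

perfect = cost_weighted_error([0, 1, 2], [0, 1, 2], cost)
missed_fatal = cost_weighted_error([0, 1, 2], [0, 1, 1], cost)
mislabeled_pdo = cost_weighted_error([0, 1, 2], [1, 1, 2], cost)
# under-predicting a fatal crash is penalized far more heavily than
# over-predicting a property-damage-only crash
```

This is why the abstract notes that overall correct prediction rate and the cost-based measure can rank methods in nearly opposite order: a classifier can score well on raw accuracy while systematically missing the rare, expensive severe crashes.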

  19. Comparing Institution Nitrogen Footprints: Metrics for ...

    EPA Pesticide Factsheets

When multiple institutions with strong sustainability initiatives use a new environmental impact assessment tool, there is an impulse to compare. The first seven institutions to calculate their nitrogen footprints using the nitrogen footprint tool have worked collaboratively to improve calculation methods, share resources, and suggest methods for reducing their footprints. This paper compares the results of those seven institutions to reveal the common and unique drivers of institution nitrogen footprints. The footprints were compared by scope and sector, and the results were normalized by multiple factors (e.g., population, number of meals served). The comparisons found many consistencies across the footprints, including the large contribution of food. The comparisons identified metrics that could be used to track progress, such as an overall indicator for the nitrogen sustainability of food purchases. The results also found differences in the system bounds of the calculations, which are important to standardize when comparing across institutions. The footprints were influenced by factors that are both within and outside of the institutions' ability to control, such as size, location, population, and campus use. However, these comparisons also point to a pathway forward for standardizing nitrogen footprint tool calculations, identifying metrics that can be used to track progress, and determining a sustainable institution nitrogen footprint. This paper is being submitt

  20. Testing Multilateral Comparisons in Africa.

    ERIC Educational Resources Information Center

    Bender, M. Lionel

    In this paper, the multilateral comparison method of classifying languages is described and analyzed. It is suggested that while it is espoused as a simple and reasonable approach to language classification, the method has serious flaws. "Multilateral" or "mass" comparison (MC) is not a method of genetic language…

  1. A method of selecting grid size to account for Hertz deformation in finite element analysis of spur gears

    NASA Technical Reports Server (NTRS)

    Coy, J. J.; Chao, C. H. C.

    1981-01-01

A method of selecting grid size for the finite element analysis of gear tooth deflection is presented. The method is based on a finite element study of two cylinders in line contact, where the criterion for establishing element size was agreement with the classical Hertzian solution for deflection. The results are applied to calculate deflection for the gear specimen used in the NASA spur gear test rig. Comparisons are made between the present results and the results of two other methods of calculation. The results have application in the design of gear tooth profile modifications to reduce noise and dynamic loads.
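The classical Hertzian line-contact solution used as the criterion above has simple closed forms for the contact half-width and peak pressure (the deflection itself involves additional logarithmic terms not shown here). A sketch with invented material and geometry values:

```python
import math

def hertz_line_contact(f_per_len, r1, r2, e1, nu1, e2, nu2):
    """Contact half-width b and peak pressure p0 for two parallel
    cylinders in line contact (classical Hertz solution)."""
    r_eff = 1.0 / (1.0 / r1 + 1.0 / r2)          # effective radius
    e_star = 1.0 / ((1 - nu1 ** 2) / e1 + (1 - nu2 ** 2) / e2)
    b = math.sqrt(4.0 * f_per_len * r_eff / (math.pi * e_star))
    p0 = 2.0 * f_per_len / (math.pi * b)          # peak Hertzian pressure
    return b, p0

# hypothetical case: two 20 mm radius steel cylinders,
# 1 kN load over a 10 mm face width -> 1e5 N/m line load
b, p0 = hertz_line_contact(1e5, 0.02, 0.02, 210e9, 0.3, 210e9, 0.3)
# b is on the order of 0.1 mm: the mesh near the contact must resolve
# a zone roughly this size, which is what drives the grid-size criterion
```

The tiny contact half-width relative to the tooth dimensions is precisely why naive uniform meshes fail to reproduce Hertzian deflection and a grid-size selection rule is needed.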

  2. Application of MCT Failure Criterion using EFM

    DTIC Science & Technology

    2010-03-26

because HELIUS:MCT™ does not facilitate this. Attempts have been made to use the ABAQUS native thermal expansion model in combination with Helius-MCT... ABAQUS using a user-defined element subroutine EFM. Comparisons have been made between the analysis results using the EFM-MCT code and the HELIUS:MCT™ code...using the Element-Failure Method (EFM) in ABAQUS. The EFM-MCT has been implemented in ABAQUS using a user-defined element subroutine EFM. Comparisons

  3. The Effective Use of Symbols in Teaching Word Recognition to Children with Severe Learning Difficulties: A Comparison of Word Alone, Integrated Picture Cueing and the Handle Technique.

    ERIC Educational Resources Information Center

    Sheehy, Kieron

    2002-01-01

    A comparison is made between a new technique (the Handle Technique), Integrated Picture Cueing, and a Word Alone Method. Results show using a new combination of teaching strategies enabled logographic symbols to be used effectively in teaching word recognition to 12 children with severe learning difficulties. (Contains references.) (Author/CR)

  4. Turbulent Dispersion Modelling in a Complex Urban Environment - Data Analysis and Model Development

    DTIC Science & Technology

    2010-02-01

Technology Laboratory (Dstl) is used as a benchmark for comparison. Comparisons are also made with some more practically oriented computational fluid dynamics...predictions. To achieve clarity in the range of approaches available for practical models of contaminant dispersion in urban areas, an overview of...complexity of those methods is simplified to a degree that allows straightforward practical implementation and application. Using these results as a

  5. Acceleration of Binding Site Comparisons by Graph Partitioning.

    PubMed

    Krotzky, Timo; Klebe, Gerhard

    2015-08-01

    The comparison of protein binding sites is a prominent task in computational chemistry and has been studied in many different ways. For the automatic detection and comparison of putative binding cavities, the Cavbase system has been developed, which uses a coarse-grained set of pseudocenters to represent the physicochemical properties of a binding site and employs a graph-based procedure to calculate similarities between two binding sites. However, the comparison of two graphs is computationally quite demanding, which makes large-scale studies such as the rapid screening of entire databases hardly feasible. In a recent work, we proposed the method Local Cliques (LC) for the efficient comparison of Cavbase binding sites. It employs a clique heuristic to detect the maximum common subgraph of two binding sites and an extended graph model to additionally compare the shape of individual surface patches. In this study, we present an alternative to further accelerate the LC method by partitioning the binding-site graphs into disjoint components prior to their comparison. The pseudocenter sets are split with regard to their assigned physicochemical type, which leads to seven graphs that are each much smaller than the original one. Applying this approach to the same test scenarios as the former comprehensive method results in a significant speed-up without sacrificing accuracy. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
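
The partitioning step described above can be sketched as follows. This is a hedged illustration only (the data layout and names are assumed, not taken from the Cavbase/LC implementation); cross-type edges are dropped, which is precisely what shrinks each per-type subgraph:

```python
from collections import defaultdict

def partition_by_type(nodes, edges):
    """Split a binding-site graph into disjoint per-type subgraphs.

    nodes: {node_id: physchem_type}; edges: iterable of (u, v) pairs.
    Keeps only edges whose endpoints share a type, so each type class
    (seven in the Cavbase setting) yields a much smaller graph.
    """
    sub_nodes = defaultdict(set)
    sub_edges = defaultdict(list)
    for nid, t in nodes.items():
        sub_nodes[t].add(nid)
    for u, v in edges:
        if nodes[u] == nodes[v]:
            sub_edges[nodes[u]].append((u, v))
    return {t: (sub_nodes[t], sub_edges[t]) for t in sub_nodes}
```

Subgraph matching (e.g. clique detection) can then run independently, and much faster, on each small component.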

  6. A Comparison of Surface Acoustic Wave Modeling Methods

    NASA Technical Reports Server (NTRS)

    Wilson, W. c.; Atkinson, G. M.

    2009-01-01

    Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, extremely low power and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on the comparison of three methods of modeling SAWs. The three models are the Impulse Response Method a first order model, and two second order matrix methods; the conventional matrix approach, and a modified matrix approach that is extended to include internal finger reflections. The second order models are based upon matrices that were originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented with measured data from devices.

  7. Comparisons of non-Gaussian statistical models in DNA methylation analysis.

    PubMed

    Ma, Zhanyu; Teschendorff, Andrew E; Yu, Hong; Taghia, Jalil; Guo, Jun

    2014-06-16

    As a key regulatory mechanism of gene expression, DNA methylation patterns are widely altered in many complex genetic diseases, including cancer. DNA methylation is naturally quantified by bounded support data; therefore, it is non-Gaussian distributed. In order to capture such properties, we introduce some non-Gaussian statistical models to perform dimension reduction on DNA methylation data. Afterwards, non-Gaussian statistical model-based unsupervised clustering strategies are applied to cluster the data. Comparisons and analysis of different dimension reduction strategies and unsupervised clustering methods are presented. Experimental results show that the non-Gaussian statistical model-based methods are superior to the conventional Gaussian distribution-based method. They are meaningful tools for DNA methylation analysis. Moreover, among several non-Gaussian methods, the one that captures the bounded nature of DNA methylation data reveals the best clustering performance.
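
One way to "capture the bounded nature" of methylation beta-values is to model them with a Beta distribution, whose support is [0, 1]. A minimal method-of-moments sketch (an illustrative estimator, not the authors' own):

```python
def fit_beta_mom(values):
    """Method-of-moments fit of a Beta(a, b) distribution to data in (0, 1).

    The Beta support is bounded, matching the bounded nature of
    methylation beta-values; a Gaussian fit would place probability
    mass outside [0, 1].
    """
    n = len(values)
    m = sum(values) / n                          # sample mean
    v = sum((x - m) ** 2 for x in values) / n    # sample variance
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common
```

Such per-cluster Beta fits are the kind of building block a non-Gaussian mixture model for methylation data would use in place of Gaussian components.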

  8. Comparisons of Non-Gaussian Statistical Models in DNA Methylation Analysis

    PubMed Central

    Ma, Zhanyu; Teschendorff, Andrew E.; Yu, Hong; Taghia, Jalil; Guo, Jun

    2014-01-01

    As a key regulatory mechanism of gene expression, DNA methylation patterns are widely altered in many complex genetic diseases, including cancer. DNA methylation is naturally quantified by bounded support data; therefore, it is non-Gaussian distributed. In order to capture such properties, we introduce some non-Gaussian statistical models to perform dimension reduction on DNA methylation data. Afterwards, non-Gaussian statistical model-based unsupervised clustering strategies are applied to cluster the data. Comparisons and analysis of different dimension reduction strategies and unsupervised clustering methods are presented. Experimental results show that the non-Gaussian statistical model-based methods are superior to the conventional Gaussian distribution-based method. They are meaningful tools for DNA methylation analysis. Moreover, among several non-Gaussian methods, the one that captures the bounded nature of DNA methylation data reveals the best clustering performance. PMID:24937687

  9. Comparing Institution Nitrogen Footprints: Metrics for Assessing and Tracking Environmental Impact

    PubMed Central

    Leach, Allison M.; Compton, Jana E.; Galloway, James N.; Andrews, Jennifer

    2017-01-01

    Abstract When multiple institutions with strong sustainability initiatives use a new environmental impact assessment tool, there is an impulse to compare. The first seven institutions to calculate nitrogen footprints using the Nitrogen Footprint Tool have worked collaboratively to improve calculation methods, share resources, and suggest methods for reducing their footprints. This article compares those seven institutions’ results to reveal the common and unique drivers of institution nitrogen footprints. The footprints were compared by scope and sector, and the results were normalized by multiple factors (e.g., population, amount of food served). The comparisons found many consistencies across the footprints, including the large contribution of food. The comparisons identified metrics that could be used to track progress, such as an overall indicator for the nitrogen sustainability of food purchases. The comparisons also pointed to differences in system bounds of the calculations, which are important to standardize when comparing across institutions. The footprints were influenced by factors both within and outside of the institutions’ ability to control, such as size, location, population, and campus use. However, these comparisons also point to a pathway forward for standardizing nitrogen footprint tool calculations, identifying metrics that can be used to track progress, and determining a sustainable institution nitrogen footprint. PMID:29350218

  10. Comparative analysis of methods for detecting interacting loci.

    PubMed

    Chen, Li; Yu, Guoqiang; Langefeld, Carl D; Miller, David J; Guy, Richard T; Raghuram, Jayaram; Yuan, Xiguo; Herrington, David M; Wang, Yue

    2011-07-05

    Interactions among genetic loci are believed to play an important role in disease risk. While many methods have been proposed for detecting such interactions, their relative performance remains largely unclear, mainly because different data sources, detection performance criteria, and experimental protocols were used in the papers introducing these methods and in subsequent studies. Moreover, there have been very few studies strictly focused on comparison of existing methods. Given the importance of detecting gene-gene and gene-environment interactions, a rigorous, comprehensive comparison of performance and limitations of available interaction detection methods is warranted. We report a comparison of eight representative methods, of which seven were specifically designed to detect interactions among single nucleotide polymorphisms (SNPs), with the last a popular main-effect testing method used as a baseline for performance evaluation. The selected methods, multifactor dimensionality reduction (MDR), full interaction model (FIM), information gain (IG), Bayesian epistasis association mapping (BEAM), SNP harvester (SH), maximum entropy conditional probability modeling (MECPM), logistic regression with an interaction term (LRIT), and logistic regression (LR) were compared on a large number of simulated data sets, each, consistent with complex disease models, embedding multiple sets of interacting SNPs, under different interaction models. The assessment criteria included several relevant detection power measures, family-wise type I error rate, and computational complexity. There are several important results from this study. First, while some SNPs in interactions with strong effects are successfully detected, most of the methods miss many interacting SNPs at an acceptable rate of false positives. In this study, the best-performing method was MECPM. 
Second, the statistical significance assessment criteria, used by some of the methods to control the type I error rate, are quite conservative, thereby limiting their power and making it difficult to fairly compare them. Third, as expected, power varies for different models and as a function of penetrance, minor allele frequency, linkage disequilibrium and marginal effects. Fourth, the analytical relationships between power and these factors are derived, aiding in the interpretation of the study results. Fifth, for these methods the magnitude of the main effect influences the power of the tests. Sixth, most methods can detect some ground-truth SNPs but have modest power to detect the whole set of interacting SNPs. This comparison study provides new insights into the strengths and limitations of current methods for detecting interacting loci. This study, along with freely available simulation tools we provide, should help support development of improved methods. The simulation tools are available at: http://code.google.com/p/simulation-tool-bmc-ms9169818735220977/downloads/list.
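
A toy version of the kind of simulated data such method comparisons rely on can be sketched as follows. The penetrance model, MAF values, and names here are illustrative only, not the simulation tool released with the paper:

```python
import random

def simulate_epistasis(n, maf1, maf2, base_risk, interaction_risk, seed=0):
    """Toy case-control simulator for a two-SNP interaction.

    Genotypes are drawn under Hardy-Weinberg equilibrium; disease risk
    is `base_risk` unless both SNPs carry at least one minor allele,
    in which case it is `interaction_risk` (a purely epistatic effect).
    Returns a list of (genotype1, genotype2, is_case) tuples.
    """
    rng = random.Random(seed)

    def genotype(maf):
        # Number of minor alleles: 0, 1, or 2
        return (rng.random() < maf) + (rng.random() < maf)

    data = []
    for _ in range(n):
        g1, g2 = genotype(maf1), genotype(maf2)
        risk = interaction_risk if (g1 > 0 and g2 > 0) else base_risk
        data.append((g1, g2, rng.random() < risk))
    return data
```

Detection methods are then scored by how often they recover the embedded interacting pair at a controlled false-positive rate.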

  11. Stationary variational estimates for the effective response and field fluctuations in nonlinear composites

    NASA Astrophysics Data System (ADS)

    Ponte Castañeda, Pedro

    2016-11-01

    This paper presents a variational method for estimating the effective constitutive response of composite materials with nonlinear constitutive behavior. The method is based on a stationary variational principle for the macroscopic potential in terms of the corresponding potential of a linear comparison composite (LCC) whose properties are the trial fields in the variational principle. When used in combination with estimates for the LCC that are exact to second order in the heterogeneity contrast, the resulting estimates for the nonlinear composite are also guaranteed to be exact to second order in the contrast. In addition, the new method allows full optimization with respect to the properties of the LCC, leading to estimates that are fully stationary and exhibit no duality gaps. As a result, the effective response and field statistics of the nonlinear composite can be estimated directly from the appropriately optimized linear comparison composite. By way of illustration, the method is applied to a porous, isotropic, power-law material, and the results are found to compare favorably with earlier bounds and estimates. However, the basic ideas of the method are expected to work for broad classes of composite materials, whose effective response can be given appropriate variational representations, including more general elasto-plastic and soft hyperelastic composites and polycrystals.
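
Schematically, linear-comparison variational principles of this family take the following form; the notation here is illustrative and not reproduced from the paper. The macroscopic potential is written as a stationary point over the trial moduli of the LCC potential plus per-phase "corrector" functions:

```latex
\widetilde{W}(\bar{\varepsilon})
  = \operatorname*{stat}_{L^{(r)}}
    \Big\{ \widetilde{W}_{L}\big(\bar{\varepsilon};\, L^{(r)}\big)
         + \sum_{r} c^{(r)}\, V^{(r)}\big(L^{(r)}\big) \Big\},
\qquad
V^{(r)}(L) = \operatorname*{stat}_{\hat{\varepsilon}}
    \Big[ w^{(r)}(\hat{\varepsilon})
        - \tfrac{1}{2}\, \hat{\varepsilon} \cdot L\, \hat{\varepsilon} \Big],
```

where $w^{(r)}$ are the nonlinear phase potentials, $c^{(r)}$ the phase volume fractions, and $\widetilde{W}_{L}$ the effective potential of the LCC. Stationarity with respect to the $L^{(r)}$ then yields both the effective response and, through the optimized LCC, the field statistics.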

  12. Development and validation of a combined phased acoustical radiosity and image source model for predicting sound fields in rooms.

    PubMed

    Marbjerg, Gerd; Brunskog, Jonas; Jeong, Cheol-Ho; Nilsson, Erling

    2015-09-01

    A model, combining acoustical radiosity and the image source method, including phase shifts on reflection, has been developed. The model is denoted Phased Acoustical Radiosity and Image Source Method (PARISM), and it has been developed in order to be able to model both specular and diffuse reflections with complex-valued and angle-dependent boundary conditions. This paper mainly describes the combination of the two models and the implementation of the angle-dependent boundary conditions. It furthermore describes how a pressure impulse response is obtained from the energy-based acoustical radiosity by regarding the model as being stochastic. Three methods of implementation are proposed and investigated, and finally, recommendations are made for their use. Validation of the image source method is done by comparison with finite element simulations of a rectangular room with a porous absorber ceiling. Results from the full model are compared with results from other simulation tools and with measurements. The comparisons of the full model are done for real-valued and angle-independent surface properties. The proposed model agrees well with both the measured results and the alternative theories, and furthermore shows a more realistic spatial variation than energy-based methods due to the fact that interference is considered.
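
The image-source half of such a model starts from mirrored source positions. A minimal first-order sketch for a rectangular (shoebox) room follows; reflection coefficients, phase shifts, and higher-order images are omitted, and the names are ours:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, at roughly 20 degrees C

def first_order_images(src, room):
    """First-order image sources for a shoebox room [0,Lx]x[0,Ly]x[0,Lz].

    Mirroring the source across each of the six walls gives image
    positions whose free-field paths reproduce single specular
    reflections; per-reflection weighting is left out for brevity.
    """
    images = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = list(src)
            img[axis] = 2 * wall - src[axis]
            images.append(tuple(img))
    return images

def arrival_delay(image, receiver):
    """Propagation delay (s) of one image-source contribution."""
    return math.dist(image, receiver) / SPEED_OF_SOUND
```

A full model sums such contributions over many reflection orders, which is where complex-valued, angle-dependent wall reflection factors would enter.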

  13. Comparison of two dietary assessment methods by food consumption: results of the German National Nutrition Survey II.

    PubMed

    Eisinger-Watzl, Marianne; Straßburg, Andrea; Ramünke, Josa; Krems, Carolin; Heuer, Thorsten; Hoffmann, Ingrid

    2015-04-01

    To further characterise the performance of the diet history method and the 24-h recalls method, both in an updated version, a comparison was conducted. The National Nutrition Survey II, representative for Germany, assessed food consumption with both methods. The comparison was conducted in a sample of 9,968 participants aged 14-80. Besides calculating mean differences, the statistical agreement measures encompassed Spearman and intraclass correlation coefficients, ranking of participants in quartiles and the Bland-Altman method. Mean consumption of 12 out of 18 food groups was assessed higher with the diet history method. Three of these 12 food groups had a medium to large effect size (e.g., raw vegetables) and seven showed at least a small effect, while there was basically no difference for coffee/tea or ice cream. Intraclass correlations were strong only for beverages (>0.50) and were weakest for vegetables (<0.20). Quartile classification of participants showed more than two-thirds being ranked in the same or an adjacent quartile by both methods. For every food group, Bland-Altman plots showed that the agreement of both methods weakened with increasing consumption. The cognitive effort essential for the diet history method to remember consumption of the past 4 weeks may be a source of inaccuracy, especially for inhomogeneous food groups. Additionally, social desirability gains significance. There is no assessment method without errors, and attention to specific food groups is a critical issue with every method. Altogether, the 24-h recalls method applied in the presented study offers advantages in approximating food consumption as compared to the diet history method.
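
The Bland-Altman agreement measure used above reduces to the mean difference between paired measurements and its 95% limits of agreement; a minimal sketch:

```python
import math

def bland_altman(method_a, method_b):
    """Bland-Altman agreement statistics for paired measurements.

    Returns (bias, lower_loa, upper_loa): the mean difference between
    the two methods and the 95% limits of agreement,
    bias +/- 1.96 * SD of the paired differences.
    """
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

In the usual plot, each difference is charted against the pair's mean, which is how the widening disagreement at higher consumption shows up.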

  14. A Quantitative Comparison of Calibration Methods for RGB-D Sensors Using Different Technologies.

    PubMed

    Villena-Martínez, Víctor; Fuster-Guilló, Andrés; Azorín-López, Jorge; Saval-Calvo, Marcelo; Mora-Pascual, Jeronimo; Garcia-Rodriguez, Jose; Garcia-Garcia, Alberto

    2017-01-27

    RGB-D (Red Green Blue and Depth) sensors are devices that can provide color and depth information from a scene at the same time. Recently, they have been widely used in many solutions due to their commercial growth from the entertainment market to many diverse areas (e.g., robotics, CAD, etc.). In the research community, these devices have had good uptake due to their acceptable level of accuracy for many applications and their low cost, but in some cases, they work at the limit of their sensitivity, near to the minimum feature size that can be perceived. For this reason, calibration processes are critical in order to increase their accuracy and enable them to meet the requirements of such kinds of applications. To the best of our knowledge, there is no comparative study of calibration algorithms evaluating their results on multiple RGB-D sensors. Specifically, in this paper, a comparison of the three most used calibration methods has been applied to three different RGB-D sensors based on structured light and time-of-flight. The comparison of methods has been carried out by a set of experiments to evaluate the accuracy of depth measurements. Additionally, an object reconstruction application has been used as an example of an application for which the sensor works at the limit of its sensitivity. The reconstruction results obtained have been evaluated through visual inspection and quantitative measurements.
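
Why intrinsic calibration matters here is visible in the pinhole back-projection that turns a depth pixel into a 3-D point: errors in the intrinsics scale directly into the reconstructed coordinates. A standard-model sketch (the parameter values in the test are illustrative, not from any particular sensor):

```python
def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a depth pixel to a 3-D point in the camera frame.

    Pinhole model used by typical RGB-D pipelines: (fx, fy) are the
    focal lengths in pixels and (cx, cy) the principal point. Errors
    in these intrinsics propagate linearly into X and Y, which is why
    calibration is critical near the sensor's accuracy limit.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m
```

Calibration procedures refine (fx, fy, cx, cy), lens distortion, and often a per-sensor depth correction before such a deprojection is trusted for reconstruction.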

  15. 7 CFR 28.179 - Methods of cotton classification and comparison.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Methods of cotton classification and comparison. 28... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.179 Methods of cotton classification and comparison. The classification of samples from...

  16. 7 CFR 28.179 - Methods of cotton classification and comparison.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Methods of cotton classification and comparison. 28... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.179 Methods of cotton classification and comparison. The classification of samples from...

  17. 7 CFR 28.179 - Methods of cotton classification and comparison.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Methods of cotton classification and comparison. 28... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.179 Methods of cotton classification and comparison. The classification of samples from...

  18. 7 CFR 28.179 - Methods of cotton classification and comparison.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Methods of cotton classification and comparison. 28... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.179 Methods of cotton classification and comparison. The classification of samples from...

  19. 7 CFR 28.179 - Methods of cotton classification and comparison.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Methods of cotton classification and comparison. 28... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.179 Methods of cotton classification and comparison. The classification of samples from...

  20. Lipidomic analysis of biological samples: Comparison of liquid chromatography, supercritical fluid chromatography and direct infusion mass spectrometry methods.

    PubMed

    Lísa, Miroslav; Cífková, Eva; Khalikova, Maria; Ovčačíková, Magdaléna; Holčapek, Michal

    2017-11-24

    Lipidomic analysis of biological samples in clinical research represents a challenging task for analytical methods, given the large number of samples and their extreme complexity. In this work, we compare direct infusion (DI) and chromatography - mass spectrometry (MS) lipidomic approaches represented by three analytical methods in terms of comprehensiveness, sample throughput, and validation results for the lipidomic analysis of biological samples represented by tumor tissue, surrounding normal tissue, plasma, and erythrocytes of kidney cancer patients. Methods are compared in one laboratory using an identical analytical protocol to ensure comparable conditions. An ultrahigh-performance liquid chromatography/MS (UHPLC/MS) method in hydrophilic interaction liquid chromatography mode and a DI-MS method are used for this comparison as the most widely used methods for lipidomic analysis, together with an ultrahigh-performance supercritical fluid chromatography/MS (UHPSFC/MS) method showing promising results in metabolomic analyses. The nontargeted analysis of pooled samples is performed using all tested methods, and 610 lipid species within 23 lipid classes are identified. The DI method provides the most comprehensive results due to the identification of some polar lipid classes that are not identified by the UHPLC and UHPSFC methods. On the other hand, the UHPSFC method provides excellent sensitivity for less polar lipid classes and the highest sample throughput, with a 10 min method time. The sample consumption of the DI method is 125 times higher than for the other methods, while only 40 μL of organic solvent is used for one sample analysis, compared to 3.5 mL and 4.9 mL in the case of the UHPLC and UHPSFC methods, respectively. Methods are validated for the quantitative lipidomic analysis of plasma samples with one internal standard for each lipid class. The results show the applicability of all tested methods for the lipidomic analysis of biological samples, depending on the analysis requirements.
Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Seizure classification in EEG signals utilizing Hilbert-Huang transform

    PubMed Central

    2011-01-01

    Background Classification methods capable of recognizing abnormal activities of the brain rely on either brain imaging or brain signal analysis. The abnormal activity of interest in this study is characterized by a disturbance caused by changes in neuronal electrochemical activity that results in abnormal synchronous discharges. The method aims at helping physicians discriminate between healthy and seizure electroencephalographic (EEG) signals. Method Discrimination in this work is achieved by analyzing EEG signals obtained from freely accessible databases. MATLAB has been used to implement and test the proposed classification algorithm. The analysis in question presents a classification of normal and ictal activities using a feature based on the Hilbert-Huang Transform. Through this method, information related to the intrinsic functions contained in the EEG signal has been extracted to track the local amplitude and frequency of the signal. Based on this local information, weighted frequencies are calculated and a comparison between ictal and seizure-free determinant intrinsic functions is then performed. The comparison methods used are the t-test and Euclidean clustering. Results The t-test results in a P-value < 0.02 and the clustering leads to accurate (94%) and specific (96%) results. The proposed method is also contrasted against the Multivariate Empirical Mode Decomposition, which reaches 80% accuracy. The comparison results strengthen the contribution of this paper not only from the accuracy point of view but also with respect to its fast response and ease of use. Conclusion An original tool for EEG signal processing, giving physicians the possibility to diagnose brain functionality abnormalities, is presented in this paper. The proposed system bears the potential of providing several credible benefits such as fast diagnosis, high accuracy, good sensitivity and specificity, time saving and user friendliness.
    Furthermore, the classification of mode mixing can be achieved using the extracted instantaneous information of every IMF, but it would most likely be a hard task if only the average value were used. Additional benefits of the proposed system include low cost and ease of interfacing. All of this indicates the usefulness of the tool and its value as an efficient diagnostic aid. PMID:21609459
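
The "weighted frequencies" step can be illustrated with an energy-weighted mean of the instantaneous frequency of one intrinsic mode function (IMF). The exact weighting used in the paper is not spelled out here, so treat this as an assumption:

```python
def weighted_frequency(amplitudes, frequencies):
    """Amplitude-weighted mean frequency of one intrinsic mode function.

    Given the instantaneous amplitude a[n] and frequency f[n] obtained
    from the Hilbert-Huang transform, each sample is weighted by its
    energy a[n]^2, so high-energy (e.g. ictal) activity dominates the
    summary value used for ictal vs. seizure-free comparison.
    """
    energy = sum(a * a for a in amplitudes)
    return sum(a * a * f for a, f in zip(amplitudes, frequencies)) / energy
```

One such scalar per IMF gives a compact feature vector that a t-test or a Euclidean clustering step can then compare across recordings.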

  2. Comparison of Climatological Planetary Boundary Layer Depth Estimates Using the GEOS-5 AGCM

    NASA Technical Reports Server (NTRS)

    Mcgrath-Spangler, Erica Lynn; Molod, Andrea M.

    2014-01-01

    Planetary boundary layer (PBL) processes, including those influencing the PBL depth, control many aspects of weather and climate, and accurate models of these processes are important for forecasting future changes. However, evaluation of model estimates of PBL depth is difficult because no consensus on the definition of PBL depth currently exists, and the various methods for estimating this parameter can give results that differ by hundreds of meters or more. In order to facilitate comparisons between the Goddard Earth Observing System (GEOS-5) and other modeling and observational systems, seven PBL depth estimation methods are used to produce PBL depth climatologies and are evaluated and compared here. All seven methods evaluate the same atmosphere, so all differences are related solely to the definition chosen. These methods depend on the scalar diffusivity, bulk and local Richardson numbers, and the diagnosed horizontal turbulent kinetic energy (TKE). Results are aggregated by climate class in order to allow broad generalizations. The various PBL depth estimations give similar midday results, with some exceptions. One method based on horizontal turbulent kinetic energy produces deeper PBL depths in the winter, associated with winter storms. In warm, moist conditions, the method based on a bulk Richardson number gives results that are shallower than those given by the methods based on the scalar diffusivity. The impact of turbulence driven by radiative cooling at cloud top is most significant during the evening transition and along several regions across the oceans, and methods sensitive to this cooling produce deeper PBL depths where it is most active. Additionally, Richardson number-based methods collapse better at night than methods that depend on the scalar diffusivity. This feature potentially affects tracer transport.
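
Of the definitions compared, the bulk-Richardson-number method is the easiest to sketch. The illustration below uses the common critical value of 0.25 and a simple discrete search; these are conventional choices, not necessarily those of GEOS-5:

```python
G = 9.81  # gravitational acceleration, m s^-2

def pbl_depth_bulk_ri(z, theta_v, u, v, ri_crit=0.25):
    """PBL depth from a bulk Richardson number profile.

    Ri_b(z) = g * z * (theta_v(z) - theta_v(0))
              / (theta_v(0) * (u(z)^2 + v(z)^2))
    The PBL top is taken as the lowest level where Ri_b first exceeds
    ri_crit (linear interpolation between levels is omitted here).
    """
    for k in range(1, len(z)):
        shear2 = u[k] ** 2 + v[k] ** 2
        if shear2 == 0:
            continue  # avoid division by zero at calm levels
        ri = G * z[k] * (theta_v[k] - theta_v[0]) / (theta_v[0] * shear2)
        if ri > ri_crit:
            return z[k]
    return z[-1]  # PBL top lies above the highest level supplied
```

Diffusivity- or TKE-based definitions would scan the same profile for a different threshold quantity, which is exactly why the resulting climatologies can differ by hundreds of meters.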

  3. Practical methodological guide for hydrometric inter-laboratory organisation

    NASA Astrophysics Data System (ADS)

    Besson, David; Bertrand, Xavier

    2015-04-01

    Discharge measurements performed by the French governmental hydrometry teams feed a national database. These data are available for general knowledge of river flows, flood forecasting, low-water surveys, statistical flow calculations, regulatory flow control and many other uses. Regularly checking measurement quality and better quantifying its accuracy is therefore an absolute need. The practice of inter-laboratory comparison in hydrometry has developed markedly during the last decade. Indeed, a discharge measurement cannot easily be linked to a standard, so on-site control of measurement accuracy is very difficult. Inter-laboratory comparison is thus a practical solution to this issue. However, it needs some regulation in order to ease its practice and legitimize its results. To that end, the French governmental hydrometry teams produced a practical methodological guide for organising hydrometric inter-laboratory comparisons, intended for the hydrometry community, with the aims of harmonizing inter-laboratory comparison practices across different instruments (ADCP, current meter on wading rod or gauging van, tracer dilution, surface velocity) and flow ranges (flood, low water), and of ensuring that results are formalized and banked. The guide is grounded in the experience of the governmental teams and their partners, following existing approaches (notably the Doppler group). It is designed to validate the compliance of measurements and to identify outliers, whether hardware, methodological, environmental, or human. Inter-laboratory comparison provides the means to verify the compliance of the instruments (devices + methods + operators) and provides methods to determine an experimental uncertainty of the tested measurement method, which is valid only for the site and the measurement conditions; it does not address the calibration or periodic monitoring of individual instruments.
    After some conceptual definitions, the guide describes the different stages of an inter-comparison campaign: campaign creation (targets, participants with instrument types and numbers, site, preparation of test protocols and schedule); campaign set-up (invitation and pre-information of the participants, logistics, field preparation); campaign conduct (reception and briefing of participants, test sequences, analysis and communication of results, review); and post-campaign work (further analysis, dissemination, and periodic verification of the instruments). The guide is accompanied by instrument forms recalling their limits and conditions of use; field forms used to record all the information needed during the campaign (site description and measurement conditions, equipment and its settings, and the set of measurements or intermediate calculations leading to the final results); and a tool for calculating and banking measurements and results.
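
The outlier-screening arithmetic such a campaign produces can be sketched minimally as follows. Using the median as the reference value is one robust choice; the guide's actual statistical treatment may differ:

```python
import statistics

def interlab_summary(results):
    """Summary of an inter-laboratory gauging comparison.

    results: {participant: measured_discharge}. Returns the reference
    value (here the median, robust to outliers) and each participant's
    deviation in percent, a common basis for flagging outlying
    instrument/method/operator combinations.
    """
    ref = statistics.median(results.values())
    deviations = {p: 100.0 * (q - ref) / ref for p, q in results.items()}
    return ref, deviations
```

The spread of those deviations across compliant participants is what feeds an experimental uncertainty estimate valid for that site and those conditions.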

  4. New insights into soil temperature time series modeling: linear or nonlinear?

    NASA Astrophysics Data System (ADS)

    Bonakdari, Hossein; Moeeni, Hamid; Ebtehaj, Isa; Zeynoddin, Mohammad; Mahoammadian, Abdolmajid; Gharabaghi, Bahram

    2018-03-01

    Soil temperature (ST) is an important dynamic parameter, whose prediction is a major research topic in various fields including agriculture because ST has a critical role in hydrological processes at the soil surface. In this study, a new linear methodology is proposed based on stochastic methods for modeling daily soil temperature (DST). With this approach, the ST series components are determined to carry out modeling and spectral analysis. The results of this process are compared with two linear methods based on seasonal standardization and seasonal differencing on four DST series. The series used in this study were measured at two stations, Champaign and Springfield, at depths of 10 and 20 cm. The results indicate that in all ST series reviewed, the periodic term is the most robust among all components. According to a comparison of the three methods applied to analyze the various series components, it appears that spectral analysis combined with stochastic methods outperformed the seasonal standardization and seasonal differencing methods. In addition to comparing the proposed methodology with linear methods, the ST modeling results were compared with two nonlinear methods in two forms: considering hydrological variables (HV) as input variables, and DST modeling as a time series. In a previous study at the mentioned sites, Kim and Singh (Theor Appl Climatol 118:465-479, 2014) applied the popular Multilayer Perceptron (MLP) neural network and Adaptive Neuro-Fuzzy Inference System (ANFIS) nonlinear methods, considering HV as input variables. The comparison results signify that the relative error projected in estimating DST by the proposed methodology was about 6%, while this value with MLP and ANFIS was over 15%. Moreover, MLP and ANFIS models were employed for DST time series modeling.
Due to these models' relatively inferior performance to the proposed methodology, two hybrid models were implemented: the weights and membership function of MLP and ANFIS (respectively) were optimized with the particle swarm optimization (PSO) algorithm in conjunction with the wavelet transform and nonlinear methods (Wavelet-MLP & Wavelet-ANFIS). A comparison of the proposed methodology with individual and hybrid nonlinear models in predicting DST time series indicates the lowest Akaike Information Criterion (AIC) index value, which considers model simplicity and accuracy simultaneously at different depths and stations. The methodology presented in this study can thus serve as an excellent alternative to complex nonlinear methods that are normally employed to examine DST.
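
The seasonal-standardization baseline mentioned above can be sketched as a generic day-of-year standardization (not the authors' exact code):

```python
from collections import defaultdict
import math

def seasonal_standardize(day_of_year, values):
    """Seasonal standardization of a daily series (e.g. soil temperature).

    Removes the periodic component by subtracting the day-of-year mean
    and dividing by the day-of-year standard deviation, one of the
    linear preprocessing schemes compared in the study.
    """
    groups = defaultdict(list)
    for d, x in zip(day_of_year, values):
        groups[d].append(x)
    mean = {d: sum(xs) / len(xs) for d, xs in groups.items()}
    # Population std per day; fall back to 1.0 when a day has no spread
    std = {d: math.sqrt(sum((x - mean[d]) ** 2 for x in xs) / len(xs)) or 1.0
           for d, xs in groups.items()}
    return [(x - mean[d]) / std[d] for d, x in zip(day_of_year, values)]
```

A stochastic (e.g. ARMA-type) model is then fitted to the standardized residual series, and the seasonal component is added back when forecasting.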

  5. A three-term conjugate gradient method under the strong-Wolfe line search

    NASA Astrophysics Data System (ADS)

    Khadijah, Wan; Rivaie, Mohd; Mamat, Mustafa

    2017-08-01

    Recently, numerous studies have been devoted to conjugate gradient methods for solving large-scale unconstrained optimization problems. In this paper, a three-term conjugate gradient method for unconstrained optimization is proposed which always satisfies the sufficient descent condition, named the Three-Term Rivaie-Mustafa-Ismail-Leong (TTRMIL) method. Under standard conditions, the TTRMIL method is proved to be globally convergent under the strong-Wolfe line search. Finally, numerical results are provided for the purpose of comparison.
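The TTRMIL update formula is specific to the paper, but the overall scheme it belongs to (a conjugate-gradient search direction combined with a Wolfe-type line search) can be sketched. The Python example below is an illustrative stand-in: it uses the classical Polak-Ribière+ coefficient rather than the paper's three-term formula, SciPy's `line_search` (which enforces the Wolfe conditions), and the Rosenbrock function as a generic test problem.

```python
import numpy as np
from scipy.optimize import line_search

def rosenbrock(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def rosenbrock_grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])

def cg_descent(f, grad, x0, tol=1e-6, max_iter=2000):
    """Nonlinear CG with the Polak-Ribiere+ coefficient and a
    Wolfe line search (scipy.optimize.line_search)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:            # line search failed: restart along -g
            d = -g
            alpha = line_search(f, grad, x, d, gfk=g)[0]
            if alpha is None:
                break
        x = x + alpha * d
        g_new = grad(x)
        # PR+ coefficient: clipping at zero acts as an automatic restart
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d
        g = g_new
    return x

x_star = cg_descent(rosenbrock, rosenbrock_grad, [-1.2, 1.0])
```

A three-term method differs only in the direction update, which adds a third correction term so the sufficient descent condition holds regardless of the line search; the surrounding loop is the same.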

  6. Applications of remote sensing, volume 3

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. Of the four change detection techniques (post-classification comparison, delta data, spectral/temporal, and layered spectral/temporal), the post-classification comparison was selected for further development. This choice was based upon the test performances of the four change detection methods, the straightforwardness of the procedures, and the output products desired. A standardized, modified supervised classification procedure for analyzing the Texas coastal zone data was compiled. This procedure was developed so that all quadrangles in the study area would be classified using similar analysis techniques, to allow for meaningful comparisons and evaluations of the classifications.

  7. Rapid detection of coliforms in drinking water of Arak city using multiplex PCR method in comparison with the standard method of culture (Most Probably Number)

    PubMed Central

    Fatemeh, Dehghan; Reza, Zolfaghari Mohammad; Mohammad, Arjomandzadegan; Salomeh, Kalantari; Reza, Ahmari Gholam; Hossein, Sarmadian; Maryam, Sadrnia; Azam, Ahmadi; Mana, Shojapoor; Negin, Najarian; Reza, Kasravi Alii; Saeed, Falahat

    2014-01-01

    Objective To analyse molecular detection of coliforms and shorten the time of PCR. Methods Rapid detection of coliforms by amplification of the lacZ and uidA genes in a multiplex PCR reaction was designed and performed in comparison with the most probable number (MPN) method for 16 artificial and 101 field samples. The molecular method was also applied to coliforms isolated from positive MPN samples; a certified reference material used as the standard sample for verification of the microbial method; strains isolated from the certified reference material; and standard bacteria. The PCR and electrophoresis parameters were adjusted to reduce the operation time. Results PCR results for the lacZ and uidA genes were similar in all standard, field and artificial samples, with multiplex PCR showing the 876 bp and 147 bp bands of the lacZ and uidA genes. PCR results were confirmed by the MPN culture method with a sensitivity of 86% (95% CI: 0.71-0.93). The total execution time, after a successful change of factors, was reduced to less than two and a half hours. Conclusions The multiplex PCR method with a shortened operation time was used for the simultaneous detection of total coliforms and Escherichia coli in the distribution system of Arak city. It is recommended at least as an initial screening test, after which positive samples could be randomly tested by MPN. PMID:25182727

  8. International Comparison of Methane-Stabilized He-Ne Lasers

    NASA Astrophysics Data System (ADS)

    Koshelyaevskii, N. B.; Oboukhov, A.; Tatarenkov, V. M.; Titov, A. N.; Chartier, J.-M.; Felder, R.

    1981-01-01

    Two portable methane-stabilized lasers designed at BIPM have been compared with a stationary Soviet device developed at VNIIFTRI. This comparison, one of a series aimed at establishing the coherence of laser wavelength and frequency measurements throughout the world, took place in June 1979. The VNIIFTRI and BIPM lasers use different methods of stabilization and have different optical and mechanical designs and laser tubes. The results of previous measurements, made at VNIIFTRI, of the most important frequency shifts for the Soviet lasers, together with a method of reproducing their frequency that leads to a precision of 1 × 10⁻¹², are also presented.

  9. 40 CFR 761.326 - Conducting the comparison study.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Conducting the comparison study. 761...-liquid PCB Remediation Waste Samples § 761.326 Conducting the comparison study. Extract or analyze the comparison study samples using the alternative method. For an alternative extraction method or alternative...

  10. 40 CFR 761.326 - Conducting the comparison study.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 31 2011-07-01 2011-07-01 false Conducting the comparison study. 761...-liquid PCB Remediation Waste Samples § 761.326 Conducting the comparison study. Extract or analyze the comparison study samples using the alternative method. For an alternative extraction method or alternative...

  11. Mass fraction assignment of folic acid in a high purity material

    NASA Astrophysics Data System (ADS)

    Westwood, Steven; Josephs, Ralf; Choteau, Tiphaine; Daireaux, Adeline; Stoppacher, Norbert; Wielgosz, Robert; Davies, Stephen; de Rego, Eliane; Wollinger, Wagner; Garrido, Bruno; Fernandes, Jane; Lima, Jonathan; Oliveira, Rodrigo; de Sena, Rodrigo; Windust, Anthony; Huang, Ting; Dai, Xinhua; Quan, Can; He, Haihong; Zhang, Wei; Wei, Chao; Li, Na; Gao, Dexin; Liu, Zhao; Lo, Man-fung; Wong, Wai-fun; Pfeifer, Dietmar; Koch, Matthias; Dorgerloh, Ute; Rothe, Robert; Philip, Rosemary; Hirari, Nobuyasu; Fazlin Rezali, Mohd; Salazar Arzate, Claudia Marcela; Pedraza Evelina Berenice, Mercado; Serrano Caballero, Victor; Arce Osuna, Mariana; Krylov, A.; Kharitonov, S.; Lopushanskaya, E.; Liu, Qinde; Tang Lin, Teo; Fernandes-Whaley, Maria; Quinn, Laura; Nhlapo, Nontete; Prevoo-Franzsen, Desiree; Archer, Marcelle; Kim, Byungjoo; Baek, Song-Yee; Lee, Sunyoung; Lee, Joonhee; Marbumrung, Sornkrit; Kankaew, Ponhatai; Chaorenpornpukdee, Kanokrat; Chaipet, Thitiphan; Shearman, Kittiya; Ceyhan Goren, Ahmet; Gunduz, Simay; Yilmaz, Hasibe; Un, Ilker; Bilsel, Gokhan; Clarkson, Cailean; Bedner, Mary; Camara, Johanna E.; Lang, Brian E.; Lippa, Katrice A.; Nelson, Michael A.; Toman, Blaza; Yu, Lee L.

    2018-01-01

    The comparison required the assignment of the mass fraction of folic acid present as the main component in the comparison sample. Performance in the comparison is representative of a laboratory's measurement capability for the purity assignment of organic compounds of medium structural complexity [molecular weight range 300–500] and high polarity (pKOW < ‑2). Methods used by the eighteen participating NMIs or DIs were based on a mass balance (summation of impurities) or qNMR approach, or a combination of data obtained using both methods. The qNMR results tended to give slightly lower values for the content of folic acid, albeit with larger associated uncertainties, compared with the results obtained by mass balance procedures. Possible reasons for this divergence are discussed in the report, without reaching a definitive conclusion as to their origin. The comparison demonstrates that for a structurally complex polar organic compound containing a high water content and presenting a number of additional analytical challenges, the assignment of the mass fraction content property value of the main component can reasonably be achieved with an associated relative standard uncertainty in the assigned value of 0.5%.

    Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  12. An enquiry into the method of paired comparison: reliability, scaling, and Thurstone's Law of Comparative Judgment

    Treesearch

    Thomas C. Brown; George L. Peterson

    2009-01-01

    The method of paired comparisons is used to measure individuals' preference orderings of items presented to them as discrete binary choices. This paper reviews the theory and application of the paired comparison method, describes a new computer program available for eliciting the choices, and presents an analysis of methods for scaling paired choice data to...

  13. Comparison of microstickies measurement methods. Part I, sample preparation and measurement methods

    Treesearch

    Mahendra R. Doshi; Angeles Blanco; Carlos Negro; Gilles M. Dorris; Carlos C. Castro; Axel Hamann; R. Daniel Haynes; Carl Houtman; Karen Scallon; Hans-Joachim Putz; Hans Johansson; R.A. Venditti; K. Copeland; H.-M. Chang

    2003-01-01

    Recently, we completed a project on the comparison of macrostickies measurement methods. Based on the success of the project, we decided to embark on this new project on comparison of microstickies measurement methods. When we started this project, there were some concerns and doubts principally due to the lack of an accepted definition of microstickies. However, we...

  14. A comparison of three clustering methods for finding subgroups in MRI, SMS or clinical data: SPSS TwoStep Cluster analysis, Latent Gold and SNOB.

    PubMed

    Kent, Peter; Jensen, Rikke K; Kongsted, Alice

    2014-10-02

    There are various methodological approaches to identifying clinically important subgroups, and one method is to identify clusters of characteristics that differentiate people in cross-sectional and/or longitudinal data using Cluster Analysis (CA) or Latent Class Analysis (LCA). There is a scarcity of head-to-head comparisons that can inform the choice of which clustering method might be suitable for particular clinical datasets and research questions. Therefore, the aim of this study was to perform a head-to-head comparison of three commonly available methods (SPSS TwoStep CA, Latent Gold LCA and SNOB LCA). The performance of these three methods was compared: (i) quantitatively, using the number of subgroups detected, the classification probability of individuals into subgroups, and the reproducibility of results, and (ii) qualitatively, using subjective judgments about each program's ease of use and the interpretability of its presentation of results. We analysed five real datasets of varying complexity in a secondary analysis of data from other research projects. Three datasets contained only MRI findings (n = 2,060 to 20,810 vertebral disc levels), one dataset contained only pain intensity data collected for 52 weeks by text (SMS) messaging (n = 1,121 people), and the last dataset contained a range of clinical variables measured in low back pain patients (n = 543 people). Four artificial datasets (n = 1,000 each) containing subgroups of varying complexity were also analysed, testing the ability of these clustering methods to detect subgroups and correctly classify individuals when subgroup membership was known. The results from the real clinical datasets indicated that the number of subgroups detected varied, the certainty of classifying individuals into those subgroups varied, the findings had perfect reproducibility, some programs were easier to use, and the interpretability of the presentation of their findings also varied. 
The results from the artificial datasets indicated that all three clustering methods showed a near-perfect ability to detect known subgroups and correctly classify individuals into those subgroups. Our subjective judgement was that Latent Gold offered the best balance of sensitivity to subgroups, ease of use and presentation of results with these datasets but we recognise that different clustering methods may suit other types of data and clinical research questions.

  15. Comparison of updated Lagrangian FEM with arbitrary Lagrangian Eulerian method for 3D thermo-mechanical extrusion of a tube profile

    NASA Astrophysics Data System (ADS)

    Kronsteiner, J.; Horwatitsch, D.; Zeman, K.

    2017-10-01

    Thermo-mechanical numerical modelling and simulation of extrusion processes faces several serious challenges. Large plastic deformations, in combination with a strong coupling of thermal and mechanical effects, lead to high numerical demands for the solution as well as for the handling of mesh distortions. The two numerical methods presented in this paper also reflect two different ways of dealing with mesh distortions. Lagrangian Finite Element Methods (FEM) tackle distorted elements by building a new mesh (re-meshing), whereas Arbitrary Lagrangian Eulerian (ALE) methods use an "advection" step to remap the solution from the distorted to the undistorted mesh. Another difference between conventional Lagrangian and ALE methods is the separate treatment of material and mesh in ALE, allowing the definition of individual velocity fields. In theory, an ALE formulation contains both the Eulerian formulation and the Lagrangian description of the material as special cases. The investigations presented in this paper dealt with the direct extrusion of a tube profile using EN-AW 6082 aluminum alloy and a comparison of experimental results with Lagrangian and ALE results. The numerical simulations cover billet upsetting and continue until one third of the billet length is extruded. A good qualitative correlation between experimental and numerical results was found; however, major differences between the Lagrangian and ALE methods concerning thermo-mechanical coupling lead to deviations in the thermal results.

  16. Comparison of sampling and test methods for determining asphalt content and moisture correction in asphalt concrete mixtures.

    DOT National Transportation Integrated Search

    1985-03-01

    The purpose of this report is to identify the difference, if any, in AASHTO and OSHD test procedures and results. This report addresses the effect of the size of samples taken in the field and evaluates the methods of determining the moisture content...

  17. Empirical methods in the evaluation of estimators

    Treesearch

    Gerald S. Walton; C.J. DeMars; C.J. DeMars

    1973-01-01

    The authors discuss the problem of selecting estimators of density and survival by making use of data on a forest-defoliating larva, the spruce budworm. Various estimators are compared. The results show that, among the estimators considered, ratio-type estimators are superior in terms of bias and variance. The methods used in making comparisons, particularly simulation...

  18. A Comparison of Conventional Linear Regression Methods and Neural Networks for Forecasting Educational Spending.

    ERIC Educational Resources Information Center

    Baker, Bruce D.; Richards, Craig E.

    1999-01-01

    Applies neural network methods for forecasting 1991-95 per-pupil expenditures in U.S. public elementary and secondary schools. Forecasting models included the National Center for Education Statistics' multivariate regression model and three neural architectures. Regarding prediction accuracy, neural network results were comparable or superior to…

  19. Comparison of Observational Methods and Their Relation to Ratings of Engagement in Young Children

    ERIC Educational Resources Information Center

    Wood, Brenna K.; Hojnoski, Robin L.; Laracy, Seth D.; Olson, Christopher L.

    2016-01-01

    Although, collectively, results of earlier direct observation studies suggest momentary time sampling (MTS) may offer certain technical advantages over whole-interval (WIR) and partial-interval (PIR) recording, no study has compared these methods for measuring engagement in young children in naturalistic environments. This study compared direct…

  20. Calculation of the bending of electromechanical aircraft element made of the carbon fiber

    NASA Astrophysics Data System (ADS)

    Danilova-Volkovskaya, Galina; Chepurnenko, Anton; Begak, Aleksandr; Savchenko, Andrey

    2017-10-01

    We consider a method of calculation of an orthotropic plate with variable thickness. The solution is performed numerically by the finite element method. The calculation is made for the springs of a hang glider made of carbon fiber. The comparison of the results with Sofistik software complex is given.

  1. Reflective Voices: Understanding University Students' Experiences of Urban High School Physical Education

    ERIC Educational Resources Information Center

    Lackman, Jeremy; Chepyator-Thomson, Jepkorir

    2017-01-01

    Purpose: The purpose of this study was to understand first-year college students' reflections on past physical education (PE) experiences in urban high school settings. Method: Data collection included semi-structured, open-ended, qualitative interviews. Constant comparison method was used for data analysis. Results: Several findings emerged: (a)…

  2. Monitoring of urinary calcium and phosphorus excretion in preterm infants: comparison of 2 methods.

    PubMed

    Staub, Eveline; Wiedmer, Nicolas; Staub, Lukas P; Nelle, Mathias; von Vigier, Rodo O

    2014-04-01

    Premature babies require supplementation with calcium (Ca) and phosphorus (P) to prevent metabolic bone disease of prematurity. To guide mineral supplementation, 2 methods of monitoring urinary excretion of Ca and P are used: urinary Ca or P concentration, and Ca/creatinine (Crea) or P/Crea ratios. We compared these 2 methods with regard to their agreement on the need for mineral supplementation. Retrospective chart review of 230 premature babies with birth weight <1500 g, undergoing screening of urinary spot samples from day 21 of life and fortnightly thereafter. Hypothetical cutoff values for urine Ca or P concentration (1 mmol/L) and urine Ca/Crea ratio (0.5 mol/mol) or P/Crea ratio (4 mol/mol) were applied to the sample results. The agreement between the 2 methods on whether to supplement the respective minerals was compared. Multivariate general linear models were used to identify patient characteristics predicting discordant results. A total of 24.8% of cases did not agree on the indication for Ca supplementation, and 8.8% for P. Total daily Ca intake was the only patient characteristic associated with discordant results. With the intention to supplement the respective mineral, agreement between urinary mineral concentration and mineral/Crea ratio is moderate for Ca and good for P. The results do not identify either method as superior for deciding which babies require Ca and/or P supplements.
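The agreement analysis in this study reduces to comparing two threshold-based decision rules per sample. The Python sketch below uses the abstract's stated Ca cutoffs (concentration < 1 mmol/L vs. Ca/Crea ratio < 0.5 mol/mol flagging a need for supplementation) with invented sample values, purely to show how discordance between the two methods is counted.

```python
# Hypothetical spot-urine results: (Ca concentration in mmol/L, Ca/Crea in mol/mol)
samples = [(0.6, 0.3), (1.4, 0.8), (0.9, 0.7), (2.1, 0.4), (0.5, 0.2)]

CA_CONC_CUTOFF = 1.0   # mmol/L, cutoff from the study
CA_CREA_CUTOFF = 0.5   # mol/mol, cutoff from the study

def supplement_flags(ca_conc, ca_crea):
    """Each method flags 'supplement' when its value is below the cutoff."""
    return ca_conc < CA_CONC_CUTOFF, ca_crea < CA_CREA_CUTOFF

agree = sum(1 for conc, ratio in samples
            if supplement_flags(conc, ratio)[0] == supplement_flags(conc, ratio)[1])
discordant_pct = 100.0 * (len(samples) - agree) / len(samples)
print(f"{discordant_pct:.1f}% of samples discordant")  # 40.0% for this toy data
```

The study's 24.8% discordance for Ca is exactly this kind of count over the 230-infant cohort, with the discordant cases then modelled against patient characteristics.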

  3. Stirling Analysis Comparison of Commercial vs. High-Order Methods

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako

    2007-01-01

    Recently, three-dimensional Stirling engine simulations have been accomplished utilizing commercial Computational Fluid Dynamics software. The validations reported can be somewhat inconclusive due to the lack of precise time-accurate experimental results from engines, export control/proprietary concerns, and the lack of variation in the methods utilized. The last issue may be addressed by solving the same flow problem with alternate methods. In this work, a comprehensive examination of the methods utilized in the commercial codes is compared with more recently developed high-order methods. Specifically, Lele's compact scheme and Dyson's Ultra Hi-Fi method will be compared with the SIMPLE and PISO methods currently employed in CFD-ACE, FLUENT, CFX, and STAR-CD (all commercial codes which can in theory solve a three-dimensional Stirling model, although sliding interfaces and their moving grids limit the effective time accuracy). We will initially look at one-dimensional flows, since the current standard practice is to design and optimize Stirling engines with empirically corrected friction and heat transfer coefficients in an overall one-dimensional model. This comparison provides an idea of the range in which commercial CFD software for modeling Stirling engines may be expected to provide accurate results. In addition, this work provides a framework for improving current one-dimensional analysis codes.

  4. Stirling Analysis Comparison of Commercial Versus High-Order Methods

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako

    2005-01-01

    Recently, three-dimensional Stirling engine simulations have been accomplished utilizing commercial Computational Fluid Dynamics software. The validations reported can be somewhat inconclusive due to the lack of precise time-accurate experimental results from engines, export control/proprietary concerns, and the lack of variation in the methods utilized. The last issue may be addressed by solving the same flow problem with alternate methods. In this work, a comprehensive examination of the methods utilized in the commercial codes is compared with more recently developed high-order methods. Specifically, Lele's compact scheme and Dyson's Ultra Hi-Fi method will be compared with the SIMPLE and PISO methods currently employed in CFD-ACE, FLUENT, CFX, and STAR-CD (all commercial codes which can in theory solve a three-dimensional Stirling model, although sliding interfaces and their moving grids limit the effective time accuracy). We will initially look at one-dimensional flows, since the current standard practice is to design and optimize Stirling engines with empirically corrected friction and heat transfer coefficients in an overall one-dimensional model. This comparison provides an idea of the range in which commercial CFD software for modeling Stirling engines may be expected to provide accurate results. In addition, this work provides a framework for improving current one-dimensional analysis codes.

  5. Standardized Methods to Generate Mock (Spiked) Clinical Specimens by Spiking Blood or Plasma with Cultured Pathogens

    PubMed Central

    Dong, Ming; Fisher, Carolyn; Añez, Germán; Rios, Maria; Nakhasi, Hira L.; Hobson, J. Peyton; Beanan, Maureen; Hockman, Donna; Grigorenko, Elena; Duncan, Robert

    2016-01-01

    Aims To demonstrate standardized methods for spiking pathogens into human matrices for evaluation and comparison among diagnostic platforms. Methods and Results This study presents detailed methods for spiking bacteria or protozoan parasites into whole blood and virus into plasma. Proper methods must start with a documented, reproducible pathogen source, followed by steps that include standardized culture, preparation of cryopreserved aliquots, quantification of the aliquots by molecular methods, production of sufficient numbers of individual specimens, and testing of the platform with multiple mock specimens. Results obtained following the described procedures showed acceptable reproducibility when comparing in-house real-time PCR assays with a commercially available multiplex molecular assay. Conclusions A step-by-step procedure has been described that can be followed by assay developers who are targeting low-prevalence pathogens. Significance and Impact of Study The development of diagnostic platforms for detection of low-prevalence pathogens such as biothreat or emerging agents is challenged by the lack of clinical specimens for performance evaluation. This deficit can be overcome using mock clinical specimens made by spiking cultured pathogens into human matrices. To facilitate evaluation and comparison among platforms, standardized methods must be followed in the preparation and application of spiked specimens. PMID:26835651

  6. An analysis code for the Rapid Engineering Estimation of Momentum and Energy Losses (REMEL)

    NASA Technical Reports Server (NTRS)

    Dechant, Lawrence J.

    1994-01-01

    Nonideal behavior has traditionally been modeled by defining an efficiency (a comparison between the actual and isentropic processes) and specifying it by empirical or heuristic methods. With the increasing complexity of aeropropulsion system designs, the reliability of these more traditional methods is uncertain. Computational fluid dynamics (CFD) and experimental methods can provide this information but are expensive in terms of human resources, cost, and time. This report discusses an alternative to empirical and CFD methods: applying classical analytical techniques and a simplified flow model to provide rapid engineering estimates of these losses, based on steady, quasi-one-dimensional governing equations including viscous and heat transfer terms (estimated by the Reynolds analogy). A preliminary verification of REMEL has been compared with full Navier-Stokes (FNS) and CFD boundary-layer computations for several high-speed inlet and forebody designs. The current methods compare quite well with results from more complex methods, and solutions compare very well with simple degenerate and asymptotic results such as Fanno flow, isentropic variable-area flow, and a newly developed solution for combined variable-area duct flow with friction. These comparisons suggest the method may offer an alternative to transitional and CFD-intensive methods for the rapid estimation of viscous and heat transfer losses in aeropropulsion systems.

  7. Comparing the performance of biomedical clustering methods.

    PubMed

    Wiwie, Christian; Baumbach, Jan; Röttger, Richard

    2015-11-01

    Identifying groups of similar objects is a popular first step in biomedical data analysis, but it is error-prone and impossible to perform manually. Many computational methods have been developed to tackle this problem. Here we assessed 13 well-known methods using 24 data sets ranging from gene expression to protein domains. Performance was judged on the basis of 13 common cluster validity indices. We developed a clustering analysis platform, ClustEval (http://clusteval.mpi-inf.mpg.de), to promote streamlined evaluation, comparison and reproducibility of clustering results in the future. This allowed us to objectively evaluate the performance of all tools on all data sets with up to 1,000 different parameter sets each, resulting in a total of more than 4 million calculated cluster validity indices. We observed that there was no universal best performer, but on the basis of this wide-ranging comparison we were able to develop a short guideline for biomedical clustering tasks. ClustEval allows biomedical researchers to pick the appropriate tool for their data type and allows method developers to compare their tool to the state of the art.
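ClustEval automates this kind of evaluation at scale (13 tools, 24 data sets, 13 validity indices). A minimal sketch of the underlying idea, scoring one clustering method across a small parameter sweep with a single validity index (silhouette), might look like the following Python example; the data are synthetic and scikit-learn is assumed.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Synthetic data with 3 well-separated groups (a stand-in for expression data)
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2)) for c in (0.0, 3.0, 6.0)])

# Evaluate one validity index over a parameter sweep, in the spirit of
# ClustEval's many-indices, many-parameter-sets evaluation
scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)
print(best_k)  # the index should recover the 3 planted clusters
```

ClustEval's contribution is doing this systematically: many tools, many parameter sets, many indices, with the bookkeeping needed to make the resulting millions of index values comparable and reproducible.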

  8. 4D Flexible Atom-Pairs: An efficient probabilistic conformational space comparison for ligand-based virtual screening

    PubMed Central

    2011-01-01

    Background The performance of 3D-based virtual screening similarity functions is affected by the applied conformations of compounds. Therefore, the results of 3D approaches are often less robust than 2D approaches. The application of 3D methods on multiple conformer data sets normally reduces this weakness, but entails a significant computational overhead. Therefore, we developed a special conformational space encoding by means of Gaussian mixture models and a similarity function that operates on these models. The application of a model-based encoding allows an efficient comparison of the conformational space of compounds. Results Comparisons of our 4D flexible atom-pair approach with over 15 state-of-the-art 2D- and 3D-based virtual screening similarity functions on the 40 data sets of the Directory of Useful Decoys show a robust performance of our approach. Even 3D-based approaches that operate on multiple conformers yield inferior results. The 4D flexible atom-pair method achieves an averaged AUC value of 0.78 on the filtered Directory of Useful Decoys data sets. The best 2D- and 3D-based approaches of this study yield an AUC value of 0.74 and 0.72, respectively. As a result, the 4D flexible atom-pair approach achieves an average rank of 1.25 with respect to 15 other state-of-the-art similarity functions and four different evaluation metrics. Conclusions Our 4D method yields a robust performance on 40 pharmaceutically relevant targets. The conformational space encoding enables an efficient comparison of the conformational space. Therefore, the weakness of the 3D-based approaches on single conformations is circumvented. With over 100,000 similarity calculations on a single desktop CPU, the utilization of the 4D flexible atom-pair in real-world applications is feasible. PMID:21733172
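The AUC values reported in this abstract have a simple rank-based interpretation: the probability that a randomly chosen active compound is scored above a randomly chosen decoy. The Python sketch below computes AUC this way (equivalently, the Wilcoxon-Mann-Whitney statistic) on invented scores, not the study's data.

```python
def auc(scores_pos, scores_neg):
    """Rank-based AUC: fraction of (active, decoy) pairs where the active
    scores higher; ties count half. Equals the area under the ROC curve."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical similarity scores for actives vs. decoys
print(auc([0.9, 0.8, 0.6], [0.7, 0.4, 0.3]))  # 8/9 ≈ 0.889
```

On this scale the paper's averaged 0.78 means that, on average over the benchmark targets, an active ranks above a decoy about 78% of the time.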

  9. An Experimental Comparison of Two Methods Of Teaching Numerical Control Manual Programming Concepts; Visual Media Versus Hands-On Equipment.

    ERIC Educational Resources Information Center

    Biekert, Russell

    Accompanying the rapid changes in technology has been a greater dependence on automation and numerical control, which has resulted in the need to find ways of preparing programmers for industrial machines using numerical control. To compare the hands-on equipment method and a visual media method of teaching numerical control, an experimental and a…

  10. Comparison of direct and indirect methods for assessing leaf area index across a tropical rain forest landscape

    Treesearch

    Paulo C. Olivas; Steven F. Oberbauer; David B. Clark; Deborah A. Clark; Michael G. Ryan; Joseph J. O' Brien; Harlyn Ordonez

    2013-01-01

    Many functional properties of forests depend on the leaf area; however, measuring leaf area is not trivial in tall evergreen vegetation. As a result, leaf area is generally estimated indirectly by light absorption methods. These indirect methods are widely used, but have never been calibrated against direct measurements in tropical rain forests, either at point or...

  11. A modified conjugate gradient coefficient with inexact line search for unconstrained optimization

    NASA Astrophysics Data System (ADS)

    Aini, Nurul; Rivaie, Mohd; Mamat, Mustafa

    2016-11-01

    Conjugate gradient (CG) methods are line search algorithms widely known for their application in solving unconstrained optimization problems. Their low memory requirements and global convergence properties make them among the most preferred methods in real-life applications such as engineering and business. In this paper, we present a new CG method based on the AMR* and CD methods for solving unconstrained optimization problems. The resulting algorithm is proven to possess both the sufficient descent and global convergence properties under an inexact line search. Numerical tests are conducted to assess the effectiveness of the new method in comparison with some previous CG methods. The results obtained indicate that our method is indeed superior.

  12. Unfolding sphere size distributions with a density estimator based on Tikhonov regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weese, J.; Korat, E.; Maier, D.

    1997-12-01

    This report proposes a method for unfolding sphere size distributions, given a sample of radii, that combines the advantages of a density estimator with those of Tikhonov regularization methods. The following topics are discussed in this report to develop the method: the relation between the profile and the sphere size distribution; the method for unfolding sphere size distributions; results based on simulations; and a comparison with experimental data.
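The core Tikhonov step in an unfolding problem like this one, stabilizing an ill-posed linear inversion by penalizing the solution norm, can be sketched as follows. The kernel below is an invented Gaussian smoothing operator standing in for the actual profile-to-size-distribution relation; the solver uses the standard normal-equations form of minimizing ||Ax − b||² + λ||x||².

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Regularized least squares: minimize ||Ax - b||^2 + lam * ||x||^2,
    solved via the normal equations (A^T A + lam I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Ill-conditioned smoothing kernel (a crude stand-in for the unfolding operator)
n = 50
grid = np.linspace(0.0, 1.0, n)
A = np.exp(-((grid[:, None] - grid[None, :]) ** 2) / 0.005)

x_true = np.exp(-((grid - 0.5) ** 2) / 0.01)                  # "true" distribution
b = A @ x_true + 1e-3 * np.random.default_rng(1).standard_normal(n)  # noisy data

x_rec = tikhonov_solve(A, b, lam=1e-3)  # lam trades smoothness vs. data fit
```

Without the λI term the solve amplifies the noise; choosing λ well (the hard part, and where the density-estimator viewpoint helps) recovers a stable distribution estimate.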

  13. Comparison of measurement methods for capacitive tactile sensors and their implementation

    NASA Astrophysics Data System (ADS)

    Tarapata, Grzegorz; Sienkiewicz, Rafał

    2015-09-01

    This paper presents a review of ideas and implementations of measurement methods used for capacitance measurement in tactile sensors. The paper describes the technical method, the charge amplification method, and the generation and integration methods. Three selected methods were implemented in a dedicated measurement system and used for capacitance measurements of tactile sensors fabricated in-house. The tactile sensors tested in this work were fully fabricated with inkjet printing technology. The test results are presented and summarised. The charge amplification method (CDC) was selected as the best method for the measurement of the tactile sensors.

  14. Comparison of Response Surface Construction Methods for Derivative Estimation Using Moving Least Squares, Kriging and Radial Basis Functions

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Thiagarajan

    2005-01-01

    Response surface construction methods using Moving Least Squares (MLS), Kriging and Radial Basis Functions (RBF) are compared with the Global Least Squares (GLS) method in three numerical examples for derivative generation capability. Also, a new Interpolating Moving Least Squares (IMLS) method adapted from the meshless method is presented. It is found that the response surface construction methods using Kriging and RBF interpolation yield more accurate results compared with the MLS and GLS methods. Several computational aspects of the response surface construction methods are also discussed.
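
    To give a flavor of one of the surrogates compared here, below is a minimal Gaussian RBF interpolator in one variable. The shape parameter and the sine test function are arbitrary illustrative choices; the paper's derivative-estimation study and the Kriging/MLS variants are not reproduced.

```python
import numpy as np

def rbf_interpolator(x_train, y_train, eps=1.0):
    """Gaussian radial basis function interpolation in one variable.

    Solves for weights so the surrogate reproduces the training data exactly,
    then evaluates the weighted kernel sum at new points.
    """
    phi = lambda r: np.exp(-(eps * r) ** 2)
    A = phi(np.abs(x_train[:, None] - x_train[None, :]))   # kernel matrix
    w = np.linalg.solve(A, y_train)                        # interpolation weights
    return lambda x: phi(np.abs(np.asarray(x)[:, None] - x_train[None, :])) @ w

# Build a surrogate of sin(x) from 9 samples.
x = np.linspace(0.0, 2.0 * np.pi, 9)
y = np.sin(x)
surrogate = rbf_interpolator(x, y, eps=0.8)
# The surrogate is exact at the training points and smooth in between,
# which is what makes it attractive for derivative estimation.
```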

  15. Work Stress Interventions in Hospital Care: Effectiveness of the DISCovery Method

    PubMed Central

    Niks, Irene; Gevers, Josette

    2018-01-01

    Effective interventions to prevent work stress and to improve health, well-being, and performance of employees are of the utmost importance. This quasi-experimental intervention study presents a specific method for diagnosis of psychosocial risk factors at work and subsequent development and implementation of tailored work stress interventions, the so-called DISCovery method. This method aims at improving employee health, well-being, and performance by optimizing the balance between job demands, job resources, and recovery from work. The aim of the study is to quantitatively assess the effectiveness of the DISCovery method in hospital care. Specifically, we used a three-wave longitudinal, quasi-experimental multiple-case study approach with intervention and comparison groups in health care work. Positive changes were found for members of the intervention groups, relative to members of the corresponding comparison groups, with respect to targeted work-related characteristics and targeted health, well-being, and performance outcomes. Overall, results lend support for the effectiveness of the DISCovery method in hospital care. PMID:29438350

  16. Application of the vortex-lattice technique to the analysis of thin wings with vortex separation and thick multi-element wings

    NASA Technical Reports Server (NTRS)

    Smith, C. W.; Bhateley, I. C.

    1976-01-01

    Two techniques for extending the range of applicability of the basic vortex-lattice method are discussed. The first improves the computation of aerodynamic forces on thin, low-aspect-ratio wings of arbitrary planform at subsonic Mach numbers by including the effects of leading-edge and tip vortex separation, characteristic of this type of wing, through use of the well-known suction-analogy method of E. C. Polhamus. Comparisons with experimental data for a variety of planforms are presented. The second consists of the use of the vortex-lattice method to predict pressure distributions over thick multi-element wings (wings with leading- and trailing-edge devices). A method of laying out the lattice is described which gives accurate pressures on the top and part of the bottom surface of the wing. Limited comparisons between the results predicted by this method, the conventional lattice arrangement method, experimental data, and 2-D potential flow analysis techniques are presented.

  17. Automatic variable selection method and a comparison for quantitative analysis in laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong

    2018-05-01

    In this work, an automatic variable selection method for quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method features fully automatic selection without manual intervention. To illustrate its feasibility and effectiveness, a comparison with the genetic algorithm (GA) and the successive projections algorithm (SPA) for the detection of different elements (copper, barium and chromium) in soil was implemented. The experimental results showed that all three methods could accomplish variable selection effectively, among which FSC-mIPW-PLS required significantly shorter computation time (approximately 12 s for 40,000 initial variables) than the others. Moreover, improved quantification models were obtained with the variable selection approaches. The root mean square errors of prediction (RMSEP) of models using the new method were 27.47 (copper), 37.15 (barium) and 39.70 (chromium) mg/kg, showing prediction performance comparable to GA and SPA.

  18. Work Stress Interventions in Hospital Care: Effectiveness of the DISCovery Method.

    PubMed

    Niks, Irene; de Jonge, Jan; Gevers, Josette; Houtman, Irene

    2018-02-13

    Effective interventions to prevent work stress and to improve health, well-being, and performance of employees are of the utmost importance. This quasi-experimental intervention study presents a specific method for diagnosis of psychosocial risk factors at work and subsequent development and implementation of tailored work stress interventions, the so-called DISCovery method. This method aims at improving employee health, well-being, and performance by optimizing the balance between job demands, job resources, and recovery from work. The aim of the study is to quantitatively assess the effectiveness of the DISCovery method in hospital care. Specifically, we used a three-wave longitudinal, quasi-experimental multiple-case study approach with intervention and comparison groups in health care work. Positive changes were found for members of the intervention groups, relative to members of the corresponding comparison groups, with respect to targeted work-related characteristics and targeted health, well-being, and performance outcomes. Overall, results lend support for the effectiveness of the DISCovery method in hospital care.

  19. Comparison of diffusion- and pumped-sampling methods to monitor volatile organic compounds in ground water, Massachusetts Military Reservation, Cape Cod, Massachusetts, July 1999-December 2002

    USGS Publications Warehouse

    Archfield, Stacey A.; LeBlanc, Denis R.

    2005-01-01

    To evaluate diffusion sampling as an alternative method to monitor volatile organic compound (VOC) concentrations in ground water, concentrations in samples collected by traditional pumped-sampling methods were compared to concentrations in samples collected by diffusion-sampling methods for 89 monitoring wells at or near the Massachusetts Military Reservation, Cape Cod. Samples were analyzed for 36 VOCs. There was no substantial difference between the utility of diffusion and pumped samples to detect the presence or absence of a VOC. In wells where VOCs were detected, diffusion-sample concentrations of tetrachloroethene (PCE) and trichloroethene (TCE) were significantly lower than pumped-sample concentrations. Because PCE and TCE concentrations detected in the wells dominated the calculation of many of the total VOC concentrations, when VOC concentrations were summed and compared by sampling method, visual inspection also showed a downward concentration bias in the diffusion-sample concentration. The degree to which pumped- and diffusion-sample concentrations agreed was not a result of variability inherent within the sampling methods or the diffusion process itself. A comparison of the degree of agreement in the results from the two methods to 13 quantifiable characteristics external to the sampling methods offered only well-screen length as being related to the degree of agreement between the methods; however, there is also evidence to indicate that the flushing rate of water through the well screen affected the agreement between the sampling methods. Despite poor agreement between the concentrations obtained by the two methods at some wells, the degree to which the concentrations agree at a given well is repeatable. A one-time, well-by-well comparison between diffusion- and pumped-sampling methods could determine which wells are good candidates for the use of diffusion samplers. 
For wells with good method agreement, the diffusion-sampling method is a time-saving and cost-effective alternative to pumped-sampling methods in a long-term monitoring program, such as at the Massachusetts Military Reservation.

  20. In situ electronic probing of semiconducting nanowires in an electron microscope.

    PubMed

    Fauske, V T; Erlbeck, M B; Huh, J; Kim, D C; Munshi, A M; Dheeraj, D L; Weman, H; Fimland, B O; Van Helvoort, A T J

    2016-05-01

    For the development of electronic nanoscale structures, feedback on their electronic properties is crucial, but challenging to obtain. Here, we present a comparison of various in situ methods for electronically probing single, p-doped GaAs nanowires inside a scanning electron microscope. The methods used include (i) directly probing individual as-grown nanowires with a sharp nano-manipulator, (ii) contacting dispersed nanowires with two metal contacts and (iii) contacting dispersed nanowires with four metal contacts. For the last two cases, we compare the results obtained using conventional ex situ lithography contacting techniques and by in situ, direct-write electron beam induced deposition of a metal (Pt). The comparison shows that 2-probe measurements give consistent results even with contacts made by electron beam induced deposition, but that for 4-probe measurements, stray deposition can be a problem for shorter nanowires. This comparative study demonstrates that the preferred in situ method depends on the required throughput and reliability. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.

  1. DECONV-TOOL: An IDL based deconvolution software package

    NASA Technical Reports Server (NTRS)

    Varosi, F.; Landsman, W. B.

    1992-01-01

    There are a variety of algorithms for deconvolution of blurred images, each having its own criteria or statistic to be optimized in order to estimate the original image data. Using the Interactive Data Language (IDL), we have implemented the Maximum Likelihood, Maximum Entropy, Maximum Residual Likelihood, and sigma-CLEAN algorithms in a unified environment called DeConv_Tool. Most of the algorithms have as their goal the optimization of statistics such as standard deviation and mean of residuals. Shannon entropy, log-likelihood, and chi-square of the residual auto-correlation are computed by DeConv_Tool for the purpose of determining the performance and convergence of any particular method and comparisons between methods. DeConv_Tool allows interactive monitoring of the statistics and the deconvolved image during computation. The final results, and optionally, the intermediate results, are stored in a structure convenient for comparison between methods and review of the deconvolution computation. The routines comprising DeConv_Tool are available via anonymous FTP through the IDL Astronomy User's Library.
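
    As an illustration of the maximum-likelihood family that DeConv_Tool implements, here is a textbook 1-D Richardson-Lucy iteration in Python. The PSF, spike signal and iteration count are illustrative assumptions; the IDL package itself works on images and tracks the residual statistics described above, none of which is reproduced here.

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=100):
    """Maximum-likelihood (Richardson-Lucy) deconvolution in 1-D.

    Multiplicative update: estimate *= (blurred / (estimate * psf)) * psf_mirror,
    which preserves non-negativity of the estimate.
    """
    estimate = np.full_like(blurred, blurred.mean())
    psf_mirror = psf[::-1]
    for _ in range(n_iter):
        conv = np.convolve(estimate, psf, mode="same")
        relative = blurred / np.maximum(conv, 1e-12)   # guard against division by zero
        estimate = estimate * np.convolve(relative, psf_mirror, mode="same")
    return estimate

# Blur a single spike with a small Gaussian-like PSF, then recover it.
psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])   # sums to 1
truth = np.zeros(32)
truth[16] = 1.0
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf, n_iter=100)
# The restored signal re-concentrates the blurred flux back onto the spike.
```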

  2. A comparison of essential oils obtained from lavandin via different extraction processes: Ultrasound, microwave, turbohydrodistillation, steam and hydrodistillation.

    PubMed

    Périno-Issartier, Sandrine; Ginies, Christian; Cravotto, Giancarlo; Chemat, Farid

    2013-08-30

    A total of eight extraction techniques, ranging from conventional methods (hydrodistillation (HD), steam distillation (SD) and turbohydrodistillation (THD)), through innovative techniques (ultrasound-assisted extraction (US-SD)), to microwave-assisted techniques (in situ microwave-generated hydrodistillation (ISMH), microwave steam distillation (MSD), microwave hydrodiffusion and gravity (MHG), and microwave steam diffusion (MSDf)), were used to extract essential oil from lavandin flowers and their results were compared. Extraction time, yield, essential oil composition and sensorial analysis were considered as the principal terms of comparison. The essential oils extracted using the more innovative processes were quantitatively (yield) and qualitatively (aromatic profile) similar to those obtained from the conventional techniques. The method which gave the best results was microwave hydrodiffusion and gravity (MHG), which reduced the extraction time (30 min versus 220 min for SD) with no differences in essential oil yield or sensorial perception. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Dynamical downscaling inter-comparison for high resolution climate reconstruction

    NASA Astrophysics Data System (ADS)

    Ferreira, J.; Rocha, A.; Castanheira, J. M.; Carvalho, A. C.

    2012-04-01

    In the scope of the project "High-resolution Rainfall EroSivity analysis and fORecasTing - RESORT", an evaluation of various dynamical downscaling methods is presented. The methods evaluated range from the classic approach of nesting a regional model within global model results, in this case the ECMWF reanalysis, to more recently proposed methods, which consist of using Newtonian relaxation (nudging) to draw the results of the regional model toward the reanalysis. The method with the best results involves using a variational data assimilation system to incorporate observational data into the regional model. The climatology of a 5-year simulation using this method is tested against observations over mainland Portugal and the ocean in the area of the Portuguese continental shelf, which shows that the method developed is suitable for the reconstruction of high-resolution climate over continental Portugal.

  4. A comparison of methods to predict historical daily streamflow time series in the southeastern United States

    USGS Publications Warehouse

    Farmer, William H.; Archfield, Stacey A.; Over, Thomas M.; Hay, Lauren E.; LaFontaine, Jacob H.; Kiang, Julie E.

    2015-01-01

    Effective and responsible management of water resources relies on a thorough understanding of the quantity and quality of available water. Streamgages cannot be installed at every location where streamflow information is needed. As part of its National Water Census, the U.S. Geological Survey is planning to provide streamflow predictions for ungaged locations. In order to predict streamflow at a useful spatial and temporal resolution throughout the Nation, efficient methods need to be selected. This report examines several methods used for streamflow prediction in ungaged basins to determine the best methods for regional and national implementation. A pilot area in the southeastern United States was selected to apply 19 different streamflow prediction methods and evaluate each method by a wide set of performance metrics. Through these comparisons, two methods emerged as the most generally accurate streamflow prediction methods: the nearest-neighbor implementations of nonlinear spatial interpolation using flow duration curves (NN-QPPQ) and standardizing logarithms of streamflow by monthly means and standard deviations (NN-SMS12L). It was nearly impossible to distinguish between these two methods in terms of performance. Furthermore, neither of these methods requires significantly more parameterization in order to be applied: NN-SMS12L requires 24 regional regressions—12 for monthly means and 12 for monthly standard deviations. NN-QPPQ, in the application described in this study, required 27 regressions of particular quantiles along the flow duration curve. Despite this finding, the results suggest that an optimal streamflow prediction method depends on the intended application. Some methods are stronger overall, while some methods may be better at predicting particular statistics. The methods of analysis presented here reflect a possible framework for continued analysis and comprehensive multiple comparisons of methods of prediction in ungaged basins (PUB). 
Additional metrics of comparison can easily be incorporated into this type of analysis. By considering such a multifaceted approach, the top-performing models can easily be identified and considered for further research. The top-performing models can then provide a basis for future applications and explorations by scientists, engineers, managers, and practitioners to suit their own needs.
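
    The core transform behind the NN-SMS12L method described above can be sketched directly: logarithms of streamflow are standardized by monthly means and standard deviations, then rescaled with the target site's monthly statistics. In the report those statistics come from 24 regional regressions; here they are simple sample statistics, an illustrative simplification, and the function names are ours.

```python
import numpy as np

def standardize_monthly_logs(q, months):
    """Z-score the log-streamflow series month by month (months coded 1..12)."""
    z = np.log(q)
    for m in range(1, 13):
        sel = months == m
        z[sel] = (z[sel] - z[sel].mean()) / z[sel].std()
    return z

def transfer_to_target(q_donor, target_logs, months):
    """Rescale the donor's standardized series with the target's monthly stats."""
    z = standardize_monthly_logs(q_donor, months)
    pred = np.empty_like(z)
    for m in range(1, 13):
        sel = months == m
        pred[sel] = np.exp(target_logs[sel].mean() + target_logs[sel].std() * z[sel])
    return pred

# Two synthetic sites sharing the same standardized signal are recovered exactly.
rng = np.random.default_rng(1)
months = np.tile(np.arange(1, 13), 3)          # 3 years of monthly values
signal = rng.standard_normal(36)
q_donor = np.exp(1.0 + 0.5 * signal)           # donor site
q_target = np.exp(2.0 + 0.8 * signal)          # different scale, same dynamics
q_pred = transfer_to_target(q_donor, np.log(q_target), months)
```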

  5. Meta-analysis of haplotype-association studies: comparison of methods and empirical evaluation of the literature

    PubMed Central

    2011-01-01

    Background Meta-analysis is a popular methodology in several fields of medical research, including genetic association studies. However, the methods used for meta-analysis of association studies that report haplotypes have not been studied in detail. In this work, methods for performing meta-analysis of haplotype association studies are summarized, compared and presented in a unified framework along with an empirical evaluation of the literature. Results We present multivariate methods that use summary-based data as well as methods that use binary and count data in a generalized linear mixed model framework (logistic regression, multinomial regression and Poisson regression). The methods presented here avoid the inflation of the type I error rate that could result from the traditional approach of comparing a haplotype against the remaining ones, and they can be fitted using standard software. Moreover, formal global tests are presented for assessing the statistical significance of the overall association. Although the methods presented here assume that the haplotypes are directly observed, they can easily be extended to allow for such uncertainty by weighting the haplotypes by their probability. Conclusions An empirical evaluation of the published literature and a comparison against meta-analyses that use single nucleotide polymorphisms suggest that meta-analyses of haplotypes include approximately half as many studies and produce significant results twice as often. We show that this excess of statistically significant results stems from the sub-optimal method of analysis used and that, in approximately half of the cases, the statistical significance is refuted if the data are properly re-analyzed. Illustrative examples of code are given in Stata and it is anticipated that the methods developed in this work will be widely applied in the meta-analysis of haplotype association studies. PMID:21247440

  6. A COMPARISON OF FLARE FORECASTING METHODS. I. RESULTS FROM THE “ALL-CLEAR” WORKSHOP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, G.; Leka, K. D.; Dunn, T.

    2016-10-01

    Solar flares produce radiation that can have an almost immediate effect on the near-Earth environment, making it crucial to forecast flares in order to mitigate their negative effects. The number of published approaches to flare forecasting using photospheric magnetic field observations has proliferated, with varying claims about how well each works. Because of the different analysis techniques and data sets used, it is essentially impossible to compare the results from the literature. This problem is exacerbated by the low event rates of large solar flares. The challenges of forecasting rare events have long been recognized in the meteorology community, but have yet to be fully acknowledged by the space weather community. During the interagency workshop on “all clear” forecasts held in Boulder, CO in 2009, the performance of a number of existing algorithms was compared on common data sets, specifically line-of-sight magnetic field and continuum intensity images from the Michelson Doppler Imager, with consistent definitions of what constitutes an event. We demonstrate the importance of making such systematic comparisons, and of using standard verification statistics to determine what constitutes a good prediction scheme. When a comparison was made in this fashion, no one method clearly outperformed all others, which may in part be due to the strong correlations among the parameters used by different methods to characterize an active region. For M-class flares and above, the set of methods tends toward a weakly positive skill score (as measured with several distinct metrics), with no participating method proving substantially better than climatological forecasts.
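
    One standard verification statistic of the kind referred to above can be computed in a couple of lines. This sketch uses the True Skill Statistic (Hanssen-Kuipers discriminant) on a 2x2 forecast contingency table; it is our illustrative choice of metric, not necessarily the one emphasized by the workshop, and the counts are made up.

```python
def true_skill_statistic(hits, misses, false_alarms, correct_negatives):
    """TSS = POD - POFD: 1 for perfect forecasts, 0 for no skill.

    Unlike raw accuracy, TSS is not inflated by rare events: a forecaster who
    always (or never) predicts a flare scores exactly 0.
    """
    pod = hits / (hits + misses)                               # probability of detection
    pofd = false_alarms / (false_alarms + correct_negatives)   # probability of false detection
    return pod - pofd

# A constant "flare every day" forecast on a rare-event sample has no skill:
always_yes = true_skill_statistic(hits=5, misses=0, false_alarms=95, correct_negatives=0)
# A forecast that catches most flares with few false alarms scores well:
good = true_skill_statistic(hits=4, misses=1, false_alarms=10, correct_negatives=85)
```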

  7. Comparison of a 50 mL pycnometer and a 500 mL flask, EURAMET.M.FF.S8 (EURAMET 1297)

    NASA Astrophysics Data System (ADS)

    Mićić, Ljiljana; Batista, Elsa

    2018-01-01

    The purpose of this comparison was to compare the results of the participating laboratories in the calibration of a 50 mL pycnometer and a 500 mL volumetric flask using the gravimetric method. Laboratories were asked to determine the 'contained' volume of the 50 mL pycnometer and of the 500 mL flask at a reference temperature of 20 °C. The gravimetric method was used for both instruments by all laboratories. Main text To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
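
    The gravimetric method reduces, in essence, to one formula: the contained volume at the 20 °C reference temperature is obtained from the apparent mass of the contained water, corrected for air buoyancy and the thermal expansion of the vessel (in the spirit of ISO 4787). A minimal sketch with typical, illustrative property values; the comparison's own uncertainty analysis is not reproduced.

```python
def volume_at_20c(m_app, t, rho_w, rho_a=0.0012, rho_b=8.0, gamma=1e-5):
    """Contained volume (mL) referred to 20 degC.

    m_app : apparent mass of the contained water (g)
    t     : water temperature during the measurement (degC)
    rho_w : water density at t (g/mL)
    rho_a, rho_b : air and balance-weight densities (g/mL); gamma: cubic
    thermal expansion coefficient of the vessel (1/degC). Defaults are
    typical illustrative values, not those of any participant.
    """
    buoyancy = 1.0 - rho_a / rho_b                  # correction for weighing in air
    thermal = 1.0 - gamma * (t - 20.0)              # refer the vessel volume to 20 degC
    return m_app / (rho_w - rho_a) * buoyancy * thermal

# 49.85 g of water at 20 degC (rho_w = 0.99820 g/mL) in a nominal 50 mL pycnometer:
v20 = volume_at_20c(49.85, t=20.0, rho_w=0.99820)
```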

  8. FRASS: the web-server for RNA structural comparison

    PubMed Central

    2010-01-01

    Background The impressive increase in novel RNA structures during the past few years demands automated methods for structure comparison. While many algorithms handle only small motifs, a few techniques developed in recent years (ARTS, DIAL, SARA, SARSA, and LaJolla) are available for the structural comparison of large and intact RNA molecules. Results The FRASS web-server represents an RNA chain by its Gauss integrals and allows one to compare structures of RNA chains and to find similar entries in a database derived from the Protein Data Bank. We observed that FRASS scores correlate well with the ARTS and LaJolla similarity scores. Moreover, the web-server can also reproduce satisfactorily the DARTS classification of RNA 3D structures and the classification of SCOR functions obtained by the SARA method. Conclusions The FRASS web-server can easily be used to detect relationships among RNA molecules and to scan the rapidly enlarging structural databases efficiently. PMID:20553602

  9. Strategies and tools for whole genome alignments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Couronne, Olivier; Poliakov, Alexander; Bray, Nicolas

    2002-11-25

    The availability of the assembled mouse genome makes possible, for the first time, an alignment and comparison of two large vertebrate genomes. We have investigated different strategies of alignment for the subsequent analysis of conservation of genomes that are effective for different quality assemblies. These strategies were applied to the comparison of the working draft of the human genome with the Mouse Genome Sequencing Consortium assembly, as well as other intermediate mouse assemblies. Our methods are fast and the resulting alignments exhibit a high degree of sensitivity, covering more than 90 percent of known coding exons in the human genome. We have obtained such coverage while preserving specificity. With a view towards the end user, we have developed a suite of tools and websites for automatically aligning, and subsequently browsing and working with, whole genome comparisons. We describe the use of these tools to identify conserved non-coding regions between the human and mouse genomes, some of which have not been identified by other methods.

  10. On necessary conditions for the comparison principle and the sub- and supersolution method for the stationary Kirchhoff equation

    NASA Astrophysics Data System (ADS)

    Iturriaga, Leonelo; Massa, Eugenio

    2018-01-01

    In this paper, we propose a counterexample to the validity of the comparison principle and of the sub- and supersolution method for nonlocal problems like the stationary Kirchhoff equation. This counterexample shows that in general smooth bounded domains in any dimension, these properties cannot hold true if the nonlinear nonlocal term M(∥u∥²) is somewhere increasing with respect to the H₀¹-norm of the solution. Comparing with the existing results, this fills a gap between known conditions on M that guarantee or prevent these properties and leads to a condition that is necessary and sufficient for the validity of the comparison principle. It is worth noting that equations similar to the one considered here have gained interest recently for appearing in models of thermo-convective flows of non-Newtonian fluids or of electrorheological fluids, among others.

  11. Comparison of broth macrodilution, broth microdilution, and E test antifungal susceptibility tests for fluconazole.

    PubMed Central

    Sewell, D L; Pfaller, M A; Barry, A L

    1994-01-01

    A comparison of the E test, the broth microdilution test, and the reference broth macrodilution susceptibility test of the National Committee for Clinical Laboratory Standards for fluconazole susceptibility testing was performed with 238 clinical isolates of Candida species and Torulopsis (Candida) glabrata. An 80% inhibition endpoint MIC was determined by the reference broth macrodilution method after 48 h of incubation. The MICs obtained by the two study methods were read after 24 and 48 h of incubation. Overall, excellent agreement within 2 doubling dilutions was obtained between the broth microdilution and the broth macrodilution methods for the combined results for all species at both 24 h (93%) and 48 h (94%). The correlation of 24-h MIC endpoints between the E test and the broth macrodilution methods was 37% for T. glabrata, 56% for Candida tropicalis, 93% for Candida albicans, and 90% for other Candida species. The percent agreement at 48 h ranged from 34% for T. glabrata to 97% for Candida species other than C. albicans and C. tropicalis. These initial results support the further evaluation of the E test as an alternative method for fluconazole susceptibility testing of Candida species. PMID:7814531

  12. Digital pulse shape discrimination methods for n-γ separation in an EJ-301 liquid scintillation detector

    NASA Astrophysics Data System (ADS)

    Wan, Bo; Zhang, Xue-Ying; Chen, Liang; Ge, Hong-Lin; Ma, Fei; Zhang, Hong-Bin; Ju, Yong-Qin; Zhang, Yan-Bin; Li, Yan-Yan; Xu, Xiao-Wei

    2015-11-01

    A digital pulse shape discrimination system based on a programmable module NI-5772 has been established and tested with an EJ-301 liquid scintillation detector. The module was operated by running programs developed in LabVIEW, with a sampling frequency up to 1.6 GS/s. Standard gamma sources 22Na, 137Cs and 60Co were used to calibrate the EJ-301 liquid scintillation detector, and the gamma response function was obtained. Digital algorithms for the charge comparison method and the zero-crossing method have been developed. The experimental results show that both digital signal processing (DSP) algorithms can discriminate neutrons from γ-rays. Moreover, the zero-crossing method shows better n-γ discrimination at 80 keVee and lower, whereas the charge comparison method gives better results at higher thresholds. In addition, the figures-of-merit (FOM) for detectors of two different dimensions were extracted at 9 energy thresholds, and it was found that the smaller detector presented better n-γ separation for fission neutrons. Supported by National Natural Science Foundation of China (91226107, 11305229) and the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA03030300)
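
    The charge comparison idea is simple enough to sketch on synthetic pulses: neutron pulses in EJ-301 carry a larger slow scintillation component, so the ratio of tail charge to total charge separates the two particle types. The pulse shapes, decay constants and gate position below are illustrative assumptions; the paper's NI-5772 DSP chain is not reproduced.

```python
import numpy as np

def charge_comparison_psd(pulse, gate_split):
    """PSD figure for the charge comparison method: tail charge / total charge."""
    return pulse[gate_split:].sum() / pulse.sum()

# Synthetic two-component scintillation pulses (fast + slow decay).
t = np.arange(200.0)                   # sample index (arbitrary time units)
fast = np.exp(-t / 5.0)                # fast component, shared by both particles
slow = np.exp(-t / 80.0)               # slow component
gamma_pulse = fast + 0.05 * slow       # gammas: small slow fraction
neutron_pulse = fast + 0.30 * slow     # neutrons: larger slow fraction

r_gamma = charge_comparison_psd(gamma_pulse, gate_split=30)
r_neutron = charge_comparison_psd(neutron_pulse, gate_split=30)
# r_neutron > r_gamma: thresholding this ratio discriminates n from gamma.
```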

  13. Applying wavelet transforms to analyse aircraft-measured turbulence and turbulent fluxes in the atmospheric boundary layer over eastern Siberia

    NASA Astrophysics Data System (ADS)

    Strunin, M. A.; Hiyama, T.

    2004-11-01

    The wavelet spectral method was applied to aircraft-based measurements of atmospheric turbulence obtained during joint Russian-Japanese research on the atmospheric boundary layer near Yakutsk (eastern Siberia) in April-June 2000. Practical ways to apply Fourier and wavelet methods for aircraft-based turbulence data are described. Comparisons between Fourier and wavelet transform results are shown and they demonstrate, in conjunction with theoretical and experimental restrictions, that the Fourier transform method is not useful for studying non-homogeneous turbulence. The wavelet method is free from many disadvantages of Fourier analysis and can yield more informative results. Comparison of Fourier and Morlet wavelet spectra showed good agreement at high frequencies (small scales). The quality of the wavelet transform and corresponding software was estimated by comparing the original data with restored data constructed with an inverse wavelet transform. A Haar wavelet basis was inappropriate for the turbulence data; the mother wavelet function recommended in this study is the Morlet wavelet. Good agreement was also shown between variances and covariances estimated with different mathematical techniques, i.e. through non-orthogonal wavelet spectra and through eddy correlation methods.
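
    The Morlet transform recommended above amounts to correlating the signal with scaled complex Gaussian wavelets. A one-scale sketch follows; normalization conventions vary, and this one uses the common 1/sqrt(scale) factor and omits the small admissibility correction, so it is indicative only.

```python
import numpy as np

def morlet_cwt(signal, scale, dt=1.0, w0=6.0):
    """CWT of a real signal at one scale with the Morlet mother wavelet.

    Because conj(psi(-t)) = psi(t) for the Morlet wavelet, the CWT correlation
    can be written as a plain convolution with psi itself.
    """
    t = (np.arange(signal.size) - signal.size // 2) * dt
    psi = np.pi ** -0.25 * np.exp(1j * w0 * t / scale - 0.5 * (t / scale) ** 2)
    return np.convolve(signal, psi, mode="same") * dt / np.sqrt(scale)

# A 2 Hz sine lights up at the matching scale (scale ~ w0 / (2*pi*f)) and is
# strongly suppressed at a scale tuned to 0.5 Hz.
dt = 0.01
time = np.arange(0.0, 10.0, dt)
sig = np.sin(2.0 * np.pi * 2.0 * time)
match = np.abs(morlet_cwt(sig, scale=6.0 / (2.0 * np.pi * 2.0), dt=dt))
off = np.abs(morlet_cwt(sig, scale=6.0 / (2.0 * np.pi * 0.5), dt=dt))
```

Unlike a global Fourier spectrum, the coefficient magnitude is available at every time sample, which is what makes the method usable for the non-homogeneous turbulence discussed above.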

  14. Optimal Threshold Determination for Interpreting Semantic Similarity and Particularity: Application to the Comparison of Gene Sets and Metabolic Pathways Using GO and ChEBI

    PubMed Central

    Bettembourg, Charles; Diot, Christian; Dameron, Olivier

    2015-01-01

    Background The analysis of gene annotations referencing back to Gene Ontology plays an important role in the interpretation of high-throughput experiments results. This analysis typically involves semantic similarity and particularity measures that quantify the importance of the Gene Ontology annotations. However, there is currently no sound method supporting the interpretation of the similarity and particularity values in order to determine whether two genes are similar or whether one gene has some significant particular function. Interpretation is frequently based either on an implicit threshold, or an arbitrary one (typically 0.5). Here we investigate a method for determining thresholds supporting the interpretation of the results of a semantic comparison. Results We propose a method for determining the optimal similarity threshold by minimizing the proportions of false-positive and false-negative similarity matches. We compared the distributions of the similarity values of pairs of similar genes and pairs of non-similar genes. These comparisons were performed separately for all three branches of the Gene Ontology. In all situations, we found overlap between the similar and the non-similar distributions, indicating that some similar genes had a similarity value lower than the similarity value of some non-similar genes. We then extend this method to the semantic particularity measure and to a similarity measure applied to the ChEBI ontology. Thresholds were evaluated over the whole HomoloGene database. For each group of homologous genes, we computed all the similarity and particularity values between pairs of genes. Finally, we focused on the PPAR multigene family to show that the similarity and particularity patterns obtained with our thresholds were better at discriminating orthologs and paralogs than those obtained using default thresholds. Conclusion We developed a method for determining optimal semantic similarity and particularity thresholds. 
We applied this method on the GO and ChEBI ontologies. Qualitative analysis using the thresholds on the PPAR multigene family yielded biologically-relevant patterns. PMID:26230274
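
    The threshold-selection idea above generalizes to any pair of score distributions. Here is a sketch that scans candidate thresholds and minimizes false positives plus false negatives; the two Gaussian score samples are synthetic stand-ins for the similar and non-similar gene pairs of the paper, and ties are resolved by the first minimum.

```python
import numpy as np

def optimal_threshold(similar, non_similar):
    """Similarity threshold minimizing false negatives + false positives.

    Candidate thresholds are the observed score values themselves; each
    candidate is scored by how many similar pairs fall below it (FN) plus
    how many non-similar pairs reach it (FP).
    """
    candidates = np.unique(np.concatenate([similar, non_similar]))
    best_t, best_err = candidates[0], np.inf
    for t in candidates:
        fn = int((similar < t).sum())         # similar pairs called non-similar
        fp = int((non_similar >= t).sum())    # non-similar pairs called similar
        if fn + fp < best_err:
            best_t, best_err = t, fn + fp
    return best_t

# Overlapping synthetic score distributions on [0, 1].
rng = np.random.default_rng(0)
similar = rng.normal(0.8, 0.1, 200).clip(0.0, 1.0)
non_similar = rng.normal(0.3, 0.1, 200).clip(0.0, 1.0)
t_opt = optimal_threshold(similar, non_similar)
# t_opt lands between the two modes, rather than at an arbitrary 0.5.
```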

  15. A study of autonomous satellite navigation methods using the global positioning satellite system

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.

    1980-01-01

    Special orbit determination algorithms were developed to accommodate the size and speed limitations of on-board computer systems of the NAVSTAR Global Positioning System. The algorithms use square root sequential filtering methods. A new method for the time update of the square root covariance matrix was also developed. In addition, the time update method was compared with another square root covariance propagation method to determine relative performance characteristics. Comparisons were based on the results of computer simulations of the LANDSAT-D satellite processing pseudo range and pseudo range-rate measurements from the phase one GPS. A summary of the comparison results is presented.
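The abstract's square-root covariance time update can be illustrated with the standard QR-based propagation. This is a generic sketch of the technique, not the specific new method the report develops; the state-transition matrix and noise values below are toy assumptions.

```python
import numpy as np

def sqrt_time_update(Phi, S, G):
    """Propagate a square-root covariance factor S (P = S @ S.T)
    through one time step, P -> Phi P Phi^T + G G^T, returning a new
    triangular factor via QR factorization. Working with the square
    root instead of P itself is better conditioned numerically."""
    n = S.shape[0]
    A = np.hstack([Phi @ S, G])   # A @ A.T equals the predicted covariance
    _, R = np.linalg.qr(A.T)      # A.T = Q R  =>  A A^T = R^T R
    return R[:n, :].T             # triangular square root of Phi P Phi^T + G G^T

# Toy 2-state constant-velocity model (illustrative values only).
dt = 1.0
Phi = np.array([[1.0, dt], [0.0, 1.0]])
P0 = np.diag([1.0, 0.5])
S = np.linalg.cholesky(P0)
G = np.diag([0.1, 0.1])
S_new = sqrt_time_update(Phi, S, G)
P_new = S_new @ S_new.T
P_ref = Phi @ P0 @ Phi.T + G @ G.T
print(np.allclose(P_new, P_ref))  # True
```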

  16. Comparison of different approaches of modelling in a masonry building

    NASA Astrophysics Data System (ADS)

    Saba, M.; Meloni, D.

    2017-12-01

    The present work has the objective of modelling a simple masonry building through two different modelling methods in order to assess their validity in terms of evaluation of static stresses. Two of the most widely used commercial software packages for this kind of problem were chosen: 3Muri of S.T.A. Data S.r.l. and Sismicad12 of Concrete S.r.l. While the 3Muri software adopts the Frame by Macro Elements Method (FME), which should be more schematic and more efficient, the Sismicad12 software uses the Finite Element Method (FEM), which guarantees accurate results with greater computational burden. Remarkable differences in the static stresses between the two approaches have been found for such a simple structure, and an interesting comparison and analysis of the reasons is proposed.

  17. Numerical simulation and comparison of two ventilation methods for a restaurant - displacement vs mixed flow ventilation

    NASA Astrophysics Data System (ADS)

    Chitaru, George; Berville, Charles; Dogeanu, Angel

    2018-02-01

    This paper presents a comparison between a displacement ventilation method and a mixed flow ventilation method using a computational fluid dynamics (CFD) approach. The paper analyses different aspects of the two systems, like the draft effect in certain areas and the air temperature and velocity distribution in the occupied zone. The results highlighted that the displacement ventilation system presents an advantage for the current scenario, due to the increased buoyancy-driven flows caused by the interior heat sources. For the displacement ventilation case the draft effect was less prone to appear in the occupied zone, but the high heat emissions from the interior sources increased the temperature gradient in the occupied zone. Both systems have been studied in similar conditions, concentrating only on the flow patterns for each case.

  18. Comparison of Response Surface and Kriging Models for Multidisciplinary Design Optimization

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Korte, John J.; Mauery, Timothy M.; Mistree, Farrokh

    1998-01-01

    In this paper, we compare and contrast the use of second-order response surface models and kriging models for approximating non-random, deterministic computer analyses. After reviewing the response surface method for constructing polynomial approximations, kriging is presented as an alternative approximation method for the design and analysis of computer experiments. Both methods are applied to the multidisciplinary design of an aerospike nozzle which consists of a computational fluid dynamics model and a finite-element model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations, and four optimization problems are formulated and solved using both sets of approximation models. The second-order response surface models and kriging models-using a constant underlying global model and a Gaussian correlation function-yield comparable results.
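A second-order response surface of the kind compared in this paper is just a full quadratic polynomial fitted by least squares. The sketch below is a minimal illustration of that idea, assuming nothing about the paper's aerospike nozzle models; the function and data are invented for the example.

```python
import numpy as np

def quadratic_response_surface(X, y):
    """Fit a full second-order polynomial
    y ~ b0 + sum_i bi*xi + sum_{i<=j} bij*xi*xj
    by ordinary least squares; returns a callable predictor."""
    X = np.asarray(X, float)
    n, k = X.shape

    def basis(x):
        terms = [1.0] + list(x)
        for i in range(k):
            for j in range(i, k):
                terms.append(x[i] * x[j])
        return terms

    A = np.array([basis(x) for x in X])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda x: float(np.array(basis(np.asarray(x, float))) @ coeffs)

# An exactly quadratic function is recovered from scattered samples.
rng = np.random.default_rng(1)
Xs = rng.uniform(-1, 1, size=(30, 2))
ys = 2 + Xs[:, 0] - 3 * Xs[:, 1] + Xs[:, 0] * Xs[:, 1] + Xs[:, 1] ** 2
f = quadratic_response_surface(Xs, ys)
print(round(f([0.5, -0.5]), 3))  # 4.0
```

Kriging differs in that it interpolates the sample points exactly using a spatial correlation function (Gaussian, in the paper) rather than smoothing them with a fixed polynomial form.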

  19. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing.

    PubMed

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-02-01

    A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify the constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of the performance characteristics of quantitative molecular assays prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
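The core Bland-Altman computation the paper applies is simple: the mean bias between paired measurements and the 95% limits of agreement. The sketch below is a generic textbook version, not the paper's adapted variant, and the toy data are invented to show how a constant error appears directly as bias (which R² alone would miss entirely).

```python
import numpy as np

def bland_altman(x, y):
    """Bland-Altman statistics for two paired measurement methods:
    mean bias and 95% limits of agreement (bias +/- 1.96 * SD of the
    pairwise differences)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    diff = y - x
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Method B reads a constant 2 units above method A (pure constant error):
# the correlation between A and B is perfect, yet the bias is nonzero.
a = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
b = a + 2.0
bias, (lo, hi) = bland_altman(a, b)
print(bias)  # 2.0 (zero-spread toy data, so the limits collapse onto the bias)
```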

  20. A comparative proteomics method for multiple samples based on a 18O-reference strategy and a quantitation and identification-decoupled strategy.

    PubMed

    Wang, Hongbin; Zhang, Yongqian; Gui, Shuqi; Zhang, Yong; Lu, Fuping; Deng, Yulin

    2017-08-15

    Comparisons across large numbers of samples are frequently necessary in quantitative proteomics. Many quantitative methods used in proteomics are based on stable isotope labeling, but most of these are only useful for comparing two samples. For up to eight samples, the iTRAQ labeling technique can be used. For greater numbers of samples, the label-free method has been used, but this method has been criticized for low reproducibility and accuracy. An ingenious strategy has been introduced, comparing each sample against an 18O-labeled reference sample created by pooling equal amounts of all samples. However, it is necessary to use proportion-known protein mixtures to investigate and evaluate this new strategy. Another problem for comparative proteomics of multiple samples is the poor coincidence and reproducibility of protein identification results across samples. In the present study, a method combining the 18O-reference strategy and a quantitation and identification-decoupled strategy was investigated with proportion-known protein mixtures. The results clearly demonstrated that the 18O-reference strategy had greater accuracy and reliability than previously used comparison methods based on transferring comparison or label-free strategies. In the decoupling strategy, the quantification data acquired by LC-MS and the identification data acquired by LC-MS/MS are matched and correlated according to retention time and accurate mass to identify differentially expressed proteins. This strategy made protein identification possible for all samples using a single pooled sample, and therefore gave good reproducibility in protein identification across multiple samples, and allowed peptide identification to be optimized separately so as to identify more proteins. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. APMP supplementary comparison (APMP.L-S8) measurement of flatness of optical flat by interferometry

    NASA Astrophysics Data System (ADS)

    Buajarern, J.; Bitou, Y.; Zi, X.; Zhao, L.; Swift, N.; Agarwal, A.; Hungwe, F.

    2018-01-01

    A regional supplementary comparison, APMP.L-S8, was started in 2015 to demonstrate the equivalence of routine calibration services offered by NMIs to clients. Participants in this APMP.L-S8 comparison agreed to apply an interferometric method for flatness measurement of the optical flats. Two configurations of flatness interferometer were used in this comparison, a vertical type and a horizontal type. Seven NMI laboratories participated in this supplementary comparison: NIMT, NMIJ, NIM, NMC/A*STAR, MSL, NPLI and NMISA. This report describes the measurement results for two optical flats, with diameters of 70 mm and 160 mm. The calibrations for this comparison were carried out by participants during the period from July 2015 to September 2016. The results show degrees of equivalence below 1 for all measurands; hence, there is close agreement between the measurements from all participants. Main text To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCL, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  2. Nested Sampling for Bayesian Model Comparison in the Context of Salmonella Disease Dynamics

    PubMed Central

    Dybowski, Richard; McKinley, Trevelyan J.; Mastroeni, Pietro; Restif, Olivier

    2013-01-01

    Understanding the mechanisms underlying the observed dynamics of complex biological systems requires the statistical assessment and comparison of multiple alternative models. Although this has traditionally been done using maximum likelihood-based methods such as Akaike's Information Criterion (AIC), Bayesian methods have gained in popularity because they provide more informative output in the form of posterior probability distributions. However, comparison between multiple models in a Bayesian framework is made difficult by the computational cost of numerical integration over large parameter spaces. A new, efficient method for the computation of posterior probabilities has recently been proposed and applied to complex problems from the physical sciences. Here we demonstrate how nested sampling can be used for inference and model comparison in biological sciences. We present a reanalysis of data from experimental infection of mice with Salmonella enterica showing the distribution of bacteria in liver cells. In addition to confirming the main finding of the original analysis, which relied on AIC, our approach provides: (a) integration across the parameter space, (b) estimation of the posterior parameter distributions (with visualisations of parameter correlations), and (c) estimation of the posterior predictive distributions for goodness-of-fit assessments of the models. The goodness-of-fit results suggest that alternative mechanistic models and a relaxation of the quasi-stationary assumption should be considered. PMID:24376528

  3. Assessment of regional air quality by a concentration-dependent Pollution Permeation Index

    PubMed Central

    Liang, Chun-Sheng; Liu, Huan; He, Ke-Bin; Ma, Yong-Liang

    2016-01-01

    Although air quality monitoring networks have been greatly improved, interpreting their expanding data in ways that are both simple and efficient remains challenging; new analytical methods are therefore needed. We developed such a method based on the comparison of pollutant concentrations between target and circum areas (circum comparison for short), and tested its applications by assessing the air pollution in Jing-Jin-Ji, Yangtze River Delta, Pearl River Delta and Cheng-Yu, China during 2015. We found the circum comparison can instantly judge whether a city is a pollution permeation donor or a pollution permeation receptor by a Pollution Permeation Index (PPI). Furthermore, a PPI-related estimated concentration (the original concentration plus half the average concentration difference) can be used to identify some overestimations and underestimations. It can also help explain pollution processes (e.g., Beijing's PM2.5 may be largely promoted by non-local SO2), though it was not designed for that purpose. Moreover, it is applicable to any region, easy to handle, and able to inspire more new analytical methods. These advantages, despite its disadvantages in considering the whole process jointly influenced by complex physical and chemical factors, demonstrate that the PPI-based circum comparison can be efficiently used in assessing air pollution by yielding instructive results, without the need for complex operations. PMID:27731344
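The abstract's "original concentration plus half the average concentration difference" estimate can be written down directly. The sketch below is hypothetical: the function name, the sign convention for donor vs. receptor, and the concentrations are all assumptions for illustration, since the abstract does not give the full PPI definition.

```python
def ppi_estimate(target_conc, circum_concs):
    """Hypothetical sketch of the PPI-related estimated concentration
    described in the abstract: the target area's concentration plus half
    the average difference between the surrounding (circum) areas and the
    target. Under this sign convention, a positive average difference
    suggests a permeation receptor, a negative one a permeation donor."""
    avg_diff = sum(c - target_conc for c in circum_concs) / len(circum_concs)
    return target_conc + avg_diff / 2.0

# Target city at 80 ug/m3 surrounded by cities at 100, 90 and 110 ug/m3.
print(ppi_estimate(80.0, [100.0, 90.0, 110.0]))  # 90.0
```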

  4. Stability of outer planetary orbits around binary stars - A comparison of Hill's and Laplace's stability criteria

    NASA Technical Reports Server (NTRS)

    Kubala, A.; Black, D.; Szebehely, V.

    1993-01-01

    A comparison is made between the stability criteria of Hill and that of Laplace to determine the stability of outer planetary orbits encircling binary stars. The restricted, analytically determined results of Hill's method by Szebehely and coworkers and the general, numerically integrated results of Laplace's method by Graziani and Black (1981) are compared for varying values of the mass parameter mu. For mu = 0 to 0.15, the closest orbit (lower limit of radius) an outer planet in a binary system can have and still remain stable is determined by Hill's stability criterion. For mu greater than 0.15, the critical radius is determined by Laplace's stability criterion. It appears that the Graziani-Black stability criterion describes the critical orbit within a few percent for all values of mu.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frayce, D.; Khayat, R.E.; Derdouri, A.

    The dual reciprocity boundary element method (DRBEM) is implemented to solve three-dimensional transient heat conduction problems in the presence of arbitrary sources, typically as these problems arise in materials processing. The DRBEM has a major advantage over conventional BEM, since it avoids the computation of volume integrals. These integrals stem from transient, nonlinear, and/or source terms. Thus there is no need to discretize the inner domain, since only a number of internal points are needed for the computation. The validity of the method is assessed upon comparison with results from benchmark problems where analytical solutions exist. There is generally good agreement. Comparison against finite element results is also favorable. Calculations are carried out in order to assess the influence of the number and location of internal nodes. The influence of the ratio of the numbers of internal to boundary nodes is also examined.

  6. From heuristic optimization to dictionary learning: a review and comprehensive comparison of image denoising algorithms.

    PubMed

    Shao, Ling; Yan, Ruomei; Li, Xuelong; Liu, Yan

    2014-07-01

    Image denoising is a well explored topic in the field of image processing. In the past several decades, the progress made in image denoising has benefited from the improved modeling of natural images. In this paper, we introduce a new taxonomy based on image representations for a better understanding of state-of-the-art image denoising techniques. Within each category, several representative algorithms are selected for evaluation and comparison. The experimental results are discussed and analyzed to determine the overall advantages and disadvantages of each category. In general, the nonlocal methods within each category produce better denoising results than local ones. In addition, methods based on overcomplete representations using learned dictionaries perform better than others. The comprehensive study in this paper would serve as a good reference and stimulate new research ideas in image denoising.

  7. LipidQC: Method Validation Tool for Visual Comparison to SRM 1950 Using NIST Interlaboratory Comparison Exercise Lipid Consensus Mean Estimate Values.

    PubMed

    Ulmer, Candice Z; Ragland, Jared M; Koelmel, Jeremy P; Heckert, Alan; Jones, Christina M; Garrett, Timothy J; Yost, Richard A; Bowden, John A

    2017-12-19

    As advances in analytical separation techniques, mass spectrometry instrumentation, and data processing platforms continue to spur growth in the lipidomics field, more structurally unique lipid species are detected and annotated. The lipidomics community is in need of benchmark reference values to assess the validity of various lipidomics workflows in providing accurate quantitative measurements across the diverse lipidome. LipidQC addresses the harmonization challenge in lipid quantitation by providing a semiautomated process, independent of analytical platform, for visual comparison of experimental results of National Institute of Standards and Technology Standard Reference Material (SRM) 1950, "Metabolites in Frozen Human Plasma", against benchmark consensus mean concentrations derived from the NIST Lipidomics Interlaboratory Comparison Exercise.

  8. Intercomparison of methods of coupling between convection and large-scale circulation. 1. Comparison over uniform surface conditions

    DOE PAGES

    Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; ...

    2015-10-24

    Here, as part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak-temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column which is coupled to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed systematic comparison of the behavior of different models under a consistent implementation of the WTG method and the DGW method and systematic comparison of the WTG and DGW methods in models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both WTG and DGW methods. Some of the models reproduce the reference state while others sustain a large-scale circulation which results in either substantially lower or higher precipitation compared to the value of the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and a corresponding WTG simulation can produce different signed circulation. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivities to the initial moisture conditions occur for multiple stable equilibria in some WTG simulations, corresponding to either a dry equilibrium state when initialized as dry or a precipitating equilibrium state when initialized as moist. Multiple equilibria are seen in more WTG simulations for higher SST. In some models, the existence of multiple equilibria is sensitive to some parameters in the WTG calculations.

  9. Stability of compressible Taylor-Couette flow

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Chow, Chuen-Yen

    1991-01-01

    Compressible stability equations are solved using the spectral collocation method in an attempt to study the effects of temperature difference and compressibility on the stability of Taylor-Couette flow. It is found that the Chebyshev collocation spectral method yields highly accurate results using fewer grid points for solving stability problems. Comparisons are made between the result obtained by assuming small Mach number with a uniform temperature distribution and that based on fully incompressible analysis.

  10. Truss Optimization for a Manned Nuclear Electric Space Vehicle using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Benford, Andrew; Tinker, Michael L.

    2004-01-01

    The purpose of this paper is to utilize the genetic algorithm (GA) optimization method for structural design of a nuclear propulsion vehicle. Genetic algorithms provide a guided, random search technique that mirrors biological adaptation. To verify the GA capabilities, other traditional optimization methods were used to generate results for comparison to the GA results, first for simple two-dimensional structures, and then for full-scale three-dimensional truss designs.

  11. Phased Array Ultrasound: Initial Development of PAUT Inspection of Self-Reacting Friction Stir Welds

    NASA Technical Reports Server (NTRS)

    Rairigh, Ryan

    2008-01-01

    This slide presentation reviews the development of Phased Array Ultrasound (PAUT) as a non-destructive examination method for Self Reacting Friction Stir Welds (SR-FSW). PAUT is the only NDE method which has been shown to detect detrimental levels of Residual Oxide Defect (ROD), which can result in significant decrease in weld strength. The presentation reviews the PAUT process, and shows the results in comparison with x-ray radiography.

  12. Comparison of wind tunnel test results at free stream Mach 0.7 with results from the Boeing TEA-230 subsonic flow method. [wing flow method tests

    NASA Technical Reports Server (NTRS)

    Mohn, L. W.

    1975-01-01

    The use of the Boeing TEA-230 Subsonic Flow Analysis method as a primary design tool in the development of cruise overwing nacelle configurations is presented. Surface pressure characteristics at 0.7 Mach number were determined by the TEA-230 method for a selected overwing flow-through nacelle configuration. Results of this analysis show excellent overall agreement with corresponding wind tunnel data. Effects of the presence of the nacelle on the wing pressure field were predicted accurately by the theoretical method. Evidence is provided that differences between theoretical and experimental pressure distributions in the present study would not result in significant discrepancies in the nacelle lines or nacelle drag estimates.

  13. Comparison of Signal Response Between EDM Notch and Cracks in Eddy-Current Testing

    NASA Technical Reports Server (NTRS)

    Kane, Mary; Koshti, Ajay

    2008-01-01

    In the field of eddy-current testing (ET), an instrument is calibrated on a manufactured notch that is designed to simulate a defect in a part. The calibrated instrument is then used to scan parts, with any response over half the amplitude of the notch signal taken to indicate a defect. The purpose of this study is to attempt a direct comparison of the signal response observed from an EDM notch with that from a crack of the same size. To make this comparison, test equipment will be set up and calibrated as per normal inspection procedures. Once this has been achieved, the notches and crack specimens of as many different sizes as available will be scanned and the data recorded. These data will then be analyzed to provide a comparison of the responses. The results should also provide information showing whether it is acceptable to use the half-amplitude method for determining if a part is defective. The tests will be performed on two materials commonly inspected, titanium and aluminum, allowing a comparison of the results between materials.

  14. Calibration of BCR-ABL1 mRNA quantification methods using genetic reference materials is a valid strategy to report results on the international scale.

    PubMed

    Mauté, Carole; Nibourel, Olivier; Réa, Delphine; Coiteux, Valérie; Grardel, Nathalie; Preudhomme, Claude; Cayuela, Jean-Michel

    2014-09-01

    Until recently, diagnostic laboratories that wanted to report on the international scale had limited options: they had to align their BCR-ABL1 quantification methods through a sample exchange with a reference laboratory to derive a conversion factor. However, commercial methods calibrated on the World Health Organization genetic reference panel are now available. We report results from a study designed to assess the comparability of the two alignment strategies. Sixty follow-up samples from chronic myeloid leukemia patients were included. Two commercial methods calibrated on the genetic reference panel were compared to two conversion factor methods routinely used at Saint-Louis Hospital, Paris, and at Lille University Hospital. Results were matched against concordance criteria (i.e., obtaining at least two of the three following landmarks: 50%, 75% and 90% of the patient samples within a 2-fold, 3-fold and 5-fold range, respectively). Out of the 60 samples, more than 32 were available for each comparison. Compared to the conversion factor method, the two commercial methods were within a 2-fold, 3-fold and 5-fold range for 53% and 59%, 89% and 88%, and 100% and 97%, respectively, of the samples analyzed at Saint-Louis. At Lille, the results were 45% and 85%, 76% and 97%, and 100% and 100%, respectively. Agreements between methods were observed in all four comparisons performed. Our data show that the two commercial methods selected are concordant with the conversion factor methods. This study brings the proof of principle that alignment on the international scale using the genetic reference panel is compatible with the patient sample exchange procedure. We believe that these results are particularly important for diagnostic laboratories wishing to adopt commercial methods. Copyright © 2014 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
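The fold-range concordance criteria used in this study are easy to state as code. This is an illustrative sketch of the published criteria, with invented toy data; the function name and the exact treatment of ratios are assumptions.

```python
def fold_concordance(ref, test, folds=(2.0, 3.0, 5.0)):
    """Fraction of paired results whose ratio test/ref falls within an
    n-fold range of the reference (1/n <= test/ref <= n), for each n."""
    fracs = []
    for n in folds:
        inside = sum(1 for r, t in zip(ref, test)
                     if r > 0 and 1.0 / n <= t / r <= n)
        fracs.append(inside / len(ref))
    return fracs

# Criteria from the abstract: at least two of the three landmarks
# (>= 50%, 75% and 90% of samples within 2-, 3- and 5-fold) must hold.
ref = [1.0, 0.5, 0.1, 0.01, 0.05, 2.0]
test = [1.8, 0.3, 0.25, 0.04, 0.06, 1.1]
within2, within3, within5 = fold_concordance(ref, test)
landmarks = (0.50, 0.75, 0.90)
concordant = sum(f >= c for f, c in zip((within2, within3, within5),
                                        landmarks)) >= 2
print(within2, within3, within5, concordant)
```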

  15. A harmonic polynomial cell (HPC) method for 3D Laplace equation with application in marine hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, Yan-Lin, E-mail: yanlin.shao@dnvgl.com; Faltinsen, Odd M.

    2014-10-01

    We propose a new efficient and accurate numerical method based on harmonic polynomials to solve boundary value problems governed by the 3D Laplace equation. The computational domain is discretized by overlapping cells. Within each cell, the velocity potential is represented by the linear superposition of a complete set of harmonic polynomials, which are the elementary solutions of the Laplace equation. By its definition, the method is named the Harmonic Polynomial Cell (HPC) method. The characteristics of the accuracy and efficiency of the HPC method are demonstrated by studying analytical cases. Comparisons are made with some other existing boundary element based methods, e.g. the Quadratic Boundary Element Method (QBEM), the Fast Multipole Accelerated QBEM (FMA-QBEM) and a fourth order Finite Difference Method (FDM). To demonstrate the applications of the method, it is applied to some studies relevant for marine hydrodynamics. Sloshing in 3D rectangular tanks, a fully-nonlinear numerical wave tank, fully-nonlinear wave focusing on a semi-circular shoal, and the nonlinear wave diffraction of a bottom-mounted cylinder in regular waves are studied. The comparisons with the experimental results and other numerical results are all in satisfactory agreement, indicating that the present HPC method is a promising method for solving potential-flow problems. The underlying procedure of the HPC method could also be useful in fields other than marine hydrodynamics that involve solving the Laplace equation.

  16. Support vector regression and artificial neural network models for stability indicating analysis of mebeverine hydrochloride and sulpiride mixtures in pharmaceutical preparation: A comparative study

    NASA Astrophysics Data System (ADS)

    Naguib, Ibrahim A.; Darwish, Hany W.

    2012-02-01

    A comparison between support vector regression (SVR) and artificial neural network (ANN) multivariate regression methods is established, showing the underlying algorithm for each and indicating their inherent advantages and limitations. In this paper we compare SVR to ANN with and without a variable selection procedure (genetic algorithm, GA). To ground the comparison in a sensible way, the methods are used for the stability-indicating quantitative analysis of mixtures of mebeverine hydrochloride and sulpiride in binary mixtures as a case study, in the presence of their reported impurities and degradation products (summing up to 6 components), in raw materials and pharmaceutical dosage form, via handling the UV spectral data. For proper analysis, a 6-factor 5-level experimental design was established, resulting in a training set of 25 mixtures containing different ratios of the interfering species. An independent test set consisting of 5 mixtures was used to validate the prediction ability of the suggested models. The proposed methods (linear SVR (without GA) and linear GA-ANN) were successfully applied to the analysis of pharmaceutical tablets containing mebeverine hydrochloride and sulpiride mixtures. The results manifest the problem of nonlinearity and how models like SVR and ANN can handle it. The methods demonstrate the ability of the mentioned multivariate calibration models to deconvolute the highly overlapped UV spectra of the 6-component mixtures, even using cheap and easy-to-handle instruments like the UV spectrophotometer.

  17. Comparison of optomagnetic and AC susceptibility readouts in a magnetic nanoparticle agglutination assay for detection of C-reactive protein.

    PubMed

    Fock, Jeppe; Parmvi, Mattias; Strömberg, Mattias; Svedlindh, Peter; Donolato, Marco; Hansen, Mikkel Fougt

    2017-02-15

    There is an increasing need to develop biosensor methods that are highly sensitive and that can be combined with low-cost consumables. The use of magnetic nanoparticles (MNPs) is attractive because their detection is compatible with low-cost disposables and because application of a magnetic field can be used to accelerate assay kinetics. We present the first study and comparison of the performance of magnetic susceptibility measurements and a newly proposed optomagnetic method. For the comparison we use the C-reactive protein (CRP) induced agglutination of identical samples of 100 nm MNPs conjugated with CRP antibodies. Both methods detect agglutination as a shift to lower frequencies in measurements of the dynamics in response to an applied oscillating magnetic field. The magnetic susceptibility method probes the magnetic response whereas the optomagnetic technique probes the modulation of laser light transmitted through the sample. The two techniques provided highly correlated results upon agglutination when they measured the decrease of the signal from the individual MNPs (turn-off detection strategy), whereas the techniques provided different results, strongly depending on the read-out frequency, when detecting the signal due to MNP agglomerates (turn-on detection strategy). These observations are considered to be caused by differences in the volume dependence of the magnetic and optical signals from agglomerates. The highest signal from agglomerates was found in the optomagnetic signal at low frequencies. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Evaluation of dissolution profile similarity - Comparison between the f2, the multivariate statistical distance and the f2 bootstrapping methods.

    PubMed

    Paixão, Paulo; Gouveia, Luís F; Silva, Nuno; Morais, José A G

    2017-03-01

    A simulation study is presented, evaluating the performance of the f2, the model-independent multivariate statistical distance and the f2 bootstrap methods in their ability to conclude similarity between two dissolution profiles. Different dissolution profiles, based on the Noyes-Whitney equation and ranging over theoretical f2 values between 100 and 40, were simulated. Variability was introduced into the dissolution model parameters in increasing order, ranging from a situation complying with the European guidelines' requirements for the use of the f2 metric to several situations where the f2 metric could no longer be used. Results have shown that the f2 is an acceptable metric when used according to the regulatory requirements, but loses its applicability as variability increases. The multivariate statistical distance presented contradictory results in several of the simulation scenarios, which makes it an unreliable metric for dissolution profile comparisons. The bootstrap f2, although conservative in its conclusions, is a suitable alternative method. Overall, as variability increases, all of the discussed methods reveal problems that can only be solved by increasing the number of dosage form units used in the comparison, which is usually not practical or feasible. Additionally, experimental corrective measures may be undertaken to reduce the overall variability, particularly when it is shown to be mainly due to the dissolution assessment rather than intrinsic to the dosage form. Copyright © 2016. Published by Elsevier B.V.
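
A minimal sketch of the two most widely used of these methods, assuming the standard f2 formula and a percentile-bootstrap confidence interval; function names, the unit-resampling scheme, and the 90% interval are illustrative choices, not the study's exact implementation:

```python
import numpy as np

def f2_similarity(ref, test):
    """f2 similarity factor between two mean dissolution profiles.

    ref, test: percent dissolved at common time points. Identical
    profiles give f2 = 100; f2 >= 50 is conventionally read as similar.
    """
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    msd = np.mean((ref - test) ** 2)  # mean squared difference over time points
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

def f2_bootstrap_ci(ref_units, test_units, n_boot=5000, alpha=0.10, seed=0):
    """Percentile-bootstrap interval for f2, resampling dosage-form units.

    ref_units, test_units: (units x time points) matrices of individual
    dissolution curves. A common decision rule concludes similarity when
    the lower confidence bound exceeds 50.
    """
    rng = np.random.default_rng(seed)
    ref_units = np.asarray(ref_units, float)
    test_units = np.asarray(test_units, float)
    stats = []
    for _ in range(n_boot):
        r = ref_units[rng.integers(0, len(ref_units), len(ref_units))]
        t = test_units[rng.integers(0, len(test_units), len(test_units))]
        stats.append(f2_similarity(r.mean(axis=0), t.mean(axis=0)))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```

The bootstrap variant is conservative precisely because it propagates unit-to-unit variability that the plain f2 on mean profiles ignores.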

  19. [Comparison of three methods for measuring multiple morbidity according to the use of health resources in primary healthcare].

    PubMed

    Sicras-Mainar, Antoni; Velasco-Velasco, Soledad; Navarro-Artieda, Ruth; Blanca Tamayo, Milagrosa; Aguado Jodar, Alba; Ruíz Torrejón, Amador; Prados-Torres, Alexandra; Violan-Fors, Concepción

    2012-06-01

    To compare three methods of measuring multiple morbidity according to the use of health resources (cost of care) in primary healthcare (PHC). Retrospective study using computerized medical records. Thirteen PHC teams in Catalonia (Spain). Assigned patients requiring care in 2008. The variables were socio-demographic characteristics, co-morbidity and costs. The methods compared were: a) the Combined Comorbidity Index (CCI), an index developed from the scores of acute and chronic episodes; b) the Charlson Index (ChI); and c) the Adjusted Clinical Groups case-mix resource use bands (RUB). The cost model was constructed by differentiating between fixed (operational) and variable costs. Three multiple linear regression models were developed to assess the explanatory power of each measure of co-morbidity, compared using the coefficient of determination (R(2)), p< .05. The study included 227,235 patients. The mean unit cost was €654.2. The CCI explained an R(2)=50.4%, the ChI an R(2)=29.2% and the RUB an R(2)=39.7% of the variability in cost. The behaviour of the CCI was acceptable, albeit with low scores (1 to 3 points) showing inconclusive results. The CCI may be a simple method of predicting PHC costs in routine clinical practice. If confirmed, these results will allow improvements in the comparison of the case-mix. Copyright © 2011 Elsevier España, S.L. All rights reserved.

  20. A comparison of three fiber tract delineation methods and their impact on white matter analysis.

    PubMed

    Sydnor, Valerie J; Rivas-Grajales, Ana María; Lyall, Amanda E; Zhang, Fan; Bouix, Sylvain; Karmacharya, Sarina; Shenton, Martha E; Westin, Carl-Fredrik; Makris, Nikos; Wassermann, Demian; O'Donnell, Lauren J; Kubicki, Marek

    2018-05-19

    Diffusion magnetic resonance imaging (dMRI) is an important method for studying white matter connectivity in the brain in vivo in both healthy and clinical populations. Improvements in dMRI tractography algorithms, which reconstruct macroscopic three-dimensional white matter fiber pathways, have allowed for methodological advances in the study of white matter; however, insufficient attention has been paid to comparing post-tractography methods that extract white matter fiber tracts of interest from whole-brain tractography. Here we conduct a comparison of three representative and conceptually distinct approaches to fiber tract delineation: 1) a manual multiple region of interest-based approach, 2) an atlas-based approach, and 3) a groupwise fiber clustering approach, by employing methods that exemplify these approaches to delineate the arcuate fasciculus, the middle longitudinal fasciculus, and the uncinate fasciculus in 10 healthy male subjects. We enable qualitative comparisons across methods, conduct quantitative evaluations of tract volume, tract length, mean fractional anisotropy, and true positive and true negative rates, and report measures of intra-method and inter-method agreement. We discuss methodological similarities and differences between the three approaches and the major advantages and drawbacks of each, and review research and clinical contexts for which each method may be most apposite. Emphasis is given to the means by which different white matter fiber tract delineation approaches may systematically produce variable results, despite utilizing the same input tractography and reliance on similar anatomical knowledge. Copyright © 2018. Published by Elsevier Inc.

  1. Propensity-score matching in economic analyses: comparison with regression models, instrumental variables, residual inclusion, differences-in-differences, and decomposition methods.

    PubMed

    Crown, William H

    2014-02-01

    This paper examines the use of propensity score matching in economic analyses of observational data. Several excellent papers have previously reviewed practical aspects of propensity score estimation and other aspects of the propensity score literature. The purpose of this paper is to compare the conceptual foundation of propensity score models with alternative estimators of treatment effects. References are provided to empirical comparisons among methods that have appeared in the literature. These comparisons are available for a subset of the methods considered in this paper. However, in some cases, no pairwise comparisons of particular methods are yet available, and there are no examples of comparisons across all of the methods surveyed here. Irrespective of the availability of empirical comparisons, the goal of this paper is to provide some intuition about the relative merits of alternative estimators in health economic evaluations where nonlinearity, sample size, availability of pre/post data, heterogeneity, and missing variables can have important implications for choice of methodology. Also considered is the potential combination of propensity score matching with alternative methods such as differences-in-differences and decomposition methods that have not yet appeared in the empirical literature.
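
As a toy illustration of the idea the paper starts from, the sketch below fits a logistic propensity model by gradient ascent and estimates the effect of treatment on the treated by nearest-neighbor matching on the score. The simulated data, function names, and tuning constants are all invented for the example; none of this reproduces the estimators surveyed in the paper.

```python
import numpy as np

def propensity_scores(X, treated, lr=0.1, steps=2000):
    """Logistic-regression propensity scores fitted by plain gradient ascent."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (treated - p) / len(X)   # log-likelihood gradient
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def att_nn_match(y, treated, ps):
    """Effect on the treated via 1-nearest-neighbor matching (with replacement)."""
    t_idx = np.where(treated == 1)[0]
    c_idx = np.where(treated == 0)[0]
    matches = c_idx[np.argmin(np.abs(ps[t_idx, None] - ps[None, c_idx]), axis=1)]
    return float(np.mean(y[t_idx] - y[matches]))

# Simulated observational data: the true treatment effect is 2.0, and x
# confounds both treatment take-up and the outcome.
rng = np.random.default_rng(1)
x = rng.normal(size=2000)
treated = (rng.uniform(size=2000) < 1.0 / (1.0 + np.exp(-x))).astype(float)
y = 2.0 * treated + 1.5 * x + rng.normal(size=2000)

ps = propensity_scores(x[:, None], treated)
naive = y[treated == 1].mean() - y[treated == 0].mean()  # biased upward by x
att = att_nn_match(y, treated, ps)                       # close to 2.0
```

The naive difference in means absorbs the confounding through x, while the matched estimate recovers something near the true effect; this overt-selection setting is exactly where propensity methods and the alternatives compared in the paper diverge least.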

  2. Evaluation of methods for detection of fluorescence labeled subcellular objects in microscope images.

    PubMed

    Ruusuvuori, Pekka; Aijö, Tarmo; Chowdhury, Sharif; Garmendia-Torres, Cecilia; Selinummi, Jyrki; Birbaumer, Mirko; Dudley, Aimée M; Pelkmans, Lucas; Yli-Harja, Olli

    2010-05-13

    Several algorithms have been proposed for detecting fluorescently labeled subcellular objects in microscope images. Many of these algorithms have been designed for specific tasks and validated with limited image data. But despite the potential of using extensive comparisons between algorithms to provide useful information to guide method selection and thus more accurate results, relatively few studies have been performed. To better understand algorithm performance under different conditions, we have carried out a comparative study including eleven spot detection or segmentation algorithms from various application fields. We used microscope images from well plate experiments with a human osteosarcoma cell line and frames from image stacks of yeast cells in different focal planes. These experimentally derived images permit a comparison of method performance in realistic situations where the number of objects varies within image set. We also used simulated microscope images in order to compare the methods and validate them against a ground truth reference result. Our study finds major differences in the performance of different algorithms, in terms of both object counts and segmentation accuracies. These results suggest that the selection of detection algorithms for image based screens should be done carefully and take into account different conditions, such as the possibility of acquiring empty images or images with very few spots. Our inclusion of methods that have not been used before in this context broadens the set of available detection methods and compares them against the current state-of-the-art methods for subcellular particle detection.
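
The kind of validation against a simulated ground truth described above can be illustrated with a deliberately naive detector (local maxima above an intensity threshold) applied to a synthetic image; the spot model, noise level, and threshold are invented for this sketch and do not correspond to any of the eleven algorithms compared in the study.

```python
import numpy as np

def detect_spots(img, threshold):
    """Naive detector: pixels above threshold that are 8-neighborhood maxima."""
    core = img[1:-1, 1:-1]
    is_max = core > threshold
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            is_max &= core >= img[1 + dy:img.shape[0] - 1 + dy,
                                  1 + dx:img.shape[1] - 1 + dx]
    ys, xs = np.nonzero(is_max)
    return list(zip(ys + 1, xs + 1))  # offset back to full-image coordinates

# Synthetic ground truth: Gaussian spots at known centers plus sensor noise.
rng = np.random.default_rng(7)
truth = [(20, 20), (20, 70), (50, 45), (80, 25), (80, 75)]
yy, xx = np.mgrid[0:100, 0:100]
img = sum(10.0 * np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / 8.0) for y, x in truth)
img = img + rng.normal(0.0, 0.3, img.shape)

detected = detect_spots(img, threshold=3.0)
# Count true positives: a ground-truth spot is "hit" if some detection
# lies within 3 pixels of its center.
hits = sum(any((dy - y) ** 2 + (dx - x) ** 2 <= 9 for dy, dx in detected)
           for y, x in truth)
```

Rerunning this with empty images or very low signal-to-noise immediately exposes the failure modes the study warns about (spurious detections, missed spots), which is why validation across conditions matters.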

  3. Comparison of modal test results - Multipoint sine versus single-point random. [for Mariner Jupiter/Saturn spacecraft]

    NASA Technical Reports Server (NTRS)

    Leppert, E. L.; Lee, S. H.; Day, F. D.; Chapman, P. C.; Wada, B. K.

    1976-01-01

    The Mariner Jupiter/Saturn (MJS) spacecraft was subjected to the traditional multipoint sine dwell (MPSD) modal test using 111 accelerometer channels, and also to single-point random (SPR) testing using 26 accelerometer channels, and the two methods are compared according to cost, schedule, and technical criteria. A measure of comparison between the systems was devised in terms of the cumulative difference in the kinetic energy distribution of the common accelerometers. The SPR and MPSD methods show acceptable agreement with respect to frequencies and modal damping. The merit of the SPR method is that the number of excitation points is minimized and the test article can be committed to other uses while data analysis is performed. The MPSD approach allows the validity of the data to be determined as the test progresses. Costs are about the same for the two methods.

  4. A GPU-based calculation using the three-dimensional FDTD method for electromagnetic field analysis.

    PubMed

    Nagaoka, Tomoaki; Watanabe, Soichi

    2010-01-01

    Numerical simulations with numerical human models using the finite-difference time domain (FDTD) method have recently been performed frequently in a number of fields in biomedical engineering. However, the FDTD calculation requires long run times. We focus, therefore, on general purpose programming on the graphics processing unit (GPGPU). The three-dimensional FDTD method was implemented on the GPU using Compute Unified Device Architecture (CUDA). In this study, we used the NVIDIA Tesla C1060 as a GPGPU board. The performance of the GPU is evaluated in comparison with the performance of a conventional CPU and a vector supercomputer. The results indicate that three-dimensional FDTD calculations using a GPU can significantly reduce run time in comparison with a conventional CPU, even for a straightforward GPU implementation of the three-dimensional FDTD method, although the GPU/CPU speed ratio varies with the calculation domain and thread block size.
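
The paper's 3-D CUDA implementation is not reproduced here, but the structure of the FDTD update that makes it GPU-friendly can be shown in one dimension: each grid point's update depends only on nearest neighbors, so the same stencil maps one thread per cell in 3-D. The grid size, Courant number, and source in this sketch are arbitrary illustrative choices.

```python
import numpy as np

def fdtd_1d(steps=300, nx=200, source_pos=50):
    """1-D free-space FDTD (Yee leapfrog scheme) with a hard sinusoidal source.

    The E and H updates below are purely local stencils, which is why the
    3-D version parallelizes naturally across GPU threads.
    """
    ez = np.zeros(nx)  # electric field samples
    hy = np.zeros(nx)  # magnetic field samples, staggered half a cell
    for t in range(steps):
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])          # Courant number 0.5
        ez[1:] += 0.5 * (hy[1:] - hy[:-1])
        ez[source_pos] = np.sin(2 * np.pi * t / 30.0)  # hard source
    return ez

field = fdtd_1d()
```

On a GPU, the two vectorized update lines become one kernel each, with the time loop kept on the host; thread-block shape then determines how the domain is tiled, which is the dependence the abstract notes.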

  5. Comparison of GEOS-5 AGCM planetary boundary layer depths computed with various definitions

    NASA Astrophysics Data System (ADS)

    McGrath-Spangler, E. L.; Molod, A.

    2014-07-01

    Accurate models of planetary boundary layer (PBL) processes are important for forecasting weather and climate. The present study compares seven methods of calculating PBL depth in the GEOS-5 atmospheric general circulation model (AGCM) over land. These methods depend on the eddy diffusion coefficients, bulk and local Richardson numbers, and the turbulent kinetic energy. The computed PBL depths are aggregated to the Köppen-Geiger climate classes, and some limited comparisons are made using radiosonde profiles. Most methods produce similar midday PBL depths, although in the warm, moist climate classes the bulk Richardson number method gives midday results that are lower than those given by the eddy diffusion coefficient methods. Additional analysis revealed that methods sensitive to turbulence driven by radiative cooling produce greater PBL depths, this effect being most significant during the evening transition. Nocturnal PBLs based on Richardson number methods are generally shallower than eddy diffusion coefficient based estimates. The bulk Richardson number estimate is recommended as the PBL height to inform the choice of the turbulent length scale, based on the similarity to other methods during the day, and the improved nighttime behavior.
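
The recommended bulk Richardson number diagnostic can be sketched as follows, assuming the common convention of taking the PBL top as the lowest level where the bulk Richardson number crosses a critical value (0.25 here; the exact formulation and threshold in GEOS-5 may differ):

```python
import numpy as np

def pbl_depth_bulk_richardson(z, theta_v, u, v, ri_crit=0.25, g=9.81):
    """PBL depth from a sounding: lowest height where Ri_b exceeds ri_crit.

    z: heights above ground (m), theta_v: virtual potential temperature (K),
    u, v: wind components (m/s), all ordered from the surface upward.
    """
    shear2 = u ** 2 + v ** 2
    shear2 = np.where(shear2 < 1e-6, 1e-6, shear2)  # guard divide-by-zero
    ri_b = (g / theta_v[0]) * (theta_v - theta_v[0]) * (z - z[0]) / shear2
    above = np.where(ri_b > ri_crit)[0]
    if len(above) == 0:
        return z[-1]          # no crossing found within the profile
    i = above[0]
    if i == 0:
        return z[0]
    # linear interpolation between the bracketing levels
    f = (ri_crit - ri_b[i - 1]) / (ri_b[i] - ri_b[i - 1])
    return z[i - 1] + f * (z[i] - z[i - 1])
```

Because the numerator vanishes in a well-mixed (neutral) layer and grows in the capping inversion, this estimate tracks the inversion base during the day, which is one reason it behaves consistently across the climate classes compared in the study.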

  6. Estimating parameter of Rayleigh distribution by using Maximum Likelihood method and Bayes method

    NASA Astrophysics Data System (ADS)

    Ardianti, Fitri; Sutarman

    2018-01-01

    In this paper, we use maximum likelihood estimation and the Bayes method under several loss functions to estimate the parameter of the Rayleigh distribution and determine which method performs best. The prior used in the Bayes method is Jeffreys' non-informative prior. Maximum likelihood estimation and the Bayes method under the precautionary loss function, the entropy loss function, and the L1 loss function are compared. We compare these methods by their bias and MSE values, computed using the R program. The results are then displayed in tables to facilitate the comparisons.
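
The comparison workflow can be sketched in a few lines. For a Rayleigh sample the MLE has the closed form σ̂ = sqrt(Σx²/2n), and one standard Bayes estimate under a Jeffreys-type prior (∝ 1/σ²) with squared-error loss is the posterior mean of σ², S/(2(n−1)). The specific loss functions studied in the paper (precautionary, entropy, L1) are not reproduced here; the sketch only shows the bias/MSE Monte Carlo comparison, in Python rather than R.

```python
import numpy as np

def rayleigh_mle(x):
    """MLE of the Rayleigh scale: sigma_hat = sqrt(sum(x^2) / (2n))."""
    x = np.asarray(x, float)
    return np.sqrt(np.sum(x ** 2) / (2 * len(x)))

def rayleigh_bayes_sq_loss(x):
    """Bayes estimate under a Jeffreys-type prior (~1/sigma^2) and squared-error
    loss: the posterior of sigma^2 is inverse-gamma(n, S/2) with mean
    S / (2(n-1)); returned on the sigma scale for comparability."""
    x = np.asarray(x, float)
    n, S = len(x), np.sum(x ** 2)
    return np.sqrt(S / (2 * (n - 1)))

# Monte Carlo comparison of bias and MSE, mirroring the study's workflow.
rng = np.random.default_rng(0)
true_sigma, n, reps = 2.0, 50, 2000
mle_est = np.empty(reps)
bayes_est = np.empty(reps)
for r in range(reps):
    sample = rng.rayleigh(scale=true_sigma, size=n)
    mle_est[r] = rayleigh_mle(sample)
    bayes_est[r] = rayleigh_bayes_sq_loss(sample)
mse_mle = np.mean((mle_est - true_sigma) ** 2)
mse_bayes = np.mean((bayes_est - true_sigma) ** 2)
```

Swapping in other loss functions only changes which posterior functional is reported; the bias/MSE tabulation around it stays the same.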

  7. Examination of a Rotorcraft Noise Prediction Method and Comparison to Flight Test Data

    NASA Technical Reports Server (NTRS)

    Boyd, D. Douglas, Jr.; Greenwood, Eric; Watts, Michael E.; Lopes, Leonard V.

    2017-01-01

    With a view that rotorcraft noise should be included in the preliminary design process, a relatively fast noise prediction method is examined in this paper. A comprehensive rotorcraft analysis is combined with a noise prediction method to compute several noise metrics of interest. These predictions are compared to flight test data. Results show that inclusion of only the main rotor noise will produce results that severely underpredict integrated metrics of interest. Inclusion of the tail rotor frequency content is essential for accurately predicting these integrated noise metrics.

  8. Aircraft Dynamic Modeling in Turbulence

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Cunningham, Kevin

    2012-01-01

    A method for accurately identifying aircraft dynamic models in turbulence was developed and demonstrated. The method uses orthogonal optimized multisine excitation inputs and an analytic method for enhancing signal-to-noise ratio for dynamic modeling in turbulence. A turbulence metric was developed to accurately characterize the turbulence level using flight measurements. The modeling technique was demonstrated in simulation, then applied to a subscale twin-engine jet transport aircraft in flight. Comparisons of modeling results obtained in turbulent air to results obtained in smooth air were used to demonstrate the effectiveness of the approach.
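
The orthogonal multisine idea can be sketched as follows: inputs built on disjoint harmonics of a common base frequency are exactly orthogonal over the record length, so several control surfaces can be excited simultaneously without their effects mixing. The Schroeder-type phase schedule below is one common heuristic for limiting the peak factor, not necessarily the optimization used in the paper, and the frequencies are illustrative.

```python
import numpy as np

T, fs = 10.0, 100.0                 # record length (s) and sample rate (Hz)
t = np.arange(0.0, T, 1.0 / fs)

def multisine(harmonics, t, T):
    """Sum of unit cosines at the given harmonics of the base frequency 1/T.

    Schroeder-type phases spread the components in time so the summed
    signal's peak stays well below the sum of the amplitudes.
    """
    n = len(harmonics)
    return sum(np.cos(2.0 * np.pi * k * t / T + np.pi * k * (k - 1) / n)
               for k in harmonics)

u_a = multisine([1, 3, 5, 7], t, T)       # e.g., an elevator input
u_b = multisine([2, 4, 6, 8], t, T)       # e.g., an aileron input
cross = float(np.dot(u_a, u_b)) / len(t)  # ~0: disjoint harmonics are orthogonal
```

Because the harmonic sets do not overlap, the cross-correlation over the full record is zero to machine precision, which is what lets the parameter estimation separate each input's contribution.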

  9. Comparison of solvent/derivatization agent systems for determination of extractable toluene diisocyanate from flexible polyurethane foam.

    PubMed

    Vangronsveld, Erik; Berckmans, Steven; Spence, Mark

    2013-06-01

    Flexible polyurethane foam (FPF) is produced from the reaction of toluene diisocyanate (TDI) and polyols. Limited and conflicting results exist in the literature concerning the presence of unreacted TDI remaining in FPF as determined by various solvent extraction and analysis techniques. This study reports investigations into the effect of several solvent/derivatization agent combinations on extractable TDI results and suggests a preferred method. The suggested preferred method employs a syringe-based multiple extraction of foam samples with a toluene solution of 1-(2-methoxyphenyl)-piperazine. Extracts are analyzed by liquid chromatography using an ion trap mass spectrometry detection technique. Detection limits of the method are ~10ng TDI g(-1) foam (10 ppb, w/w) for each TDI isomer (i.e. 2,4-TDI and 2,6-TDI). The method was evaluated by a three-laboratory interlaboratory comparison using two representative foam samples. The total extractable TDI results found by the three labs for the two foams were in good agreement (relative standard deviation of the mean of 30-40%). The method has utility as a basis for comparing FPFs, but the interpretation of extractable TDI results using any solvent as the true value for 'free' or 'unreacted' TDI in the foam is problematic, as demonstrated by the difference in the extracted TDI results from the different extraction systems studied. Further, a consideration of polyurethane foam chemistry raises the possibility that extractable TDI may result from decomposition of parts of the foam structure (e.g. dimers, biurets, and allophanates) by the extraction system.

  10. Matched Comparison Group Design Standards in Systematic Reviews of Early Childhood Interventions.

    PubMed

    Thomas, Jaime; Avellar, Sarah A; Deke, John; Gleason, Philip

    2017-06-01

    Systematic reviews assess the quality of research on program effectiveness to help decision makers faced with many intervention options. Study quality standards specify criteria that studies must meet, including accounting for baseline differences between intervention and comparison groups. We explore two issues related to systematic review standards: covariate choice and choice of estimation method. To help systematic reviews develop/refine quality standards and support researchers in using nonexperimental designs to estimate program effects, we address two questions: (1) How well do variables that systematic reviews typically require studies to account for explain variation in key child and family outcomes? (2) What methods should studies use to account for preexisting differences between intervention and comparison groups? We examined correlations between baseline characteristics and key outcomes using Early Childhood Longitudinal Study-Birth Cohort data to address Question 1. For Question 2, we used simulations to compare two methods-matching and regression adjustment-to account for preexisting differences between intervention and comparison groups. A broad range of potential baseline variables explained relatively little of the variation in child and family outcomes. This suggests the potential for bias even after accounting for these variables, highlighting the need for systematic reviews to provide appropriate cautions about interpreting the results of moderately rated, nonexperimental studies. Our simulations showed that regression adjustment can yield unbiased estimates if all relevant covariates are used, even when the model is misspecified, and preexisting differences between the intervention and the comparison groups exist.

  11. Ultrafast Comparison of Personal Genomes via Precomputed Genome Fingerprints.

    PubMed

    Glusman, Gustavo; Mauldin, Denise E; Hood, Leroy E; Robinson, Max

    2017-01-01

    We present an ultrafast method for comparing personal genomes. We transform the standard genome representation (lists of variants relative to a reference) into "genome fingerprints" via locality sensitive hashing. The resulting genome fingerprints can be meaningfully compared even when the input data were obtained using different sequencing technologies, processed using different pipelines, represented in different data formats and relative to different reference versions. Furthermore, genome fingerprints are robust to up to 30% missing data. Because of their reduced size, computation on the genome fingerprints is fast and requires little memory. For example, we could compute all-against-all pairwise comparisons among the 2504 genomes in the 1000 Genomes data set in 67 s at high quality (21 μs per comparison, on a single processor), and achieved a lower quality approximation in just 11 s. Efficient computation enables scaling up a variety of important genome analyses, including quantifying relatedness, recognizing duplicative sequenced genomes in a set, population reconstruction, and many others. The original genome representation cannot be reconstructed from its fingerprint, effectively decoupling genome comparison from genome interpretation; the method thus has significant implications for privacy-preserving genome analytics.
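
The paper's exact fingerprinting scheme is not reproduced here, but a MinHash-style sketch (a related locality-sensitive-hashing technique) illustrates the properties claimed: fixed-size summaries, comparison cost independent of genome size, agreement that estimates the overlap of two variant lists, and no way to recover the variants from the fingerprint. The variant-string format and parameters are invented for the example.

```python
import hashlib

def fingerprint(variants, k=128):
    """MinHash-style fingerprint: for each of k salted hash functions, keep
    the minimum hash value over all variant strings. The result has fixed
    size regardless of how many variants the genome carries."""
    sig = []
    for i in range(k):
        salt = i.to_bytes(2, "big")
        sig.append(min(
            int.from_bytes(hashlib.blake2b(salt + v.encode(),
                                           digest_size=8).digest(), "big")
            for v in variants))
    return sig

def similarity(sig_a, sig_b):
    """Fraction of matching minima; estimates the Jaccard index of the
    underlying variant sets."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

# Two synthetic "genomes": b shares 800 of a's 1000 variants and adds 200,
# so the true Jaccard similarity is 800 / 1200.
a = [f"chr1:{pos}:A>G" for pos in range(0, 3000, 3)]
b = a[:800] + [f"chr2:{pos}:C>T" for pos in range(0, 600, 3)]
sim = similarity(fingerprint(a), fingerprint(b))
```

Comparing two fingerprints is k integer comparisons however large the genomes are, which is the property that makes all-against-all comparisons over thousands of genomes tractable.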

  12. Comparison of Theoretical Stresses and Deflections of Multicell Wings with Experimental Results Obtained from Plastic Models

    NASA Technical Reports Server (NTRS)

    Zender, George W

    1956-01-01

    The experimental deflections and stresses of six plastic multicell-wing models of unswept, delta, and swept plan form are presented and compared with previously published theoretical results obtained by the electrical analog method. The comparisons indicate that the theory is reliable except for the evaluation of stresses in the vicinity of the leading edge of delta wings and the leading and trailing edges of swept wings. The stresses in these regions are questionable, apparently because of simplifications employed in idealizing the actual structure for theoretical purposes and because of local effects of concentrated loads.

  13. Web-Based Training Methods for Behavioral Health Providers: A Systematic Review.

    PubMed

    Jackson, Carrie B; Quetsch, Lauren B; Brabson, Laurel A; Herschell, Amy D

    2018-07-01

    There has been an increase in the use of web-based training methods to train behavioral health providers in evidence-based practices. This systematic review focuses solely on the efficacy of web-based training methods for training behavioral health providers. A literature search yielded 45 articles meeting inclusion criteria. Results indicated that the serial instruction training method was the most commonly studied web-based training method. While the current review has several notable limitations, findings indicate that participating in a web-based training may result in greater post-training knowledge and skill, in comparison to baseline scores. Implications and recommendations for future research on web-based training methods are discussed.

  14. Transverse vibrations of non-uniform beams. [combined finite element and Rayleigh-Ritz methods]

    NASA Technical Reports Server (NTRS)

    Klein, L.

    1974-01-01

    The free vibrations of elastic beams with nonuniform characteristics are investigated theoretically by a new method. The new method is seen to combine the advantages of a finite element approach and of a Rayleigh-Ritz analysis. Comparison with the known analytical results for uniform beams shows good convergence of the method for natural frequencies and modes. For internal shear forces and bending moments, the rate of convergence is less rapid. Results from experiments conducted with a cantilevered helicopter blade with strong nonuniformities and also from alternative theoretical methods, indicate that the theory adequately predicts natural frequencies and mode shapes. General guidelines for efficient use of the method are presented.
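
The combined finite-element/Rayleigh-Ritz method of the paper is not reproduced here, but the Rayleigh-Ritz idea it builds on can be sketched for the uniform cantilever, where the exact fundamental eigenvalue is known (ω² = (1.8751)⁴ ≈ 12.362 in nondimensional form, EI = ρA = L = 1). Polynomial trial functions x^p with p ≥ 2 satisfy the clamped-end conditions w(0) = w'(0) = 0, and the choice of three terms is illustrative.

```python
import numpy as np

# Rayleigh-Ritz for a uniform cantilever, nondimensionalized (EI = rhoA = L = 1).
# Stiffness and mass integrals over [0, 1] are evaluated analytically for
# trial functions w_i(x) = x^p_i.
powers = [2, 3, 4]
n = len(powers)
K = np.empty((n, n))
M = np.empty((n, n))
for i, p in enumerate(powers):
    for j, q in enumerate(powers):
        K[i, j] = p * (p - 1) * q * (q - 1) / (p + q - 3)  # int w_i'' w_j'' dx
        M[i, j] = 1.0 / (p + q + 1)                        # int w_i  w_j  dx

# Generalized eigenproblem K a = omega^2 M a; the smallest eigenvalue is a
# Rayleigh-Ritz upper bound on the exact 12.362.
omega2 = float(np.min(np.linalg.eigvals(np.linalg.solve(M, K)).real))
```

Adding trial functions tightens the bound monotonically for frequencies, while derived quantities like shear forces converge more slowly, consistent with the convergence behavior the abstract reports.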

  15. Epistemic uncertainty propagation in energy flows between structural vibrating systems

    NASA Astrophysics Data System (ADS)

    Xu, Menghui; Du, Xiaoping; Qiu, Zhiping; Wang, Chong

    2016-03-01

    A dimension-wise method for predicting fuzzy energy flows between structural vibrating systems coupled by joints with epistemic uncertainties is established. Based on a Legendre polynomial approximation at α=0, both the minimum and maximum point vectors of the energy flow of interest are calculated dimension by dimension within the space spanned by the interval parameters obtained from the fuzzy parameters at α=0, and the resulting interval bounds are used to assemble the fuzzy energy flows of interest. In addition to the proposed method, the vertex method and two current methods are also applied. Results from the different methods are compared on two numerical examples, and the accuracy of all methods is verified against Monte Carlo simulation.

  16. Evaluation of a simplified gross thrust calculation method for a J85-21 afterburning turbojet engine in an altitude facility

    NASA Technical Reports Server (NTRS)

    Baer-Riedhart, J. L.

    1982-01-01

    A simplified gross thrust calculation method was evaluated for its ability to predict the gross thrust of a modified J85-21 engine. The method used tailpipe pressure data and ambient pressure data to predict the gross thrust. The method's algorithm is based on a one-dimensional analysis of the flow in the afterburner and nozzle. The test results showed that the method was accurate over the engine operating envelope, using the thrust measured in the altitude facility for comparison. A summary of these results, the simplified gross thrust method and its requirements, and the test techniques used are discussed in this paper.

  17. A simple extension to the CMASA method for the prediction of catalytic residues in the presence of single point mutations.

    PubMed

    Flores, David I; Sotelo-Mundo, Rogerio R; Brizuela, Carlos A

    2014-01-01

    The automatic identification of catalytic residues still remains an important challenge in structural bioinformatics. Sequence-based methods are good alternatives when the query shares a high percentage of identity with a well-annotated enzyme. However, when the homology is not apparent, which occurs with many structures from the structural genome initiative, structural information should be exploited. A local structural comparison is preferred to a global structural comparison when predicting functional residues. CMASA is a recently proposed method for predicting catalytic residues based on a local structure comparison. The method achieves high accuracy and a high value for the Matthews correlation coefficient. However, point substitutions or a lack of relevant data strongly affect the performance of the method. In the present study, we propose a simple extension to the CMASA method to overcome this difficulty. Extensive computational experiments are presented, both on proof-of-concept instances and on a few real cases. The results show that the extension performs well when the catalytic site contains mutated residues or when some residues are missing. The proposed modification could correctly predict the catalytic residues of a mutant thymidylate synthase, 1EVF. It also successfully predicted the catalytic residues for 3HRC despite the lack of information for a relevant side-chain atom in the PDB file.

  18. Improving threading algorithms for remote homology modeling by combining fragment and template comparisons

    PubMed Central

    Zhou, Hongyi; Skolnick, Jeffrey

    2010-01-01

    In this work, we develop a method called FTCOM for assessing the global quality of protein structural models for targets of medium and hard difficulty (remote homology) produced by structure prediction approaches such as threading or ab initio structure prediction. FTCOM requires the Cα coordinates of full-length models and assesses model quality based on fragment comparison and a score derived from comparison of the model to top threading templates. On a set of 361 medium/hard targets, FTCOM was applied to the results of the SP3, SPARKS, PROSPECTOR_3, and PRO-SP3-TASSER threading algorithms and assessed for its ability to improve on them. The average TM-score of the first selected model improves by 5%-10% with the new method over models obtained by the original selection procedures of the respective threading methods. Moreover, the number of foldable targets (TM-score ≥0.4) increases by amounts ranging from 7.6% for SP3 to 54% for SPARKS. Thus, FTCOM is a promising approach to template selection. PMID:20455261

  19. A new method for the determination of vaporization enthalpies of ionic liquids at low temperatures.

    PubMed

    Verevkin, Sergey P; Zaitsau, Dzmitry H; Emelyanenko, Vladimir N; Heintz, Andreas

    2011-11-10

    A new method for the determination of vaporization enthalpies of extremely low volatile ILs has been developed using a newly constructed quartz crystal microbalance (QCM) vacuum setup. Because of the very high sensitivity of the QCM it has been possible to reduce the average temperature of the vaporization studies by approximately 100 K in comparison to other conventional techniques. The physical basis of the evaluation procedure has been developed and test measurements have been performed with the common ionic liquid 1-ethyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide [C(2)mim][NTf(2)] extending the range of measuring vaporization enthalpies down to 363 K. The results obtained for [C(2)mim][NTf(2)] have been tested for thermodynamic consistency by comparison with data already available at higher temperatures. Comparison of the temperature-dependent vaporization enthalpy data taken from the literature show only acceptable agreement with the heat capacity difference of -40 J K(-1) mol(-1). The method developed in this work opens also a new way to obtain reliable values of vaporization enthalpies of thermally unstable ionic liquids.

  20. International comparison of observation-specific spatial buffers: maximizing the ability to estimate physical activity.

    PubMed

    Frank, Lawrence D; Fox, Eric H; Ulmer, Jared M; Chapman, James E; Kershaw, Suzanne E; Sallis, James F; Conway, Terry L; Cerin, Ester; Cain, Kelli L; Adams, Marc A; Smith, Graham R; Hinckson, Erica; Mavoa, Suzanne; Christiansen, Lars B; Hino, Adriano Akira F; Lopes, Adalberto A S; Schipperijn, Jasper

    2017-01-23

    Advancements in geographic information systems over the past two decades have increased the specificity with which an individual's neighborhood environment may be spatially defined for physical activity and health research. This study investigated how different types of street network buffering methods compared in measuring a set of commonly used built environment measures (BEMs) and tested their performance on associations with physical activity outcomes. An internationally developed set of objective BEMs using three different spatial buffering techniques was used to evaluate the relative differences in resulting explanatory power on self-reported physical activity outcomes. BEMs were developed in five countries using 'sausage,' 'detailed-trimmed,' and 'detailed' network buffers at a distance of 1 km around participant household addresses (n = 5883). BEM values were significantly different (p < 0.05) for 96% of sausage versus detailed-trimmed buffer comparisons and 89% of sausage versus detailed network buffer comparisons. Results showed that BEM coefficients in physical activity models did not differ significantly across buffering methods, and in most cases BEM associations with physical activity outcomes had the same level of statistical significance across buffer types. However, BEM coefficients differed in significance for 9% of the sausage versus detailed models, which may warrant further investigation. Results of this study inform the selection of spatial buffering methods to estimate physical activity outcomes using an internationally consistent set of BEMs. Using three different network-based buffering methods, the findings indicate significant variation among BEM values; however, associations with physical activity outcomes were similar across each buffering technique. The study advances knowledge by presenting consistently assessed relationships between three different network buffer types and utilitarian travel, sedentary behavior, and leisure-oriented physical activity outcomes.
