Sample records for comparative method results

  1. Analysis of random structure-acoustic interaction problems using coupled boundary element and finite element methods

    NASA Technical Reports Server (NTRS)

    Mei, Chuh; Pates, Carl S., III

    1994-01-01

    A coupled boundary element (BEM)-finite element (FEM) approach is presented to accurately model structure-acoustic interaction systems. The boundary element method is first applied to interior two- and three-dimensional acoustic domains with complex geometry configurations. Boundary element results are very accurate when compared with the limited exact solutions available. Structure-acoustic interaction problems are then analyzed with the coupled FEM-BEM method, where the finite element method models the structure and the boundary element method models the interior acoustic domain. The coupled analysis is compared with exact and experimental results for a simple model. Composite panels are analyzed and compared with isotropic results. The coupled method is then extended to random excitation. Random excitation results are compared with uncoupled results for isotropic and composite panels.

  2. Comparative study between EDXRF and ASTM E572 methods using two-way ANOVA

    NASA Astrophysics Data System (ADS)

    Krummenauer, A.; Veit, H. M.; Zoppas-Ferreira, J.

    2018-03-01

    Comparison with a reference method is one of the requirements for the validation of non-standard methods. This comparison was made using a designed experiment analyzed with two-way ANOVA. In the ANOVA, the results obtained using the EDXRF method under validation were compared with the results obtained using the ASTM E572-13 standard test method. Fisher's tests (F-tests) were used for the comparative study of the elements molybdenum, niobium, copper, nickel, manganese, chromium and vanadium. For every element, the F-test indicated that the null hypothesis (H0) was not rejected; consequently, there is no significant difference between the compared methods. Therefore, according to this study, the EDXRF method satisfies this method-comparison requirement.
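
    A minimal Python sketch of the two-way ANOVA comparison described above (factor 1 = method, factor 2 = element); the concentration values and the element subset are hypothetical, not the study's data.

    ```python
    # Two-way ANOVA method comparison (hypothetical data): H0 = no method effect.
    import pandas as pd
    from statsmodels.formula.api import ols
    from statsmodels.stats.anova import anova_lm

    df = pd.DataFrame({
        "method":  ["EDXRF"] * 4 + ["ASTM_E572"] * 4,
        "element": ["Mo", "Nb", "Cu", "Ni"] * 2,
        "wt_pct":  [0.51, 0.032, 0.12, 0.85,   # EDXRF results (illustrative)
                    0.50, 0.030, 0.13, 0.84],  # ASTM E572 results (illustrative)
    })

    model = ols("wt_pct ~ C(method) + C(element)", data=df).fit()
    print(anova_lm(model))  # if p for C(method) > alpha, H0 is not rejected
    ```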

  3. Sport fishing: a comparison of three indirect methods for estimating benefits.

    Treesearch

    Darrell L. Hueth; Elizabeth J. Strong; Roger D. Fight

    1988-01-01

    Three market-based methods for estimating the value of sport fishing were compared using a common database. The three approaches were the travel-cost method, the hedonic travel-cost method, and the household-production method. A theoretical comparison showed that the resulting values were not fully comparable in several ways. The comparison of empirical...

  4. Comparability of river suspended-sediment sampling and laboratory analysis methods

    USGS Publications Warehouse

    Groten, Joel T.; Johnson, Gregory D.

    2018-03-06

    Accurate measurements of suspended sediment, a leading water-quality impairment in many Minnesota rivers, are important for managing and protecting water resources; however, water-quality standards for suspended sediment in Minnesota are based on grab field sampling and total suspended solids (TSS) laboratory analysis methods that have underrepresented concentrations of suspended sediment in rivers compared to U.S. Geological Survey equal-width-increment or equal-discharge-increment (EWDI) field sampling and suspended sediment concentration (SSC) laboratory analysis methods. Because of this underrepresentation, the U.S. Geological Survey, in collaboration with the Minnesota Pollution Control Agency, collected concurrent grab and EWDI samples at eight sites to compare results obtained using different combinations of field sampling and laboratory analysis methods. Study results determined that grab field sampling and TSS laboratory analysis results were biased substantially low compared to EWDI sampling and SSC laboratory analysis results, respectively. Differences in both field sampling and laboratory analysis methods caused grab and TSS methods to be biased substantially low; the difference attributable to laboratory analysis methods was slightly greater than that attributable to field sampling methods. Sand-sized particles had a strong effect on the comparability of the field sampling and laboratory analysis methods. These results indicated that grab field sampling and TSS laboratory analysis methods fail to capture most of the sand being transported by the stream. The results also indicate a smaller difference between samples collected by grab field sampling and analyzed for TSS and the concentration of fines in SSC. Even though differences are present, the strong correlations between SSC and TSS concentrations provide the opportunity to develop site-specific relations to address transport processes not captured by grab field sampling and TSS laboratory analysis methods.
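
    A short sketch of the site-specific SSC-TSS relation the authors point to, assuming a simple linear regression on paired concurrent samples; the concentrations are illustrative, not USGS data.

    ```python
    # Fit a site-specific SSC ~ TSS relation from paired samples (mg/L).
    import numpy as np
    from scipy import stats

    tss = np.array([12.0, 25.0, 40.0, 80.0, 150.0])   # grab/TSS (hypothetical)
    ssc = np.array([20.0, 44.0, 66.0, 140.0, 260.0])  # EWDI/SSC (hypothetical)

    fit = stats.linregress(tss, ssc)
    print(f"SSC ~ {fit.slope:.2f} * TSS + {fit.intercept:.1f} (r = {fit.rvalue:.3f})")
    # A slope well above 1 reflects the low bias of grab/TSS methods noted above.
    ```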

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenkins, T.F.; Thorne, P.G.; Myers, K.F.

    Salting-out solvent extraction (SOE) was compared with cartridge and membrane solid-phase extraction (SPE) for preconcentration of nitroaromatics, nitramines, and aminonitroaromatics prior to determination by reversed-phase high-performance liquid chromatography. The solid phases used were manufacturer-cleaned materials: Porapak RDX for the cartridge method and Empore SDB-RPS for the membrane method. Thirty-three groundwater samples from the Naval Surface Warfare Center, Crane, Indiana, were analyzed using the direct analysis protocol specified in SW846 Method 8330, and the results were compared with analyses conducted after preconcentration using SOE with acetonitrile, cartridge-based SPE, and membrane-based SPE. For high-concentration samples, analytical results from the three preconcentration techniques were compared with results from the direct analysis protocol; good recovery of all target analytes was achieved by all three preconcentration methods. For low-concentration samples, results from the two SPE methods were correlated with results from the SOE method; very similar data were obtained by the SOE and SPE methods, even at concentrations well below 1 microgram/L.

  6. Evaluation of a rapid method for the detection of streptococcal group A antigen directly from throat swabs.

    PubMed Central

    Venezia, R A; Ryan, A; Alward, S; Kostun, W A

    1985-01-01

    Throat swabs from 196 pediatric patients were processed by a direct extraction-latex agglutination method (Group A Strep Direct Antigen Identification Test [DAI]) that detects group A streptococci in the specimen. The method requires a 45-min enzymatic extraction period at 37 degrees C and a 4-min reaction period with antibody-linked latex particles. The results were compared with those of the culture and fluorescent antibody methods and with the clinical presentation of the patient for pharyngitis. Ninety-three percent of the specimens gave concordant results across all tests, and 28% were culture positive for group A streptococci. Compared with the culture method, the DAI had a sensitivity and a specificity of 83% and 99%, respectively. The positive predictive values were 98% versus the culture method and 93% versus the fluorescent antibody method, whereas the negative predictive values were 94% versus both other methods. Of the 14 discrepant results, when both the clinical presentation of acute pharyngitis and the test results were compared, the culture method provided the best correlation. An additional 64 specimens were processed by the DAI and another direct extraction-latex agglutination method (Culturette Ten-Minute Group A Strep ID Test), and the results were compared with those of the culture method. This group had a 40.6% culture isolation rate for group A streptococci. The sensitivity and specificity of the DAI and Strep ID methods versus the culture method were 81% and 100%, and 77% and 97%, respectively. These results indicate that the DAI is accurate for diagnosing group A streptococcal pharyngitis directly from throat swabs. However, negative results in the presence of a symptomatic patient must be confirmed by standard culture techniques. PMID:3884656
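
    The diagnostic measures quoted above follow directly from a 2x2 contingency table; a small sketch of the computation (the counts are hypothetical, not the study's).

    ```python
    # Sensitivity, specificity, PPV and NPV from a 2x2 table (test vs. culture).
    def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    print(diagnostic_metrics(tp=45, fp=1, fn=9, tn=141))
    ```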

  7. System and method of designing models in a feedback loop

    DOEpatents

    Gosink, Luke C.; Pulsipher, Trenton C.; Sego, Landon H.

    2017-02-14

    A method and system for designing models is disclosed. The method includes selecting a plurality of models for modeling a common event of interest. The method further includes aggregating the results of the models and analyzing each model compared to the aggregate result to obtain comparative information. The method also includes providing the information back to the plurality of models to design more accurate models through a feedback loop.
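
    A toy sketch of such a feedback loop, assuming a simple inverse-error reweighting rule; the models, predictions and weighting rule are invented for illustration and are not taken from the patent.

    ```python
    # Aggregate model outputs, compare each model to the aggregate, feed back.
    import numpy as np

    preds = np.array([[1.0, 1.2, 0.9],      # model A's predictions of the event
                      [1.5, 1.4, 1.6],      # model B
                      [1.1, 1.0, 1.0]])     # model C
    weights = np.ones(3) / 3

    for _ in range(10):
        aggregate = weights @ preds                   # weighted ensemble result
        err = np.abs(preds - aggregate).mean(axis=1)  # comparative information
        weights = 1.0 / (err + 1e-9)
        weights /= weights.sum()                      # trust low-error models more

    print(np.round(weights, 3))
    ```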

  8. An individual and dynamic Body Segment Inertial Parameter validation method using ground reaction forces.

    PubMed

    Hansen, Clint; Venture, Gentiane; Rezzoug, Nasser; Gorce, Philippe; Isableu, Brice

    2014-05-07

    Over the last decades a variety of research has been conducted with the goal of improving Body Segment Inertial Parameter (BSIP) estimates, but to our knowledge a real validation has never been completely successful because no ground truth is available. The aim of this paper is to propose a validation method for a BSIP identification method (IM) and to confirm the results by comparing contact forces recalculated using inverse dynamics with those obtained from a force plate. Furthermore, the results are compared with the estimation method recently proposed by Dumas et al. (2007). Additionally, the results are cross-validated with a high-velocity overarm throwing movement. Across all conditions, higher correlations, smaller metrics and smaller RMSE are found for the proposed BSIP estimation (IM), which shows its advantage over recently proposed methods such as that of Dumas et al. (2007). The purpose of the paper is to validate an already proposed method and to show that this method can be of significant advantage compared to conventional methods.
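
    A minimal sketch of the comparison metrics typically used in such validations, RMSE and correlation between recalculated and force-plate ground reaction forces; the traces are synthetic placeholders, not the study's data.

    ```python
    # Compare an inverse-dynamics force estimate against force-plate data.
    import numpy as np

    t = np.linspace(0.0, 1.0, 500)
    f_plate = 700.0 + 50.0 * np.sin(2 * np.pi * 2 * t)      # measured GRF (N)
    f_model = f_plate + np.random.normal(0.0, 5.0, t.size)  # recalculated GRF

    rmse = np.sqrt(np.mean((f_model - f_plate) ** 2))
    r = np.corrcoef(f_model, f_plate)[0, 1]
    print(f"RMSE = {rmse:.1f} N, r = {r:.4f}")
    ```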

  9. Exploration of Analysis Methods for Diagnostic Imaging Tests: Problems with ROC AUC and Confidence Scores in CT Colonography

    PubMed Central

    Mallett, Susan; Halligan, Steve; Collins, Gary S.; Altman, Doug G.

    2014-01-01

    Background: Different methods of evaluating diagnostic performance when comparing diagnostic tests may lead to different results. We compared two such approaches, sensitivity and specificity versus area under the Receiver Operating Characteristic curve (ROC AUC), for the evaluation of CT colonography for the detection of polyps, either with or without computer assisted detection. Methods: In a multireader multicase study of 10 readers and 107 cases we compared sensitivity and specificity, using radiological reporting of the presence or absence of polyps, to ROC AUC calculated from confidence scores concerning the presence of polyps. Both methods were assessed against a reference standard. Here we focus on five readers, selected to illustrate issues in design and analysis. We compared diagnostic measures within readers, showing that differences in results are due to the statistical methods. Results: Reader performance varied widely depending on whether sensitivity and specificity or ROC AUC was used. There were problems using confidence scores: in assigning scores to all cases; in the use of zero scores when no polyps were identified; in the bimodal non-normal distribution of scores; in fitting ROC curves, due to extrapolation beyond the study data; and in the undue influence of a few false positive results. Variation due to the use of different ROC methods exceeded differences between test results for ROC AUC. Conclusions: The confidence scores recorded in our study violated many assumptions of ROC AUC methods, rendering these methods inappropriate. The problems we identified will apply to other detection studies using confidence scores. We found sensitivity and specificity to be a more reliable and clinically appropriate method for comparing diagnostic tests. PMID:25353643
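
    A small sketch contrasting the two analyses on the same hypothetical reader data: per-case confidence scores (including the zero scores discussed above) feed ROC AUC, while binary reports feed sensitivity and specificity. The scores, labels and report threshold are invented.

    ```python
    # ROC AUC from confidence scores vs. sensitivity/specificity from reports.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    truth = np.array([1, 1, 1, 0, 0, 0, 1, 0, 0, 1])        # polyp present?
    scores = np.array([90, 0, 75, 0, 0, 40, 80, 0, 10, 0])  # reader confidence
    reports = scores >= 50                                   # binary report (assumed rule)

    auc = roc_auc_score(truth, scores)
    sens = (reports & (truth == 1)).sum() / (truth == 1).sum()
    spec = (~reports & (truth == 0)).sum() / (truth == 0).sum()
    print(f"AUC = {auc:.2f}, sensitivity = {sens:.2f}, specificity = {spec:.2f}")
    ```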

  10. Quantifying the quality of medical x-ray images: An evaluation based on normal anatomy for lumbar spine and chest radiography

    NASA Astrophysics Data System (ADS)

    Tingberg, Anders Martin

    Optimisation in diagnostic radiology requires accurate methods for determination of patient absorbed dose and clinical image quality. Simple methods for evaluation of clinical image quality are at present scarce, and this project aims at developing such methods. Two methods are used and further developed: fulfillment of image criteria (IC) and visual grading analysis (VGA). Clinical image quality descriptors are defined based on these two methods: the image criteria score (ICS) and the visual grading analysis score (VGAS), respectively. For both methods the basis is the image criteria of the "European Guidelines on Quality Criteria for Diagnostic Radiographic Images". Both methods have proved to be useful for evaluation of clinical image quality. The two methods complement each other: IC is an absolute method, which means that the quality of images of different patients and produced with different radiographic techniques can be compared with each other. The separating power of IC is, however, weaker than that of VGA. VGA is the best method for comparing images produced with different radiographic techniques and has strong separating power, but the results are relative, since the quality of an image is compared to the quality of a reference image. The usefulness of the two methods has been verified by comparing the results from both of them with results from a generally accepted method for evaluation of clinical image quality, receiver operating characteristics (ROC). The results of the comparison between the two methods based on visibility of anatomical structures and the method based on detection of pathological structures (free-response forced error) indicate that the former two methods can be used for evaluation of clinical image quality as efficiently as the method based on ROC. More studies are, however, needed for us to be able to draw a general conclusion, including studies of other organs, using other radiographic techniques, etc. The results of the experimental evaluation of clinical image quality are compared with physical quantities calculated with a theoretical model based on a voxel phantom, and correlations are found. The results demonstrate that the computer model can be a useful tool in planning further experimental studies.

  11. [A comparative evaluation of the methods for determining nitrogen dioxide in an industrial environment].

    PubMed

    Panev, T

    1991-01-01

    The present work aims to make a comparative evaluation of different types of detector tubes (analysis, long-term and passive) for the determination of NO2, and to compare the results with those obtained by the spectrophotometric method using the Saltzman reagent. Studies were performed in the hall of a garage for the repair of diesel buses during one working shift. The results indicate that the analysis tubes for NO2 agree well with the spectrophotometric method. The average-shift concentrations of NO2 measured by the long-term and passive tubes are compared with the average values obtained with the analysis tubes and with the analytical method.

  12. Comparing Results from Constant Comparative and Computer Software Methods: A Reflection about Qualitative Data Analysis

    ERIC Educational Resources Information Center

    Putten, Jim Vander; Nolen, Amanda L.

    2010-01-01

    This study compared qualitative research results obtained by manual constant comparative analysis with results obtained by computer software analysis of the same data. An investigation of issues of trustworthiness and accuracy ensued. Results indicated that the inductive constant comparative data analysis generated 51 codes and two coding levels…

  13. Confidence-based ensemble for GBM brain tumor segmentation

    NASA Astrophysics Data System (ADS)

    Huo, Jing; van Rikxoort, Eva M.; Okada, Kazunori; Kim, Hyun J.; Pope, Whitney; Goldin, Jonathan; Brown, Matthew

    2011-03-01

    It is a challenging task to automatically segment glioblastoma multiforme (GBM) brain tumors on T1w post-contrast isotropic MR images. A semi-automated system using fuzzy connectedness has recently been developed for computing the tumor volume, reducing the cost of manual annotation. In this study, we propose an ensemble method that combines multiple segmentation results into a final one. The method is evaluated on a dataset of 20 cases from a multi-center pharmaceutical drug trial and compared to the fuzzy connectedness method. Three individual methods were used in the framework: fuzzy connectedness, GrowCut, and voxel classification. The combination method is a confidence map averaging (CMA) method. The CMA method shows an improved ROC curve compared to the fuzzy connectedness method (p < 0.001), and the CMA ensemble result is more robust than the three individual methods.
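
    A minimal sketch of confidence map averaging as described: average the per-voxel confidence maps of the three segmenters and threshold the mean. The arrays are random placeholders, not MR data.

    ```python
    # Confidence map averaging (CMA) over three segmentation methods.
    import numpy as np

    rng = np.random.default_rng(0)
    fuzzy_conn = rng.random((64, 64, 32))  # per-voxel tumor confidence in [0, 1]
    growcut    = rng.random((64, 64, 32))
    voxel_clf  = rng.random((64, 64, 32))

    mean_conf = (fuzzy_conn + growcut + voxel_clf) / 3.0
    ensemble_mask = mean_conf > 0.5        # final ensemble segmentation
    print(ensemble_mask.sum(), "voxels labeled tumor")
    ```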

  14. Comparative study of landslides susceptibility mapping methods: Multi-Criteria Decision Making (MCDM) and Artificial Neural Network (ANN)

    NASA Astrophysics Data System (ADS)

    Salleh, S. A.; Rahman, A. S. A. Abd; Othman, A. N.; Mohd, W. M. N. Wan

    2018-02-01

    As different approaches produce different results, it is crucial to determine which methods are accurate for analyzing the event. This research aims to compare the Rank Reciprocal (MCDM) and Artificial Neural Network (ANN) analysis techniques in determining zones susceptible to landslide hazard. The study is based on data obtained from various sources such as the local authority, Dewan Bandaraya Kuala Lumpur (DBKL), Jabatan Kerja Raya (JKR) and other agencies. The data were analysed and processed using ArcGIS. The results were compared by quantifying the risk ranking and area differential, and were also compared with the zonation map classified by DBKL. The results suggest that the ANN method gives better accuracy than MCDM, with an accuracy assessment 18.18% higher than that of the MCDM approach. This indicates that ANN provides more reliable results, probably owing to its ability to learn from the environment and thus portray realistic and accurate results.

  15. Virtual screening of cocrystal formers for CL-20

    NASA Astrophysics Data System (ADS)

    Zhou, Jun-Hong; Chen, Min-Bo; Chen, Wei-Ming; Shi, Liang-Wei; Zhang, Chao-Yang; Li, Hong-Zhen

    2014-08-01

    According to the structural characteristics of 2,4,6,8,10,12-hexanitrohexaazaisowurtzitane (CL-20) and the kinetic mechanism of cocrystal formation, a method for virtually screening CL-20 cocrystal formers by the criterion of the strongest intermolecular site pairing energy (ISPE) is proposed. In this method, the strongest ISPE is taken to determine the first step of cocrystal formation. The predictions of this method for four sets of common drug-molecule cocrystals were compared with those of the total ISPE method from the reference (Musumeci et al., 2011) and with experimental results. The method was then applied to virtually screen CL-20 cocrystal formers, and the predictions were compared with experimental results.

  16. Morbidity and chronic pain following different techniques of caesarean section: A comparative study.

    PubMed

    Belci, D; Di Renzo, G C; Stark, M; Đurić, J; Zoričić, D; Belci, M; Peteh, L L

    2015-01-01

    Research examining long-term outcomes after childbirth performed with different techniques of caesarean section has been limited and does not provide information on morbidity and neuropathic pain. The study compares two groups of patients, those operated on with the 'Traditional' method using a Pfannenstiel incision and those operated on with the 'Misgav Ladach' method, ≥ 5 years after the operation. We find better long-term postoperative results in the patients treated with the Misgav Ladach method compared with the Traditional method. The results were statistically better regarding the intensity of pain, the presence of neuropathic and chronic pain, and the level of satisfaction with the cosmetic appearance of the scar.

  17. An analysis of initial acquisition and maintenance of sight words following picture matching and copy, cover, and compare teaching methods.

    PubMed

    Conley, Colleen M; Derby, K Mark; Roberts-Gwinn, Michelle; Weber, Kimberly P; McLaughlin, T E

    2004-01-01

    This study compared the copy, cover, and compare method to a picture-word matching method for teaching sight word recognition. Participants were 5 kindergarten students with less than preprimer sight word vocabularies who were enrolled in a public school in the Pacific Northwest. A multielement design was used to evaluate the effects of the two interventions. Outcomes suggested that sight words taught using the copy, cover, and compare method resulted in better maintenance of word recognition when compared to the picture-matching intervention. Benefits to students and the practicality of employing the word-level teaching methods are discussed.

  18. Novel two wavelength spectrophotometric methods for simultaneous determination of binary mixtures with severely overlapping spectra

    NASA Astrophysics Data System (ADS)

    Lotfy, Hayam M.; Saleh, Sarah S.; Hassan, Nagiba Y.; Salem, Hesham

    2015-02-01

    This work presents the application of different spectrophotometric techniques based on two wavelengths for the determination of severely overlapped spectral components in a binary mixture without prior separation. Four novel spectrophotometric methods were developed, namely: the induced dual wavelength method (IDW), the dual wavelength resolution technique (DWRT), the advanced amplitude modulation method (AAM) and the induced amplitude modulation method (IAM). The results of the novel methods were compared with those of three well-established methods: the dual wavelength method (DW), Vierordt's method (VD) and the bivariate method (BV). The developed methods were applied to the analysis of the binary mixture of hydrocortisone acetate (HCA) and fusidic acid (FSA) formulated as a topical cream, together with the determination of the methyl paraben and propyl paraben present as preservatives. The specificity of the novel methods was investigated by analyzing laboratory-prepared mixtures and the combined dosage form. The methods were validated as per ICH guidelines, where accuracy, repeatability, inter-day precision and robustness were found to be within acceptable limits. The results obtained by the proposed methods were statistically compared with those of official methods, and no significant difference was observed. No difference was observed when the results were compared with the reported HPLC method, which proves that the developed methods could be an alternative to HPLC techniques in quality control laboratories.

  19. Testing Multivariate Adaptive Regression Splines (MARS) as a Method of Land Cover Classification of TERRA-ASTER Satellite Images.

    PubMed

    Quirós, Elia; Felicísimo, Angel M; Cuartero, Aurora

    2009-01-01

    This work proposes a new method to classify multi-spectral satellite images based on multivariate adaptive regression splines (MARS) and compares this classification system with the more common parallelepiped and maximum likelihood (ML) methods. We apply the classification methods to the land cover classification of a test zone located in southwestern Spain. The basis of the MARS method and its associated procedures are explained in detail, and the area under the ROC curve (AUC) is compared for the three methods. The results show that the MARS method provides better results than the parallelepiped method in all cases, and it provides better results than the maximum likelihood method in 13 cases out of 17. These results demonstrate that the MARS method can be used in isolation or in combination with other methods to improve the accuracy of soil cover classification. The improvement is statistically significant according to the Wilcoxon signed rank test.
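
    A short sketch of the significance check mentioned above, assuming paired per-zone AUCs for MARS and maximum likelihood; the AUC values are invented.

    ```python
    # Wilcoxon signed-rank test on paired AUCs (MARS vs. maximum likelihood).
    from scipy.stats import wilcoxon

    auc_mars = [0.91, 0.88, 0.93, 0.90, 0.87, 0.92, 0.89, 0.94]
    auc_ml   = [0.88, 0.86, 0.91, 0.88, 0.88, 0.89, 0.85, 0.90]

    stat, p = wilcoxon(auc_mars, auc_ml)
    print(f"W = {stat}, p = {p:.3f}")  # small p -> improvement is significant
    ```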

  1. Highly Effective DNA Extraction Method for Nuclear Short Tandem Repeat Testing of Skeletal Remains from Mass Graves

    PubMed Central

    Davoren, Jon; Vanek, Daniel; Konjhodzić, Rijad; Crews, John; Huffine, Edwin; Parsons, Thomas J.

    2007-01-01

    Aim: To quantitatively compare a silica extraction method with a commonly used phenol/chloroform extraction method for DNA analysis of specimens exhumed from mass graves. Methods: DNA was extracted from twenty randomly chosen femur samples using the International Commission on Missing Persons (ICMP) silica method, based on the Qiagen Blood Maxi Kit, and compared with DNA extracted by the standard phenol/chloroform-based method. The efficacy of the extraction methods was compared by real-time polymerase chain reaction (PCR), to measure DNA quantity and the presence of inhibitors, and by amplification with the PowerPlex 16 (PP16) multiplex nuclear short tandem repeat (STR) kit. Results: DNA quantification showed that the silica-based method extracted on average 1.94 ng of DNA per gram of bone (range 0.25-9.58 ng/g), compared with only 0.68 ng/g for the organic method (range 0.0016-4.4880 ng/g). Inhibition tests showed that there were on average significantly lower levels of PCR inhibitors in DNA isolated by the organic method. When amplified with PP16, all samples extracted by the silica-based method produced full 16-locus profiles, while only 75% of the DNA extracts obtained by the organic technique did so. Conclusions: The silica-based extraction method showed better results in nuclear STR typing from degraded bone samples than the commonly used phenol/chloroform method. PMID:17696302

  2. Simple and fast polydimethylsiloxane (PDMS) patterning using a cutting plotter and vinyl adhesives to achieve etching results.

    PubMed

    Hyun Kim; Sun-Young Yoo; Ji Sung Kim; Zihuan Wang; Woon Hee Lee; Kyo-In Koo; Jong-Mo Seo; Dong-Il Cho

    2017-07-01

    Inhibition of polydimethylsiloxane (PDMS) polymerization can be observed when PDMS is spin-coated over vinyl substrates. The degree of polymerization, partial or full curing, depends on the PDMS thickness coated over the vinyl substrate. This characteristic was exploited to achieve a simple and fast PDMS patterning method using a vinyl adhesive layer patterned with a cutting plotter. The proposed patterning method produced results resembling PDMS etching. Therefore, patterning of PDMS over PDMS, glass, silicon, and gold substrates was tested to compare the results with conventional etching methods. Vinyl stencils with widths ranging from 200 μm to 1500 μm were used for the procedure. To evaluate the accuracy of the cutting plotter, stencil widths designed in the AutoCAD software were compared with the actual stencil widths. Furthermore, the method's accuracy was also evaluated by comparing the widths of the actual stencils and the etched PDMS results.

  3. Comparison of nine brands of membrane filter and the most-probable-number methods for total coliform enumeration in sewage-contaminated drinking water.

    PubMed Central

    Tobin, R S; Lomax, P; Kushner, D J

    1980-01-01

    Nine different brands of membrane filter were compared in the membrane filtration (MF) method, and those with the highest yields were compared against the most-probable-number (MPN) multiple-tube method for total coliform enumeration in simulated sewage-contaminated tap water. The water was chlorinated for 30 min to subject the organisms to stresses similar to those encountered during treatment and distribution of drinking water. Significant differences were observed among membranes in four of the six experiments, with two- to four-times-higher recoveries between the membranes at each extreme of recovery. When results from the membranes with the highest total coliform recovery rate were compared with the MPN results, the MF results were found significantly higher in one experiment and equivalent to the MPN results in the other five experiments. A comparison was made of the species enumerated by these methods; in general the two methods enumerated a similar spectrum of organisms, with some indication that the MF method was subject to greater interference by Aeromonas. PMID:7469407

  4. COMPARE : a method for analyzing investment alternatives in industrial wood and bark energy systems

    Treesearch

    Peter J. Ince

    1983-01-01

    COMPARE is a FORTRAN computer program resulting from a study to develop methods for comparative economic analysis of alternatives in industrial wood and bark energy systems. COMPARE provides complete guidelines for economic analysis of wood and bark energy systems. As such, COMPARE can be useful to those who have only basic familiarity with investment analysis of wood...

  5. Preliminary comparative assessment of PM10 hourly measurement results from new monitoring stations type using stochastic and exploratory methodology and models

    NASA Astrophysics Data System (ADS)

    Czechowski, Piotr Oskar; Owczarek, Tomasz; Badyda, Artur; Majewski, Grzegorz; Rogulski, Mariusz; Ogrodnik, Paweł

    2018-01-01

    The paper presents key issues from the preliminary stage of a proposed extended equivalence assessment of measurement results for new portable devices: the comparability of hourly PM10 concentration series with reference station measurements using statistical methods. The article presents technical aspects of the new portable meters. Emphasis was placed on assessing the comparability of the results using a stochastic and exploratory methodological concept. The concept is based on the observation that simple comparability of result series in the time domain is insufficient; regularity should be compared in three complementary fields of statistical modeling: time, frequency and space. The proposal is based on models of five annual series of measurement results from the new mobile devices and from a WIOS (Provincial Environmental Protection Inspectorate) reference station located in the city of Nowy Sacz. The obtained results indicate both the completeness of the comparison methodology and the high correspondence of the new devices' measurement results with the reference.

  6. Lipidomic analysis of biological samples: Comparison of liquid chromatography, supercritical fluid chromatography and direct infusion mass spectrometry methods.

    PubMed

    Lísa, Miroslav; Cífková, Eva; Khalikova, Maria; Ovčačíková, Magdaléna; Holčapek, Michal

    2017-11-24

    Lipidomic analysis of biological samples in clinical research represents a challenging task for analytical methods, given the large number of samples and their extreme complexity. In this work, we compare direct infusion (DI) and chromatography-mass spectrometry (MS) lipidomic approaches, represented by three analytical methods, in terms of comprehensiveness, sample throughput, and validation results for the lipidomic analysis of biological samples represented by tumor tissue, surrounding normal tissue, plasma, and erythrocytes of kidney cancer patients. The methods are compared in one laboratory using an identical analytical protocol to ensure comparable conditions. An ultrahigh-performance liquid chromatography/MS (UHPLC/MS) method in hydrophilic interaction liquid chromatography mode and a DI-MS method are used for this comparison as the most widely used methods for lipidomic analysis, together with an ultrahigh-performance supercritical fluid chromatography/MS (UHPSFC/MS) method showing promising results in metabolomic analyses. The nontargeted analysis of pooled samples is performed using all tested methods, and 610 lipid species within 23 lipid classes are identified. The DI method provides the most comprehensive results, owing to the identification of some polar lipid classes that are not identified by the UHPLC and UHPSFC methods. On the other hand, the UHPSFC method provides excellent sensitivity for less polar lipid classes and the highest sample throughput, with a 10 min method time. The sample consumption of the DI method is 125 times higher than that of the other methods, while only 40 μL of organic solvent is used for one sample analysis, compared to 3.5 mL and 4.9 mL in the case of the UHPLC and UHPSFC methods, respectively. The methods are validated for the quantitative lipidomic analysis of plasma samples with one internal standard for each lipid class. The results show the applicability of all tested methods for the lipidomic analysis of biological samples, depending on the analysis requirements.

  7. A comparative analysis of spectral exponent estimation techniques for 1/f^β processes with applications to the analysis of stride interval time series

    PubMed Central

    Schaefer, Alexander; Brach, Jennifer S.; Perera, Subashan; Sejdić, Ervin

    2013-01-01

    Background: The time evolution and complex interactions of many nonlinear systems, such as the human body, result in fractal types of parameter outcomes that exhibit self-similarity over long time scales, described by a power law in the frequency spectrum S(f) ∝ 1/f^β. The scaling exponent β is thus often interpreted as a "biomarker" of relative health and decline. New Method: This paper presents a thorough comparative numerical analysis of fractal characterization techniques with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis complements previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. Results: The results of our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Comparison with Existing Methods: Class-dependent methods proved to be unsuitable for physiological time series. Detrended fluctuation analysis, the most prevalent method in the literature, exhibited large estimation variances. Conclusions: The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series. PMID:24200509
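
    For orientation, a sketch of one simple spectral-exponent estimate: fitting the log-log slope of a Welch power spectrum of a synthetic 1/f^β series. The averaged wavelet coefficient method favored by the paper is wavelet-based rather than PSD-based; this only illustrates the quantity being estimated.

    ```python
    # Estimate beta of a 1/f^beta process from the slope of its power spectrum.
    import numpy as np
    from scipy.signal import welch

    rng = np.random.default_rng(1)
    x = np.cumsum(rng.standard_normal(4096))  # Brownian noise, beta ~ 2

    f, pxx = welch(x, nperseg=1024)
    mask = f > 0                              # drop the zero-frequency bin
    beta = -np.polyfit(np.log(f[mask]), np.log(pxx[mask]), 1)[0]
    print(f"estimated beta = {beta:.2f}")
    ```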

  8. Semiquantitative determination of mesophilic, aerobic microorganisms in cocoa products using the Soleris NF-TVC method.

    PubMed

    Montei, Carolyn; McDougal, Susan; Mozola, Mark; Rice, Jennifer

    2014-01-01

    The Soleris Non-fermenting Total Viable Count method was previously validated for a wide variety of food products, including cocoa powder. A matrix extension study was conducted to validate the method for use with cocoa butter and cocoa liquor. Test samples included naturally contaminated cocoa liquor and cocoa butter inoculated with natural microbial flora derived from cocoa liquor. A probability of detection statistical model was used to compare Soleris results at multiple test thresholds (dilutions) with aerobic plate counts determined using the AOAC Official Method 966.23 dilution plating method. Results of the two methods were not statistically different at any dilution level in any of the three trials conducted. The Soleris method offers the advantage of results within 24 h, compared to the 48 h required by standard dilution plating methods.

  9. The combination of the error correction methods of GAFCHROMIC EBT3 film

    PubMed Central

    Li, Yinghui; Chen, Lixin; Zhu, Jinhan; Liu, Xiaowei

    2017-01-01

    Purpose: The aim of this study was to combine a set of methods for radiochromic film dosimetry, including calibration, correction for lateral effects and a proposed triple-channel analysis. These methods can be applied to GAFCHROMIC EBT3 film dosimetry for radiation field analysis and verification of IMRT plans. Methods: A single-film exposure was used to achieve dose calibration, and the accuracy was verified by comparison with the square-field calibration method. Before performing the dose analysis, the lateral effects on pixel values were corrected. The position dependence of the lateral effect was fitted by a parabolic function, and the curvature factors of different dose levels were obtained using a quadratic formula. After lateral effect correction, a triple-channel analysis was used to reduce disturbances and convert scanned images from films into dose maps. The dose profiles of open fields were measured using EBT3 films and compared with data obtained using an ionization chamber. Eighteen IMRT plans with different field sizes were measured and verified with EBT3 films applying our methods and compared with TPS dose maps, to check the correct implementation of the film dosimetry proposed here. Results: The uncertainty from lateral effects can be reduced to ±1 cGy. Compared with the results of Micke A et al., the residual disturbances of the proposed triple-channel method at 48, 176 and 415 cGy are 5.3%, 20.9% and 31.4% smaller, respectively. Compared with the ionization chamber results, the differences in the off-axis ratio and percentage depth dose are within 1% and 2%, respectively. For IMRT verification, there was no difference between the two triple-channel methods. Compared with correction by the triple-channel method alone, the IMRT results of the combined method (including lateral effect correction and our present triple-channel method) show a 2% improvement for large IMRT fields with the 3%/3 mm criteria. PMID:28750023

  10. Elongation measurement using 1-dimensional image correlation method

    NASA Astrophysics Data System (ADS)

    Phongwisit, Phachara; Kamoldilok, Surachart; Buranasiri, Prathan

    2016-11-01

    The aim of this paper was to study, set up, and calibrate an elongation measurement using the 1-Dimensional Image Correlation (1-DIC) method. To confirm the correctness of our method and setup, we calibrated it against another method. In this paper, we used a small spring as a sample, expressing the result in terms of the spring constant. Following the fundamentals of the image correlation method, images of the sample before and after deformation were compared. By comparing the pixel locations of a reference point in both images, the spring's elongation was calculated. The results were then compared with the spring constant found from Hooke's law, and an error of 5 percent was found. This DIC method would then be applied to measure the elongation of different kinds of small fiber samples.
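
    A minimal sketch of the 1-DIC idea on synthetic line profiles: find the pixel shift of a marker by 1-D cross-correlation, convert it to length with an assumed calibration, and recover the spring constant from Hooke's law. All values are illustrative.

    ```python
    # 1-D image correlation: pixel shift of a marker, then k = F / x.
    import numpy as np

    feature = np.exp(-np.linspace(-3, 3, 21) ** 2)   # bright marker profile
    line0 = np.zeros(200); line0[90:111] += feature  # before loading
    line1 = np.zeros(200); line1[115:136] += feature # after loading

    lag = np.argmax(np.correlate(line1, line0, mode="full")) - (len(line0) - 1)
    mm_per_px = 0.05                                 # calibration (assumed)
    force_n = 0.49                                   # applied load (assumed)
    k = force_n / (lag * mm_per_px / 1000.0)         # Hooke's law, N/m
    print(f"shift = {lag} px, k = {k:.0f} N/m")
    ```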

  11. Teaching Business Simulation Games: Comparing Achievements Frontal Teaching vs. eLearning

    NASA Astrophysics Data System (ADS)

    Bregman, David; Keinan, Gila; Korman, Arik; Raanan, Yossi

    This paper addresses the issue of comparing results achieved by students taught the same course via two drastically different methods - a regular, frontal teaching method and an eLearning method. The subject taught required intensive communication among the students, thus making the eLearning students, a priori, less likely to do well in it. The research, comparing the achievements of students in a business simulation game over three semesters, shows that the use of the eLearning method did not result in any differences in performance, grades or cooperation, thus strengthening the case for using eLearning in this type of course.

  12. Region-based multi-step optic disk and cup segmentation from color fundus image

    NASA Astrophysics Data System (ADS)

    Xiao, Di; Lock, Jane; Manresa, Javier Moreno; Vignarajan, Janardhan; Tay-Kearney, Mei-Ling; Kanagasingam, Yogesan

    2013-02-01

    The retinal optic cup-to-disk ratio (CDR) is one of the important indicators of glaucomatous neuropathy. In this paper, we propose a novel multi-step 4-quadrant thresholding method for optic disk segmentation and a multi-step temporal-nasal segmenting method for optic cup segmentation, based on blood-vessel-inpainted HSL lightness images and green-channel images. The performance of the proposed methods was evaluated on a group of color fundus images and compared with manual outlining results from two experts. Dice scores between the automatically detected and manually outlined disk and cup regions were computed and compared. Vertical CDRs were also compared among the three results. The preliminary experiment has demonstrated the robustness of the method for automatic optic disk and cup segmentation and its potential value for clinical application.
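
    A small sketch of the Dice overlap used to compare automatic and manual outlines; the binary masks are toy arrays, not fundus segmentations.

    ```python
    # Dice score between two binary masks.
    import numpy as np

    def dice(a: np.ndarray, b: np.ndarray) -> float:
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    auto_mask = np.zeros((100, 100), dtype=bool)
    auto_mask[30:70, 30:70] = True                 # automatic disk region
    manual_mask = np.zeros((100, 100), dtype=bool)
    manual_mask[33:72, 31:69] = True               # expert outline
    print(f"Dice = {dice(auto_mask, manual_mask):.3f}")
    ```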

  13. A modified homotopy perturbation method and the axial secular frequencies of a non-linear ion trap.

    PubMed

    Doroudi, Alireza

    2012-01-01

    In this paper, a modified version of the homotopy perturbation method, which has been applied to non-linear oscillations by V. Marinca, is used for the calculation of the axial secular frequencies of a non-linear ion trap with hexapole and octopole superpositions. The axial equation of ion motion in the rapidly oscillating field of an ion trap can be transformed into a Duffing-like equation. With only the octopole superposition the resulting non-linear equation is symmetric; in the presence of both hexapole and octopole superpositions, it is asymmetric. The modified homotopy perturbation method is used to solve the resulting non-linear equations, yielding the ion secular frequencies as a function of the non-linear field parameters. The calculated secular frequencies are compared with the results of the standard homotopy perturbation method and with the exact results. With only the hexapole superposition, the results of this paper and of the homotopy perturbation method are the same; with both hexapole and octopole superpositions, the results of this paper are much closer to the exact results than those of the homotopy perturbation method.
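
    For comparison with perturbation results, the amplitude-dependent secular frequency of such a Duffing-like equation can also be checked by direct numerical integration; a sketch with illustrative (not trap-specific) coefficients.

    ```python
    # Numerical secular frequency of x'' + w0^2 x + e2 x^2 + e3 x^3 = 0.
    import numpy as np
    from scipy.integrate import solve_ivp

    w0, e2, e3, amp = 1.0, 0.1, 0.2, 0.5  # hexapole-like and octopole-like terms

    def rhs(t, y):
        x, v = y
        return [v, -(w0 ** 2) * x - e2 * x ** 2 - e3 * x ** 3]

    sol = solve_ivp(rhs, (0.0, 50.0), [amp, 0.0], max_step=0.01, dense_output=True)
    ts = np.linspace(0.0, 50.0, 50001)
    x = sol.sol(ts)[0]
    ups = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]  # upward zero crossings
    period = np.mean(np.diff(ts[ups]))
    print(f"secular frequency = {2 * np.pi / period:.4f} rad/s (w0 = {w0})")
    ```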

  14. A Comparison of Computational Aeroacoustic Prediction Methods for Transonic Rotor Noise

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.; Lyrintzis, Anastasios; Koutsavdis, Evangelos K.

    1996-01-01

    This paper compares two methods for predicting transonic rotor noise for helicopters in hover and forward flight. Both methods rely on a computational fluid dynamics (CFD) solution as input to predict the acoustic near and far fields. For this work, the same full-potential rotor code has been used to compute the CFD solution for both acoustic methods. The first method employs the acoustic analogy as embodied in the Ffowcs Williams-Hawkings (FW-H) equation, including the quadrupole term. The second method uses a rotating Kirchhoff formulation. Computed results from both methods are compared with one another and with experimental data for both hover and advancing rotor cases. The results are quite good for all cases tested. The sensitivity of both methods to CFD grid resolution and to the choice of the integration surface/volume is investigated. The computational requirements of both methods are comparable; in both cases these requirements are much less than those of the CFD solution.

  15. Evaluation of hierarchical agglomerative cluster analysis methods for discrimination of primary biological aerosol

    NASA Astrophysics Data System (ADS)

    Crawford, I.; Ruske, S.; Topping, D. O.; Gallagher, M. W.

    2015-07-01

    In this paper we present improved methods for discriminating and quantifying Primary Biological Aerosol Particles (PBAP) by applying hierarchical agglomerative cluster analysis to multi-parameter ultraviolet light-induced fluorescence (UV-LIF) spectrometer data. The methods employed in this study can be applied to data sets in excess of 1×10^6 points on a desktop computer, allowing each fluorescent particle in a dataset to be explicitly clustered. This reduces the potential for misattribution found in the subsampling and comparative attribution methods used in previous approaches, improving our capacity to discriminate and quantify PBAP meta-classes. We evaluate the performance of several hierarchical agglomerative cluster analysis linkages and data normalisation methods using laboratory samples of known particle types and an ambient dataset. Fluorescent and non-fluorescent polystyrene latex spheres were sampled with a Wideband Integrated Bioaerosol Spectrometer (WIBS-4), where the optical size, asymmetry factor and fluorescence measurements were used as inputs to the analysis package. It was found that the Ward linkage with z-score or range normalisation performed best, correctly attributing 98% and 98.1% of the data points, respectively. The best performing methods were applied to the BEACHON-RoMBAS ambient dataset, where the z-score and range normalisation methods yielded similar results, each producing clusters representative of fungal spores and bacterial aerosol, consistent with previous results. The z-score result was compared to clusters generated with a previous approach (the WIBS AnalysiS Program, WASP), where we observe that the subsampling and comparative attribution method employed by WASP results in overestimation of the fungal spore concentration by a factor of 1.5 and underestimation of the bacterial aerosol concentration by a factor of 5. We suggest that this is likely due to errors arising from misattribution due to poor centroid definition and failure to assign particles to a cluster, as a result of the subsampling and comparative attribution method employed by WASP. The methods used here allow the entire fluorescent particle population to be analysed, yielding an explicit cluster attribution for each particle, improving cluster centroid definition and our capacity to discriminate and quantify PBAP meta-classes compared to previous approaches.
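
    A minimal sketch of the best-performing recipe reported, z-score normalisation followed by Ward-linkage hierarchical clustering, using synthetic stand-ins for the WIBS-4 features.

    ```python
    # z-score normalisation + Ward-linkage agglomerative clustering.
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.stats import zscore

    rng = np.random.default_rng(2)
    a = rng.normal([1.0, 0.5, 3.0], 0.2, (200, 3))  # e.g. fungal-spore-like
    b = rng.normal([0.5, 1.5, 1.0], 0.2, (200, 3))  # e.g. bacteria-like
    features = zscore(np.vstack([a, b]), axis=0)    # size, asymmetry, fluorescence

    tree = linkage(features, method="ward")
    labels = fcluster(tree, t=2, criterion="maxclust")
    print(np.bincount(labels)[1:])                  # particles per cluster
    ```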

  16. Comparative study of performance of neutral axis tracking based damage detection

    NASA Astrophysics Data System (ADS)

    Soman, R.; Malinowski, P.; Ostachowicz, W.

    2015-07-01

    This paper presents a comparative study of a novel SHM technique for damage isolation. The performance of the Neutral Axis (NA) tracking based damage detection strategy is compared with other popular vibration-based damage detection methods, viz. ECOMAC, the Mode Shape Curvature Method and the Strain Flexibility Index Method. The sensitivity of the novel method is compared under changing ambient temperature conditions and in the presence of measurement noise. Finite Element Analysis (FEA) of the DTU 10 MW Wind Turbine was conducted to compare the local damage identification capability of each method, and the results are presented. Under the conditions examined, the proposed method was found to be robust to ambient condition changes and measurement noise, and its damage identification is either on par with the methods mentioned in the literature or better under the investigated damage scenarios.

  17. Noise robustness of a combined phase retrieval and reconstruction method for phase-contrast tomography.

    PubMed

    Kongskov, Rasmus Dalgas; Jørgensen, Jakob Sauer; Poulsen, Henning Friis; Hansen, Per Christian

    2016-04-01

    Classical reconstruction methods for phase-contrast tomography consist of two stages: phase retrieval and tomographic reconstruction. A novel algebraic method combining the two was suggested by Kostenko et al. [Opt. Express 21, 12185 (2013), doi:10.1364/OE.21.012185], and preliminary results demonstrated improved reconstruction compared with a given two-stage method. Using simulated free-space propagation experiments with a single sample-detector distance, we thoroughly compare the novel method with the two-stage method to address limitations of the preliminary results. We demonstrate that the novel method is substantially more robust toward noise; our simulations point to a possible reduction in counting times by an order of magnitude.

  18. Input respiratory impedance in mice: comparison between the flow-based and the wavetube method to perform the forced oscillation technique.

    PubMed

    Mori, V; Oliveira, M A; Vargas, M H M; da Cunha, A A; de Souza, R G; Pitrez, P M; Moriya, H T

    2017-06-01

    Objective and approach: In this study, we estimated the constant phase model (CPM) parameters from the respiratory impedance of male BALB/c mice by performing the forced oscillation technique (FOT) in a control group (n = 8) and in a murine model of asthma (OVA) (n = 10). We then compared the results obtained by two different methods, a commercial device (flexiVent-flexiWare 7.X; SCIREQ, Montreal, Canada) (FXV) and a wavetube-method device (Sly et al 2003 J. Appl. Physiol. 94 1460-6) (WVT), since results from different methods may not be directly comparable. First, we compared the results by performing a two-way analysis of variance (ANOVA) for resistance, elastance and tissue damping. We found statistically significant differences in all CPM parameters, except resistance, when comparing the Control and OVA groups. When comparing devices, we found statistically significant differences in resistance, while differences in elastance were not observed; for tissue damping, the results from WVT were higher than those from FXV. Finally, when comparing the relative variation of the CPM parameters between the Control and OVA groups in both devices, no significant differences were observed for any parameter. We therefore conclude that this assessment can compensate for the effect of using different cannulas. Furthermore, tissue damping differences between groups can be compensated for, since bronchoconstrictors were not used. We believe that relative variations in the results between groups can serve as a comparison parameter when using different equipment without bronchoconstrictor administration.

  19. A Rational Method for Ranking Engineering Programs.

    ERIC Educational Resources Information Center

    Glower, Donald D.

    1980-01-01

    Compares two methods for ranking academic programs: the opinion poll vs. examination of the career successes of a program's alumni. For the latter, "Who's Who in Engineering" and levels of research funding provided data. Tables display the resulting data and compare rankings by the two methods for chemical engineering and civil engineering. (CS)

  20. WOODSTOVE EMISSION SAMPLING METHODS COMPARABILITY ANALYSIS AND IN-SITU EVALUATION OF NEW TECHNOLOGY WOODSTOVES

    EPA Science Inventory

    This report compares simultaneous results from three woodstove sampling methods and evaluates particulate emission rates of conventional and Oregon-certified catalytic and noncatalytic woodstoves in six Portland, OR, houses. EPA Methods 5G and 5H and the field emission sampler (A...

  1. Vitamin B12 assays compared by use of patients sera with low vitamin B12 content

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheridan, B.L.; Pearce, L.C.

    1985-05-01

    The authors compared four radioisotope dilution (RD) methods and a microbiological assay for measuring concentrations of vitamin B12 in a selected panel of serum samples from patients known to be deficient in the vitamin. Low (less than 100 ng/L) and borderline (100-180 ng/L) results were similar between methods, but use of the manufacturers' recommended ranges for borderline results would have changed the diagnostic classifications for 22 of 38 samples. Results of all the RD methods inter-correlated well, but less so with the microbiological assay. Borderline, nondiagnostic results were common to all methods, and no apparent advantage was gained from using the microbiological assay.

  2. A comparative study of cultural methods for the detection of Salmonella in feed and feed ingredients

    PubMed Central

    Koyuncu, Sevinc; Haggblom, Per

    2009-01-01

    Background: Animal feed as a source of infection to food-producing animals is much debated. In order to increase our present knowledge about possible feed transmission, it is important to know that the present isolation methods for Salmonella are reliable also for feed materials. In a comparative study, the ability of the standard method used for isolation of Salmonella in feed in the Nordic countries, the NMKL71 method (Nordic Committee on Food Analysis), was compared with the Modified Semisolid Rappaport Vassiliadis (MSRV) method and the international standard method (EN ISO 6579:2002). Five different feed materials were investigated, namely wheat grain, soybean meal, rape seed meal, palm kernel meal and pellets of pig feed, as well as scrapings from a feed mill elevator. Four different levels of the Salmonella serotypes S. Typhimurium, S. Cubana and S. Yoruba were added to each feed material, respectively. For all methods, pre-enrichment in Buffered Peptone Water (BPW) was carried out, followed by enrichment in the different selective media and finally plating on selective agar media. Results: The results obtained with all three methods showed no differences in detection levels, with an accuracy and sensitivity of 65% and 56%, respectively. However, Müller-Kauffmann tetrathionate-novobiocin broth (MKTTn) performed less well due to many false-negative results on Brilliant Green agar (BGA) plates. Compared with other feed materials, palm kernel meal showed a higher detection level with all serotypes and methods tested. Conclusion: The results of this study showed that the accuracy, sensitivity and specificity of the investigated cultural methods were equivalent. However, the detection levels for different feed and feed ingredients varied considerably. PMID:19192298

  3. Comparative analysis of methods for concentrating venom from jellyfish Rhopilema esculentum Kishinouye

    NASA Astrophysics Data System (ADS)

    Li, Cuiping; Yu, Huahua; Feng, Jinhua; Chen, Xiaolin; Li, Pengcheng

    2009-02-01

    In this study, several methods were compared for their efficiency in concentrating venom from the tentacles of the jellyfish Rhopilema esculentum Kishinouye. The results show that methods using either freeze-drying or gel absorption to remove water are not applicable, due to the low concentration of the compounds dissolved. Although the recovery efficiency and the total venom obtained using the dialysis dehydration method are high, some proteins can be lost during the concentrating process. Compared to the lyophilization method, ultrafiltration is a simple way to concentrate the compounds at a high percentage, but the hemolytic activities of the proteins obtained by ultrafiltration appear to be lower. Our results suggest that, overall, lyophilization is the best and recommended method to concentrate venom from the tentacles of jellyfish, showing not only high recovery efficiency for the venoms but also high hemolytic activities.

  4. CompareSVM: supervised, Support Vector Machine (SVM) inference of gene regulatory networks.

    PubMed

    Gillani, Zeeshan; Akash, Muhammad Sajid Hamid; Rahaman, M D Matiur; Chen, Ming

    2014-11-30

    Prediction of gene regulatory networks (GRNs) from expression data is a challenging task. Many methods have been developed to address this challenge, ranging from supervised to unsupervised approaches, and the most promising among them are based on support vector machines (SVMs). There is a need for a comprehensive analysis of the prediction accuracy of supervised SVM methods using different kernels under different biological experimental conditions and network sizes. We developed a tool (CompareSVM) based on SVM to compare different kernel methods for the inference of GRNs. Using CompareSVM, we investigated and evaluated different SVM kernel methods in detail on simulated microarray datasets of different sizes. The results obtained from CompareSVM showed that the accuracy of an inference method depends upon the nature of the experimental condition and the size of the network. For small networks (<200 nodes), and on average over all network sizes, the SVM Gaussian kernel outperformed all the other inference methods on knockout, knockdown, and multifactorial datasets. For networks with a large number of nodes (~500), the choice of inference method depends upon the nature of the experimental condition. CompareSVM is available at http://bis.zju.edu.cn/CompareSVM/ .
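    As a rough illustration of the kernel comparison such a tool automates, the sketch below cross-validates several SVM kernels on a synthetic stand-in dataset; the features, labels, and settings are invented for illustration and are not CompareSVM's GRN data (scikit-learn assumed).

```python
# Illustrative sketch (not the CompareSVM tool itself): compare SVM kernels by
# cross-validated accuracy on a synthetic stand-in for gene-pair feature vectors.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical stand-in for feature vectors labelled regulatory / non-regulatory.
X, y = make_classification(n_samples=500, n_features=50, random_state=0)

for kernel in ("linear", "poly", "rbf", "sigmoid"):  # "rbf" is the Gaussian kernel
    scores = cross_val_score(SVC(kernel=kernel, gamma="scale"), X, y, cv=5)
    print(f"{kernel:8s} mean CV accuracy: {scores.mean():.3f}")
```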

  5. Development of a Coordinate Transformation method for direct georeferencing in map projection frames

    NASA Astrophysics Data System (ADS)

    Zhao, Haitao; Zhang, Bing; Wu, Changshan; Zuo, Zhengli; Chen, Zhengchao

    2013-03-01

    This paper develops a novel Coordinate Transformation method (CT-method), with which the orientation angles (roll, pitch, heading) of the local tangent frame of the GPS/INS system are transformed into those (omega, phi, kappa) of the map projection frame for direct georeferencing (DG). Specifically, the orientation angles in the map projection frame are derived from a sequence of coordinate transformations. The effectiveness of the orientation angle transformation was verified by comparison with DG results obtained from conventional methods (the Legat method and the POSPac method) using empirical data, and the CT-method was also validated with simulated data. One advantage of the proposed method is that the orientation angles can be acquired simultaneously while calculating the position elements of the exterior orientation (EO) parameters and auxiliary point coordinates by coordinate transformation. The three methods were demonstrated and compared using empirical data. Empirical results show that the CT-method is as sound and effective as the Legat method. Compared with the POSPac method, the CT-method is more suitable for calculating EO parameters for DG in map projection frames, and the DG accuracy of the CT-method and the Legat method are at the same level. DG results of all three methods have systematic errors in height due to inconsistent length projection distortion in the vertical and horizontal components; these errors can be significantly reduced using the EO height correction technique in Legat's approach. As with the empirical data, the effectiveness of the CT-method was also demonstrated with simulated data. POSPac method: this method is presented in an Applanix POSPac software technical note (Hutton and Savina, 1997) and is implemented in the POSEO module of the POSPac software.
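    A minimal sketch of the core idea, composing rotation matrices and reading (omega, phi, kappa) off the product. The attitude convention and the identity placeholder for the map-frame rotation are assumptions for illustration only; the paper's actual chain includes datum and map projection rotations such as meridian convergence.

```python
# Sketch: derive omega-phi-kappa by composing rotation matrices.
# R_map_from_nav is a hypothetical placeholder, not the paper's transformation.
import numpy as np

def rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

roll, pitch, heading = np.radians([1.2, -0.8, 45.0])   # example GPS/INS attitude
R_nav_from_body = rz(-heading) @ ry(pitch) @ rx(roll)  # one common aerospace convention
R_map_from_nav = np.eye(3)                             # placeholder projection-frame rotation

# Extract the angles from R = Rx(omega) @ Ry(phi) @ Rz(kappa).
R = R_map_from_nav @ R_nav_from_body
phi = np.arcsin(R[0, 2])
omega = np.arctan2(-R[1, 2], R[2, 2])
kappa = np.arctan2(-R[0, 1], R[0, 0])
print(np.degrees([omega, phi, kappa]))
```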

  6. A comparative study of electrochemical machining process parameters by using GA and Taguchi method

    NASA Astrophysics Data System (ADS)

    Soni, S. K.; Thomas, B.

    2017-11-01

    In electrochemical machining, the quality of the machined surface strongly depends on the selection of optimal parameter settings. This work deals with the application of the Taguchi method and a genetic algorithm (using MATLAB) to maximize the metal removal rate and minimize the surface roughness and overcut. A comparative study is presented for the drilling of LM6 Al/B4C composites, comparing the significant impact of several machining process parameters, such as electrolyte concentration (g/L), machining voltage (V) and frequency (Hz), on the response parameters (surface roughness, material removal rate and overcut). A Taguchi L27 orthogonal array was chosen in Minitab 17 software for the investigation of the experimental results, and multi-objective optimization was performed with a genetic algorithm in MATLAB. After obtaining optimized results from the Taguchi method and the genetic algorithm, comparative results are presented.
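    For concreteness, a hedged illustration of the Taguchi signal-to-noise (S/N) ratios used to rank factor settings, written in Python rather than the paper's Minitab/MATLAB tooling; the replicate response values are invented.

```python
# Hedged illustration of Taguchi S/N ratios; values are made up, not the
# paper's experimental data.
import numpy as np

def sn_larger_is_better(y):   # e.g. material removal rate
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(1.0 / y**2))

def sn_smaller_is_better(y):  # e.g. surface roughness or overcut
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(y**2))

mrr = [0.42, 0.45, 0.40]  # hypothetical replicates at one factor setting
ra = [1.8, 1.7, 1.9]
print(sn_larger_is_better(mrr), sn_smaller_is_better(ra))
```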

  7. A propagation method with adaptive mesh grid based on wave characteristics for wave optics simulation

    NASA Astrophysics Data System (ADS)

    Tang, Qiuyan; Wang, Jing; Lv, Pin; Sun, Quan

    2015-10-01

    The propagation simulation method and the choice of mesh grid are both very important for obtaining correct propagation results in wave optics simulation. A new angular spectrum propagation method with an alterable mesh grid, based on the traditional angular spectrum method and the direct FFT method, is introduced. With this method, the sampling space after propagation is no longer constrained by the propagation method but is freely alterable. However, the choice of mesh grid on the target board directly influences the validity of the simulation results, so an adaptive mesh-choosing method based on wave characteristics is proposed to accompany the introduced propagation method. With it, appropriate mesh grids on the target board can be calculated to obtain satisfying results; for a complex initial wave field, or for propagation through inhomogeneous media, the mesh grid can likewise be calculated and set rationally. Finally, comparison with theoretical results shows that simulation results obtained with the proposed method coincide with theory, and comparison with the traditional angular spectrum method and the direct FFT method shows that the proposed method adapts to a wider range of Fresnel number conditions. That is to say, the method can simulate propagation efficiently and correctly for propagation distances from almost zero to infinity, and so can provide better support for wave propagation applications such as atmospheric optics and laser propagation.
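    A minimal fixed-mesh angular spectrum propagator, shown only to make the baseline method concrete; it assumes a square grid, drops evanescent components, and does not reproduce the paper's mesh-rescaling step. The aperture, wavelength, and distance are example values.

```python
# Baseline angular spectrum propagation (fixed mesh), a sketch of the
# traditional method the paper builds on.
import numpy as np

def angular_spectrum(u0, wavelength, dx, z):
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.abs(arg))
    H = np.where(arg >= 0, np.exp(1j * kz * z), 0)  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Example: propagate a circular aperture by 0.1 m at 633 nm.
n, dx = 512, 10e-6
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
u0 = (X**2 + Y**2 < (0.5e-3) ** 2).astype(complex)
u1 = angular_spectrum(u0, 633e-9, dx, 0.1)
print(abs(u1).max())
```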

  8. Boar taint detection: A comparison of three sensory protocols.

    PubMed

    Trautmann, Johanna; Meier-Dinkel, Lisa; Gertheiss, Jan; Mörlein, Daniel

    2016-01-01

    While recent studies state an important role of human sensory methods for daily routine control of so-called boar taint, the evaluation of different heating methods is still incomplete. This study investigated three common heating methods (microwave (MW), hot-water (HW), hot-iron (HI)) for boar fat evaluation. The comparison was carried out on 72 samples with a 10-person sensory panel. The heating method significantly affected the probability of a deviant rating. Compared to an assumed 'gold standard' (chemical analysis), the performance was best for HI when both sensitivity and specificity were considered. The results show the superiority of the panel result compared to individual assessors. However, the consistency of the individual sensory ratings was not significantly different between MW, HW, and HI. The three protocols showed only fair to moderate agreement. Concluding from the present results, the hot-iron method appears to be advantageous for boar taint evaluation as compared to microwave and hot-water. Copyright © 2015. Published by Elsevier Ltd.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    I. W. Ginsberg

    Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.
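    A sketch of the two fingerprinting routes being compared, applied to a synthetic spectrum rather than a HYDICE signature; it assumes SciPy and the PyWavelets package are available.

```python
# Two routes to a multiscale "fingerprint": (a) convolution with
# first-derivative Gaussian filters, (b) a CWT with a Gaussian-derivative wavelet.
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter1d

spectrum = np.sin(np.linspace(0, 8 * np.pi, 512)) \
    + 0.1 * np.random.default_rng(0).standard_normal(512)
scales = [2, 4, 8, 16]

# (a) current method: first-derivative Gaussian filtering at several scales
conv_fp = np.stack([gaussian_filter1d(spectrum, s, order=1) for s in scales])

# (b) wavelet-based method: continuous wavelet transform with "gaus1"
wave_fp, _ = pywt.cwt(spectrum, scales, "gaus1")
print(conv_fp.shape, wave_fp.shape)
```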

  10. Visual field examination method using virtual reality glasses compared with the Humphrey perimeter.

    PubMed

    Tsapakis, Stylianos; Papaconstantinou, Dimitrios; Diagourtas, Andreas; Droutsas, Konstantinos; Andreanos, Konstantinos; Moschos, Marilita M; Brouzas, Dimitrios

    2017-01-01

    To present a visual field examination method using virtual reality glasses and evaluate the reliability of the method by comparing the results with those of the Humphrey perimeter. Virtual reality glasses, a smartphone with a 6-inch display, and software that implements a fast-threshold 3 dB step staircase algorithm for the central 24° of the visual field (52 points) were used to test 20 eyes of 10 patients, who were tested in random and consecutive order as they appeared in our glaucoma department. The results were compared with those obtained from the same patients using the Humphrey perimeter. A high correlation coefficient (r = 0.808, P < 0.0001) was found between the virtual reality visual field test and the Humphrey perimeter visual field. Visual field examination results using virtual reality glasses correlate highly with those of the Humphrey perimeter, suggesting that the method may be suitable for clinical use.
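    A toy version of a fast-threshold staircase with 3 dB steps, to illustrate the kind of algorithm described; the deterministic observer and threshold value are invented, and a real perimeter adds response randomness, retests and fixation checks.

```python
# Toy staircase threshold search; not the study's software.
def staircase(true_threshold_db, start_db=25, step_db=3, reversals_needed=2):
    level, direction, reversals = start_db, -1, 0
    while reversals < reversals_needed:
        seen = level >= true_threshold_db      # toy observer model
        new_direction = -1 if seen else +1     # dimmer if seen, brighter if not
        if new_direction != direction:
            reversals += 1
        direction = new_direction
        level += direction * step_db
    return level                               # threshold estimate in dB

print(staircase(true_threshold_db=17))
```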

  11. Parametric and experimental analysis using a power flow approach

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.

    1988-01-01

    Having defined and developed a structural power flow approach for the analysis of structure-borne transmission of structural vibrations, the technique is used to perform an analysis of the influence of structural parameters on the transmitted energy. As a base for comparison, the parametric analysis is first performed using a Statistical Energy Analysis approach and the results compared with those obtained using the power flow approach. The advantages of using structural power flow are thus demonstrated by comparing the type of results obtained by the two methods. Additionally, to demonstrate the advantages of using the power flow method and to show that the power flow results represent a direct physical parameter that can be measured on a typical structure, an experimental investigation of structural power flow is also presented. Results are presented for an L-shaped beam for which an analytical solution has already been obtained. Furthermore, the various methods available to measure vibrational power flow are compared to investigate the advantages and disadvantages of each method.

  12. The generalized scattering coefficient method for plane wave scattering in layered structures

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Li, Chao; Wang, Huai-Yu; Zhou, Yun-Song

    2017-02-01

    The generalized scattering coefficient (GSC) method is pedagogically derived and employed to study the scattering of plane waves in homogeneous and inhomogeneous layered structures. The numerical stabilities and accuracies of this method and other commonly used numerical methods are discussed and compared. For homogeneous layered structures, concise scattering formulas with clear physical interpretations and strong numerical stability are obtained by introducing the GSCs. For inhomogeneous layered structures, three numerical methods are employed: the staircase approximation method, the power series expansion method, and the differential equation based on the GSCs. We investigate the accuracies and convergence behaviors of these methods by comparing their predictions to the exact results. The conclusions are as follows. The staircase approximation method has a slow convergence in spite of its simple and intuitive implementation, and a fine stratification within the inhomogeneous layer is required for obtaining accurate results. The expansion method results are sensitive to the expansion order, and the treatment becomes very complicated for relatively complex configurations, which restricts its applicability. By contrast, the GSC-based differential equation possesses a simple implementation while providing fast and accurate results.

  13. Preliminary evaluation of a gel tube agglutination major cross-match method in dogs.

    PubMed

    Villarnovo, Dania; Burton, Shelley A; Horney, Barbara S; MacKenzie, Allan L; Vanderstichel, Raphaël

    2016-09-01

    A major cross-match gel tube test is available for use in dogs yet has not been clinically evaluated. This study compared cross-match results obtained using the gel tube and the standard tube methods for canine samples. Study 1 included 107 canine sample donor-recipient pairings cross-match tested with the RapidVet-H method gel tube test and compared results with the standard tube method. Additionally, 120 pairings using pooled sera containing anti-canine erythrocyte antibody at various concentrations were tested with leftover blood from a hospital population to assess sensitivity and specificity of the gel tube method in comparison with the standard method. The gel tube method had a good relative specificity of 96.1% in detecting lack of agglutination (compatibility) compared to the standard tube method. Agreement between the 2 methods was moderate. Nine of 107 pairings showed agglutination/incompatibility on either test, too few to allow reliable calculation of relative sensitivity. Fifty percent of the gel tube method results were difficult to interpret due to sample spreading in the reaction and/or negative control tubes. The RapidVet-H method agreed with the standard cross-match method on compatible samples, but detected incompatibility in some sample pairs that were compatible with the standard method. Evaluation using larger numbers of incompatible pairings is needed to assess diagnostic utility. The gel tube method results were difficult to categorize due to sample spreading. Weak agglutination reactions or other factors such as centrifuge model may be responsible. © 2016 American Society for Veterinary Clinical Pathology.

  14. Measurement and interpretation of skin prick test results.

    PubMed

    van der Valk, J P M; Gerth van Wijk, R; Hoorn, E; Groenendijk, L; Groenendijk, I M; de Jong, N W

    2015-01-01

    There are several methods to read skin prick test results in type-I allergy testing. A commonly used method is to characterize the wheal size by its 'average diameter'. A more accurate method is to scan the area of the wheal to calculate its actual size. In both methods, skin prick test (SPT) results can be corrected for the histamine sensitivity of the skin by dividing the results of the allergic reaction by the histamine control. The objectives of this study were to compare different techniques of quantifying SPT results, to determine a cut-off value for a positive SPT based on the histamine equivalent prick index (HEP) area, and to study the accuracy of the different SPT methods in predicting cashew nut reactions in double-blind placebo-controlled food challenge (DBPCFC) tests. Data from 172 children with cashew nut sensitisation were used for the analysis. All patients underwent a DBPCFC with cashew nut. For each patient, the average diameter and scanned area of the wheal were recorded, along with the same data for the histamine-induced wheal. The accuracy of four different SPT readings (average diameter, area, HEP-index diameter, HEP-index area) in predicting the outcome of the DBPCFC was compared in a Receiver-Operating Characteristic (ROC) plot. Characterizing the wheal size by the average diameter method is inaccurate compared with the scanning method. A wheal average diameter of 3 mm is generally considered the positive SPT cut-off value, and an equivalent HEP-index area cut-off value of 0.4 was calculated. The four SPT methods yielded comparable areas under the curve (AUC) of 0.84, 0.85, 0.83 and 0.83, respectively, and showed comparable accuracy in predicting cashew nut reactions in the DBPCFC. The 'scanned area method' is theoretically more accurate in determining the wheal area than the 'average diameter method' and is recommended in academic research. A HEP-index area of 0.4 was determined as the cut-off value for a positive SPT. In clinical practice, however, the 'average diameter method' is also useful, because it provides similar accuracy in predicting cashew nut allergic reactions in the DBPCFC. Trial number NTR3572.
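    A hedged sketch of the HEP-index computation and an ROC-based accuracy check; the wheal areas and challenge outcomes below are made-up example values (scikit-learn assumed).

```python
# HEP-index area = allergen wheal area / histamine wheal area; accuracy is
# assessed against challenge outcomes via ROC AUC. All values are invented.
import numpy as np
from sklearn.metrics import roc_auc_score

allergen_area = np.array([12.0, 2.5, 30.0, 0.0, 8.0, 15.0])    # mm^2, hypothetical
histamine_area = np.array([20.0, 18.0, 25.0, 22.0, 16.0, 24.0])
dbpcfc_positive = np.array([1, 0, 1, 0, 1, 1])                 # challenge outcome

hep_index_area = allergen_area / histamine_area
positive_spt = hep_index_area >= 0.4   # cut-off reported in the study
print("AUC:", roc_auc_score(dbpcfc_positive, hep_index_area))
print("positive SPT:", positive_spt)
```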

  15. Signal Analysis Algorithms for Optimized Fitting of Nonresonant Laser Induced Thermal Acoustics Damped Sinusoids

    NASA Technical Reports Server (NTRS)

    Balla, R. Jeffrey; Miller, Corey A.

    2008-01-01

    This study seeks a numerical algorithm which optimizes frequency precision for the damped sinusoids generated by the nonresonant LITA technique. It compares computed frequencies, frequency errors, and fit errors obtained using five primary signal analysis methods. Using variations on different algorithms within each primary method, results from 73 fits are presented. Best results are obtained using an autoregressive method. Compared to previous results using Prony's method, single-shot waveform frequencies are reduced approx. 0.4% and frequency errors are reduced by a factor of approx. 20 at 303 K, to approx. 0.1%. We explore the advantages of high waveform sample rates and the potential for measurements in low-density gases.
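    A generic least-squares fit of a damped sinusoid with SciPy, shown only to make the fitting problem concrete; the study's autoregressive and Prony-type algorithms are more involved than this sketch, and all values here are synthetic.

```python
# Fit a damped sinusoid by nonlinear least squares; time in microseconds,
# frequency in MHz, all data synthetic.
import numpy as np
from scipy.optimize import curve_fit

def damped_sine(t, a, tau, f, phase):
    return a * np.exp(-t / tau) * np.sin(2 * np.pi * f * t + phase)

rng = np.random.default_rng(1)
t = np.linspace(0, 2, 1000)                     # microseconds
y = damped_sine(t, 1.0, 0.5, 5.0, 0.3) + 0.05 * rng.standard_normal(t.size)

popt, _ = curve_fit(damped_sine, t, y, p0=(1.0, 1.0, 4.0, 0.0))
print("fitted frequency: %.4f MHz" % popt[2])
```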

  16. Comparison of two surface temperature measurement methods using thermocouples and an infrared camera

    NASA Astrophysics Data System (ADS)

    Michalski, Dariusz; Strąk, Kinga; Piasecka, Magdalena

    This paper compares two methods applied to measure surface temperatures at an experimental setup designed to analyse flow boiling heat transfer. The temperature measurements were performed in two parallel rectangular minichannels, both 1.7 mm deep, 16 mm wide and 180 mm long. The heating element for the fluid flowing in each minichannel was a thin foil made of Haynes-230. The two measurement methods employed to determine the surface temperature of the foil were: the contact method, which involved mounting thermocouples at several points in one minichannel, and the contactless method applied to the other minichannel, where the results were provided by an infrared camera. Calculations were necessary to make the temperature results comparable. Two sets of measurement data obtained for different values of the heat flux were analysed using basic statistical methods, the method error and the method accuracy; both the experimental error and the method accuracy were taken into account. The comparative analysis showed that although the values and distributions of the surface temperatures obtained with the two methods were similar, both methods had certain limitations.

  17. Smartphone Assessment of Knee Flexion Compared to Radiographic Standards

    PubMed Central

    Dietz, Matthew J.; Sprando, Daniel; Hanselman, Andrew E.; Regier, Michael D.; Frye, Benjamin M.

    2017-01-01

    Purpose Measuring knee range of motion (ROM) is an important assessment for the outcomes of total knee arthroplasty. Recent technological advances have led to the development and use of accelerometer-based smartphone applications to measure knee ROM. The purpose of this study was to develop, standardize, and validate methods of utilizing smartphone accelerometer technology compared to radiographic standards, visual estimation, and goniometric evaluation. Methods Participants used visual estimation, a long-arm goniometer, and a smartphone accelerometer to determine range of motion of a cadaveric lower extremity; these results were compared to radiographs taken at the same angles. Results The optimal smartphone position was determined to be on top of the leg at the distal femur and proximal tibia location. Between methods, it was found that the smartphone and goniometer were comparably reliable in measuring knee flexion (ICC = 0.94; 95% CI: 0.91–0.96). Visual estimation was found to be the least reliable method of measurement. Conclusions The results suggested that the smartphone accelerometer was non-inferior when compared to the other measurement techniques, demonstrated similar deviations from radiographic standards, and did not appear to be influenced by the person performing the measurements or the girth of the extremity. PMID:28179062

  18. Unsupervised change detection in a particular vegetation land cover type using spectral angle mapper

    NASA Astrophysics Data System (ADS)

    Renza, Diego; Martinez, Estibaliz; Molina, Iñigo; Ballesteros L., Dora M.

    2017-04-01

    This paper presents a new unsupervised change detection methodology for multispectral images applied to specific land covers. The proposed method involves comparing each image against a reference spectrum, where the reference spectrum is obtained from the spectral signature of the land cover type to be detected. The method was tested using multispectral images (SPOT5) of the Community of Madrid (Spain) and multispectral images (Quickbird) of an area of Indonesia impacted by the December 26, 2004 tsunami; the tests focused on the detection of changes in vegetation. The image comparison is obtained by applying the Spectral Angle Mapper between the reference spectrum and each multitemporal image. A threshold is then applied to produce a single change image corresponding to the vegetation zones. The results for each multitemporal image are combined through an exclusive-or (XOR) operation that selects vegetation zones that have changed over time. The derived results were compared against a supervised method based on classification with a Support Vector Machine, and the NDVI-differencing and basic Spectral Angle Mapper techniques were selected as unsupervised methods for comparison. The main novelty of the method is the detection of changes in a specific land cover type (vegetation); the natural baselines are therefore methods that aim to detect changes in that same cover type, which is the main reason for selecting the NDVI-based method and the post-classification method (SVM, implemented in a standard software tool). To evaluate the improvement gained by using a reference spectrum vector, the results are also compared with the basic SAM method. In the SPOT5 image, the overall accuracy was 99.36% and the κ index was 90.11%; in the Quickbird image, the overall accuracy was 97.5% and the κ index was 82.16%. The precision of the method is comparable to that of a supervised method, supported by low rates of false positives and false negatives along with high overall accuracy and a high kappa index, while the execution times were comparable to those of unsupervised methods of low computational load.
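    A minimal sketch of the core steps: per-pixel spectral angle against a reference spectrum, thresholding, and XOR of the two dates. The reference signature and threshold are illustrative assumptions, and the images are random stand-ins for real scenes.

```python
# Spectral-angle change detection, reduced to its essentials.
import numpy as np

def spectral_angle(image, reference):
    # image: (rows, cols, bands); reference: (bands,)
    dot = np.tensordot(image, reference, axes=([2], [0]))
    norms = np.linalg.norm(image, axis=2) * np.linalg.norm(reference)
    return np.arccos(np.clip(dot / norms, -1.0, 1.0))

rng = np.random.default_rng(0)
ref = np.array([0.05, 0.08, 0.06, 0.40])     # hypothetical vegetation signature
img_t1 = rng.random((100, 100, 4))           # stand-in multitemporal images
img_t2 = rng.random((100, 100, 4))

veg_t1 = spectral_angle(img_t1, ref) < 0.15  # threshold in radians (assumed)
veg_t2 = spectral_angle(img_t2, ref) < 0.15
change = np.logical_xor(veg_t1, veg_t2)      # vegetation gained or lost
print(change.sum(), "changed pixels")
```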

  19. The effect of sampling techniques used in the multiconfigurational Ehrenfest method

    NASA Astrophysics Data System (ADS)

    Symonds, C.; Kattirtzi, J. A.; Shalashilin, D. V.

    2018-05-01

    In this paper, we compare and contrast basis set sampling techniques recently developed for use in the ab initio multiple cloning method, a direct dynamics extension to the multiconfigurational Ehrenfest approach, used recently for the quantum simulation of ultrafast photochemistry. We demonstrate that simultaneous use of basis set cloning and basis function trains can produce results which are converged to the exact quantum result. To demonstrate this, we employ these sampling methods in simulations of quantum dynamics in the spin boson model with a broad range of parameters and compare the results to accurate benchmarks.

  20. The effect of sampling techniques used in the multiconfigurational Ehrenfest method.

    PubMed

    Symonds, C; Kattirtzi, J A; Shalashilin, D V

    2018-05-14

    In this paper, we compare and contrast basis set sampling techniques recently developed for use in the ab initio multiple cloning method, a direct dynamics extension to the multiconfigurational Ehrenfest approach, used recently for the quantum simulation of ultrafast photochemistry. We demonstrate that simultaneous use of basis set cloning and basis function trains can produce results which are converged to the exact quantum result. To demonstrate this, we employ these sampling methods in simulations of quantum dynamics in the spin boson model with a broad range of parameters and compare the results to accurate benchmarks.

  1. Hip joint center localisation: A biomechanical application to hip arthroplasty population

    PubMed Central

    Bouffard, Vicky; Begon, Mickael; Champagne, Annick; Farhadnia, Payam; Vendittoli, Pascal-André; Lavigne, Martin; Prince, François

    2012-01-01

    AIM: To determine hip joint center (HJC) location in a hip arthroplasty population, comparing predictive and functional approaches with radiographic measurements. METHODS: The distance between the HJC and the mid-pelvis was calculated and compared between the three approaches. The localisation error of the predictive and functional approaches was compared using the radiographic measurements as the reference, and the operated leg was compared to the non-operated leg. RESULTS: A significant difference was found for the distance between the HJC and the mid-pelvis when comparing the predictive and functional methods, with the functional method leading to fewer errors. A statistical difference was also found for the localization error between the predictive and functional methods: the functional method is twice as precise. CONCLUSION: Besides being more individualized, the functional method improves HJC localization and should be used in three-dimensional gait analysis. PMID:22919569
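    A sketch of the computation at the heart of a functional approach: a least-squares fit of a centre of rotation (sphere centre) to marker positions recorded while the joint is moved. The marker data below are simulated, and real pipelines add pelvis-frame alignment not shown here.

```python
# Algebraic sphere fit: |p|^2 = 2 p.c + (r^2 - |c|^2) is linear in the unknowns.
import numpy as np

def fit_sphere_center(points):
    A = np.hstack([2 * points, np.ones((len(points), 1))])
    b = np.sum(points**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]  # fitted centre of rotation

rng = np.random.default_rng(0)
true_center, radius = np.array([0.1, -0.05, 0.9]), 0.25
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
markers = true_center + radius * dirs + rng.normal(0, 1e-3, (200, 3))
print(fit_sphere_center(markers))   # close to true_center
```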

  2. Musical Practices and Methods in Music Lessons: A Comparative Study of Estonian and Finnish General Music Education

    ERIC Educational Resources Information Center

    Sepp, Anu; Ruokonen, Inkeri; Ruismäki, Heikki

    2015-01-01

    This article reveals the results of a comparative study of Estonian and Finnish general music education. The aim was to find out what music teaching practices and approaches/methods were mostly used, what music education perspectives supported those practices. The data were collected using questionnaires and the results of 107 Estonian and 50…

  3. Comparing Performance of Methods to Deal with Differential Attrition in Lottery Based Evaluations

    ERIC Educational Resources Information Center

    Zamarro, Gema; Anderson, Kaitlin; Steele, Jennifer; Miller, Trey

    2016-01-01

    The purpose of this study is to examine the performance of different methods (inverse probability weighting and estimation of informative bounds) for controlling differential attrition by comparing the results of the different methods on two datasets: an original dataset from Portland Public Schools (PPS) subject to high rates of differential…

  4. Performance of FFT methods in local gravity field modelling

    NASA Technical Reports Server (NTRS)

    Forsberg, Rene; Solheim, Dag

    1989-01-01

    Fast Fourier transform (FFT) methods provide a fast and efficient means of processing large amounts of gravity or geoid data in local gravity field modelling. The FFT methods, however, have a number of theoretical and practical limitations, especially the use of the flat-earth approximation and the requirement for gridded data. In spite of this, the methods often yield excellent results in practice when compared to other, more rigorous (and computationally expensive) methods such as least-squares collocation. The good performance of the FFT methods illustrates that the theoretical approximations are offset by the capability of taking into account more data in larger areas, which is especially important for geoid predictions. For best results, good data gridding algorithms are essential; in practice, truncated collocation approaches may be used. For large areas at high latitudes the gridding must be done using suitable map projections such as UTM, to avoid trivial errors caused by the meridian convergence. The FFT methods are compared to ground truth data in New Mexico (xi, eta from delta g), Scandinavia (N from delta g; the geoid fits to 15 cm over 2000 km), and areas of the Atlantic (delta g from satellite altimetry using Wiener filtering). In all cases the FFT methods yield results comparable or superior to those of other methods.
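    A toy planar-approximation geoid computation by FFT convolution, in the spirit of the methods discussed: the geoid height is taken as the convolution of gridded gravity anomalies with a 1/distance kernel. The grid, anomalies, and singularity handling are illustrative assumptions, not the paper's data or implementation.

```python
# Planar Stokes-type convolution: N = (1/(2*pi*gamma)) * sum(dg / distance) * dA,
# evaluated with an FFT-based convolution. All inputs are synthetic.
import numpy as np
from scipy.signal import fftconvolve

gamma = 9.81           # approximate normal gravity, m/s^2
n, dx = 128, 5000.0    # 128 x 128 grid, 5 km spacing
rng = np.random.default_rng(0)
dg = rng.normal(0, 20e-5, (n, n))   # synthetic anomalies (~20 mGal s.d.), m/s^2

x = (np.arange(2 * n) - n) * dx
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y)
kernel = np.where(r > 0, 1.0 / r, 2.0 / dx)   # crude handling of the singularity

N = fftconvolve(dg, kernel, mode="same") * dx * dx / (2 * np.pi * gamma)
print("geoid range: %.3f m" % (N.max() - N.min()))
```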

  5. Methods of experimentation with models and utilization of results

    NASA Technical Reports Server (NTRS)

    Robert,

    1924-01-01

    The present report treats the subject of testing small models in a wind tunnel and of the methods employed for rendering the results constant, accurate and comparable with one another. Detailed experimental results are given.

  6. Comparing and improving reconstruction methods for proxies based on compositional data

    NASA Astrophysics Data System (ADS)

    Nolan, C.; Tipton, J.; Booth, R.; Jackson, S. T.; Hooten, M.

    2017-12-01

    Many types of studies in paleoclimatology and paleoecology involve compositional data. Often, these studies aim to use compositional data to reconstruct an environmental variable of interest; the reconstruction is usually done via the development of a transfer function. Transfer functions have been developed using many different methods, but existing methods tend to relate the compositional data and the reconstruction target in very simple ways, and the results from different methods are rarely compared. Here we seek to address these two issues. First, we introduce a new hierarchical Bayesian multivariate Gaussian process model; this model allows the relationship between each species in the compositional dataset and the environmental variable to be modeled in a way that captures the underlying complexities. We then compare this new method to machine learning techniques and commonly used existing methods. The comparisons are based on reconstructing the water table depth history of Caribou Bog (an ombrotrophic Sphagnum peat bog in Old Town, Maine, USA) from a new 7500-year-long record of testate amoebae assemblages. The resulting reconstructions from different methods diverge in both their means and their uncertainties; in particular, uncertainty tends to be drastically underestimated by some common methods. These results will help to improve inference of water table depth from testate amoebae, and the approach can be applied to test and improve inferences of past environmental conditions from a broad array of paleo-proxies based on compositional data.

  7. Survey: interpolation methods for whole slide image processing.

    PubMed

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T

    2017-02-01

    Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of a very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis, we try to select one interpolation method as the preferred solution. To compare the performance of interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and results of quantification performance on modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is the best to resize whole slide images, so they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
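    A sketch of the survey's scale-down/scale-back evaluation protocol, using PSNR on a synthetic image rather than a stained tissue slide; SciPy's spline orders stand in for the nine methods actually surveyed.

```python
# Downscale, rescale back with the same interpolator, compare to the original.
import numpy as np
from scipy.ndimage import zoom

def psnr(a, b):
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(1.0 / mse)

rng = np.random.default_rng(0)
image = np.clip(zoom(rng.random((64, 64)), 4, order=3), 0, 1)  # smooth 256x256 test image

for order, name in [(0, "nearest"), (1, "bilinear"), (3, "bicubic")]:
    down = zoom(image, 0.25, order=order)
    up = zoom(down, image.shape[0] / down.shape[0], order=order)
    up = up[: image.shape[0], : image.shape[1]]
    print(f"{name:9s} PSNR: {psnr(image, up):.1f} dB")
```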

  8. Comparison of methods used to estimate conventional undiscovered petroleum resources: World examples

    USGS Publications Warehouse

    Ahlbrandt, T.S.; Klett, T.R.

    2005-01-01

    Various methods for assessing undiscovered oil, natural gas, and natural gas liquid resources were compared in support of the USGS World Petroleum Assessment 2000. Discovery process, linear fractal, parabolic fractal, engineering estimates, PETRIMES, Delphi, and the USGS 2000 methods were compared. Three comparisons of these methods were made in: (1) the Neuquen Basin province, Argentina (different assessors, same input data); (2) provinces in North Africa, Oman, and Yemen (same assessors, different methods); and (3) the Arabian Peninsula, Arabian (Persian) Gulf, and North Sea (different assessors, different methods). A fourth comparison (same assessors, same assessment methods but different geologic models), between results from structural and stratigraphic assessment units in the North Sea, used only the USGS 2000 method, and hence compared the type of assessment unit rather than the method. In comparing methods, differences arise from inherent differences in assumptions regarding: (1) the underlying distribution of the parent field population (all fields, discovered and undiscovered), (2) the population of fields being estimated; that is, the entire parent distribution or the undiscovered resource distribution, (3) inclusion or exclusion of large outlier fields; (4) inclusion or exclusion of field (reserve) growth, (5) deterministic or probabilistic models, (6) data requirements, and (7) scale and time frame of the assessment. Discovery process, Delphi subjective consensus, and the USGS 2000 method yield comparable results because similar procedures are employed. In mature areas such as the Neuquen Basin province in Argentina, the linear and parabolic fractal and engineering methods were conservative compared to the other five methods and relative to new reserve additions there since 1995. The PETRIMES method gave the most optimistic estimates in the Neuquen Basin. In less mature areas, the linear fractal method yielded larger estimates relative to other methods. A geologically based model, such as one using the total petroleum system approach, is preferred in that it combines the elements of petroleum source, reservoir, trap and seal and the tectono-stratigraphic history of basin evolution with petroleum resource potential. Care must be taken to demonstrate that homogeneous populations in terms of geology, geologic risk, exploration, and discovery processes are used in the assessment process. The USGS 2000 method (7th Approximation Model, EMC computational program) is robust; that is, it can be used in both mature and immature areas, and provides comparable results when using different geologic models (e.g. stratigraphic or structural) with differing amounts of subdivisions, assessment units, within the total petroleum system. © 2005 International Association for Mathematical Geology.

  9. An Evaluation of Kernel Equating: Parallel Equating with Classical Methods in the SAT Subject Tests[TM] Program. Research Report. ETS RR-09-06

    ERIC Educational Resources Information Center

    Grant, Mary C.; Zhang, Lilly; Damiano, Michele

    2009-01-01

    This study investigated kernel equating methods by comparing these methods to operational equatings for two tests in the SAT Subject Tests[TM] program. GENASYS (ETS, 2007) was used for all equating methods and scaled score kernel equating results were compared to Tucker, Levine observed score, chained linear, and chained equipercentile equating…

  10. Effects of test method and participant musical training on preference ratings of stimuli with different reverberation times.

    PubMed

    Lawless, Martin S; Vigeant, Michelle C

    2017-10-01

    Selecting an appropriate listening test design for concert hall research depends on several factors, including listening test method and participant critical-listening experience. Although expert listeners afford more reliable data, their perceptions may not be broadly representative. The present paper contains two studies that examined the validity and reliability of the data obtained from two listening test methods, a successive and a comparative method, and two types of participants, musicians and non-musicians. Participants rated their overall preference of auralizations generated from eight concert hall conditions with a range of reverberation times (0.0-7.2 s). Study 1, with 34 participants, assessed the two methods. The comparative method yielded similar results and reliability as the successive method. Additionally, the comparative method was rated as less difficult and more preferable. For study 2, an additional 37 participants rated the stimuli using the comparative method only. An analysis of variance of the responses from both studies revealed that musicians are better than non-musicians at discerning their preferences across stimuli. This result was confirmed with a k-means clustering analysis on the entire dataset that revealed five preference groups. Four groups exhibited clear preferences to the stimuli, while the fifth group, predominantly comprising non-musicians, demonstrated no clear preference.

  11. Laser notching ceramics for reliable fracture toughness testing

    DOE PAGES

    Barth, Holly D.; Elmer, John W.; Freeman, Dennis C.; ...

    2015-09-19

    A new method for notching ceramics was developed using a picosecond laser for fracture toughness testing of alumina samples. The test geometry incorporated a single-edge-V-notch that was notched using picosecond laser micromachining. This method has been used in the past for cutting ceramics, and is known to remove material with little to no thermal effect on the surrounding material matrix. This study showed that laser-assisted machining for fracture toughness testing of ceramics was reliable, quick, and cost effective. In order to assess the laser-notched single-edge-V-notch beam method, fracture toughness results were compared to results from other more traditional methods, specifically the surface-crack-in-flexure and chevron-notch bend tests. Lastly, the results showed that picosecond laser notching produced precise notches in post-failure measurements, and that the measured fracture toughness results showed improved consistency compared to traditional fracture toughness methods.

  12. Comparison of Response Surface Construction Methods for Derivative Estimation Using Moving Least Squares, Kriging and Radial Basis Functions

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Thiagarajan

    2005-01-01

    Response surface construction methods using Moving Least Squares (MLS), Kriging and Radial Basis Functions (RBF) are compared with the Global Least Squares (GLS) method in three numerical examples for derivative generation capability. Also, a new Interpolating Moving Least Squares (IMLS) method adopted from the meshless method is presented. It is found that the response surface construction methods using Kriging and RBF interpolation yield more accurate results than the MLS and GLS methods. Several computational aspects of the response surface construction methods are also discussed.
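    A hedged sketch of derivative estimation from an RBF response surface built over scattered samples, checked against the analytic derivative of a known test function; it assumes SciPy 1.7+ for RBFInterpolator, and the test function and sample design are invented.

```python
# Build an RBF surrogate, then estimate a derivative from it by central
# differences; compare to the exact derivative of the known test function.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=(40, 1))      # scattered design points
y = np.sin(x[:, 0]) * x[:, 0] ** 2        # test response: x^2 * sin(x)

surface = RBFInterpolator(x, y, kernel="thin_plate_spline")

x0, h = 0.7, 1e-4
approx = (surface([[x0 + h]]) - surface([[x0 - h]]))[0] / (2 * h)
exact = np.cos(x0) * x0**2 + 2 * x0 * np.sin(x0)
print(f"RBF derivative {approx:.4f} vs exact {exact:.4f}")
```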

  13. Properties of natural rubber/attapulgite composites prepared by latex compounding method: Effect of filler loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muttalib, Siti Nadzirah Abdul, E-mail: sitinadzirah.amn@gmail.com; Othman, Nadras, E-mail: srnadras@usm.my; Ismail, Hanafi, E-mail: ihanafi@usm.my

    This paper reports on the effect of filler loading on the properties of natural rubber (NR)/attapulgite (ATP) composites. The NR/ATP composites were prepared by a latex compounding method to form a masterbatch, which was subsequently added to the NR through a melt mixing process. The vulcanized NR/ATP composites were subjected to mechanical, swelling and morphological tests, and all the results were compared with NR/ATP composites prepared by a conventional system. The composites from the masterbatch method showed better results than composites prepared by the conventional method, with higher tensile properties, elongation at break and tear strength. Images captured through scanning electron microscopy revealed the source of the improvement in tensile strength: it can be seen clearly that the masterbatch NR/ATP composites have better filler dispersion than the conventional-method NR/ATP composites.

  14. Subtask 4.27 - Evaluation of the Multielement Sorbent Trap (MEST) Method at an Illinois Coal-Fired Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavlish, John; Thompson, Jeffrey; Dunham, Grant

    2014-09-30

    Owners of fossil fuel-fired power plants face the challenge of measuring stack emissions of trace metals and acid gases at much lower levels than in the past as a result of increasingly stringent regulations. In the United States, the current reference methods for trace metals and halogens are wet-chemistry methods, U.S. Environmental Protection Agency (EPA) Methods 29 and 26 or 26A, respectively. As a possible alternative to the EPA methods, the Energy & Environmental Research Center (EERC) has developed a novel multielement sorbent trap (MEST) method to be used to sample for trace elements and/or halogens. Sorbent traps offer a potentially advantageous alternative to the existing sampling methods, as they are simpler to use and do not require expensive, breakable glassware or handling and shipping of hazardous reagents. Field tests comparing two sorbent trap applications (MEST-H for hydrochloric acid and MEST-M for trace metals) with the reference methods were conducted at two power plant units fueled by Illinois Basin bituminous coal. For hydrochloric acid, MEST measured concentrations comparable to EPA Method 26A at two power plant units, one with and one without a wet flue gas desulfurization scrubber. MEST-H provided lower detection limits for hydrochloric acid than the reference method. Results from a dry stack unit had better comparability between methods than results from a wet stack unit. This result was attributed to the very low emissions in the latter unit, as well as the difficulty of sampling in a saturated flue gas. Based on these results, the MEST-H sorbent traps appear to be a good candidate to serve as an alternative to Method 26A (or 26). For metals, the MEST trap gave lower detection limits compared to EPA Method 29 and produced comparable data for antimony, arsenic, beryllium, cobalt, manganese, selenium, and mercury for most test runs. However, the sorbent material produced elevated blanks for cadmium, nickel, lead, and chromium at levels that would interfere with accurate measurement at U.S. hazardous air pollutant emission limits for existing coal-fired power plant units. Longer sampling times employed during this test program did appear to improve comparative results for these metals. Although the sorbent contribution to the sample was reduced through improved trap design, additional research is still needed to explore lower-background materials before the MEST-M application can be considered as a potential alternative method for all of the trace metals. This subtask was funded through the EERC–U.S. Department of Energy Joint Program on Research and Development for Fossil Energy-Related Resources Cooperative Agreement No. DE-FC26-08NT43291. Nonfederal funding was provided by the Electric Power Research Institute, the Illinois Clean Coal Institute, Southern Illinois Power Company, and the Center for Air Toxic Metals Affiliates Program.

  15. Method of Curved Models and Its Application to the Study of Curvilinear Flight of Airships. Part II

    NASA Technical Reports Server (NTRS)

    Gourjienko, G A

    1937-01-01

    This report compares the results obtained by the aid of curved models with the results of tests made by the method of damped oscillations, and with flight tests. Consequently we shall be able to judge which method of testing in the tunnel produces results that are in closer agreement with flight test results.

  16. Comparison of Marine Spatial Planning Methods in Madagascar Demonstrates Value of Alternative Approaches

    PubMed Central

    Allnutt, Thomas F.; McClanahan, Timothy R.; Andréfouët, Serge; Baker, Merrill; Lagabrielle, Erwann; McClennen, Caleb; Rakotomanjaka, Andry J. M.; Tianarisoa, Tantely F.; Watson, Reg; Kremen, Claire

    2012-01-01

    The Government of Madagascar plans to increase marine protected area coverage by over one million hectares. To assist this process, we compare four methods for marine spatial planning of Madagascar's west coast. Input data for each method was drawn from the same variables: fishing pressure, exposure to climate change, and biodiversity (habitats, species distributions, biological richness, and biodiversity value). The first method compares visual color classifications of primary variables, the second uses binary combinations of these variables to produce a categorical classification of management actions, the third is a target-based optimization using Marxan, and the fourth is conservation ranking with Zonation. We present results from each method, and compare the latter three approaches for spatial coverage, biodiversity representation, fishing cost and persistence probability. All results included large areas in the north, central, and southern parts of western Madagascar. Achieving 30% representation targets with Marxan required twice the fish catch loss than the categorical method. The categorical classification and Zonation do not consider targets for conservation features. However, when we reduced Marxan targets to 16.3%, matching the representation level of the “strict protection” class of the categorical result, the methods show similar catch losses. The management category portfolio has complete coverage, and presents several management recommendations including strict protection. Zonation produces rapid conservation rankings across large, diverse datasets. Marxan is useful for identifying strict protected areas that meet representation targets, and minimize exposure probabilities for conservation features at low economic cost. We show that methods based on Zonation and a simple combination of variables can produce results comparable to Marxan for species representation and catch losses, demonstrating the value of comparing alternative approaches during initial stages of the planning process. Choosing an appropriate approach ultimately depends on scientific and political factors including representation targets, likelihood of adoption, and persistence goals. PMID:22359534

  17. Comparison of marine spatial planning methods in Madagascar demonstrates value of alternative approaches.

    PubMed

    Allnutt, Thomas F; McClanahan, Timothy R; Andréfouët, Serge; Baker, Merrill; Lagabrielle, Erwann; McClennen, Caleb; Rakotomanjaka, Andry J M; Tianarisoa, Tantely F; Watson, Reg; Kremen, Claire

    2012-01-01

    The Government of Madagascar plans to increase marine protected area coverage by over one million hectares. To assist this process, we compare four methods for marine spatial planning of Madagascar's west coast. Input data for each method was drawn from the same variables: fishing pressure, exposure to climate change, and biodiversity (habitats, species distributions, biological richness, and biodiversity value). The first method compares visual color classifications of primary variables, the second uses binary combinations of these variables to produce a categorical classification of management actions, the third is a target-based optimization using Marxan, and the fourth is conservation ranking with Zonation. We present results from each method, and compare the latter three approaches for spatial coverage, biodiversity representation, fishing cost and persistence probability. All results included large areas in the north, central, and southern parts of western Madagascar. Achieving 30% representation targets with Marxan required twice the fish catch loss than the categorical method. The categorical classification and Zonation do not consider targets for conservation features. However, when we reduced Marxan targets to 16.3%, matching the representation level of the "strict protection" class of the categorical result, the methods show similar catch losses. The management category portfolio has complete coverage, and presents several management recommendations including strict protection. Zonation produces rapid conservation rankings across large, diverse datasets. Marxan is useful for identifying strict protected areas that meet representation targets, and minimize exposure probabilities for conservation features at low economic cost. We show that methods based on Zonation and a simple combination of variables can produce results comparable to Marxan for species representation and catch losses, demonstrating the value of comparing alternative approaches during initial stages of the planning process. Choosing an appropriate approach ultimately depends on scientific and political factors including representation targets, likelihood of adoption, and persistence goals.

  18. Comparison of flow cytometry, fluorescence microscopy and spectrofluorometry for analysis of gene electrotransfer efficiency.

    PubMed

    Marjanovič, Igor; Kandušer, Maša; Miklavčič, Damijan; Keber, Mateja Manček; Pavlin, Mojca

    2014-12-01

    In this study, we compared three different methods used for quantification of gene electrotransfer efficiency: fluorescence microscopy, flow cytometry and spectrofluorometry. We used CHO and B16 cells in suspension and a plasmid coding for GFP. The aim of this study was to compare and analyse the results obtained by fluorescence microscopy, flow cytometry and spectrofluorometry, and in addition to analyse the applicability of spectrofluorometry for quantifying gene electrotransfer in cells in suspension. Our results show that all three methods detected a similar critical electric field strength, around 0.55 kV/cm, for both cell lines. Moreover, results obtained on CHO cells showed that the total fluorescence intensity and the percentage of transfection exhibit a similar increase in response to increased electric field strength for all three methods. For B16 cells, there was a good correlation at low electric field strengths, but at high field strengths the flow cytometer results deviated from those obtained by fluorescence microscope and spectrofluorometer. In summary, all three methods detected similar critical electric field strengths, and high correlations between results were obtained except for B16 cells at high electric field strengths, although flow cytometry measures higher values of percentage transfection than microscopy. Furthermore, we have demonstrated that spectrofluorometry can be used as a simple and consistent method to determine gene electrotransfer efficiency in cells in suspension.

  19. Evaluation of hierarchical agglomerative cluster analysis methods for discrimination of primary biological aerosol

    NASA Astrophysics Data System (ADS)

    Crawford, I.; Ruske, S.; Topping, D. O.; Gallagher, M. W.

    2015-11-01

    In this paper we present improved methods for discriminating and quantifying primary biological aerosol particles (PBAPs) by applying hierarchical agglomerative cluster analysis to multi-parameter ultraviolet-light-induced fluorescence (UV-LIF) spectrometer data. The methods employed in this study can be applied to data sets in excess of 1 × 10^6 points on a desktop computer, allowing each fluorescent particle in a data set to be explicitly clustered. This reduces the potential for misattribution found in the subsampling and comparative attribution methods used in previous approaches, improving our capacity to discriminate and quantify PBAP meta-classes. We evaluate the performance of several hierarchical agglomerative cluster analysis linkages and data normalisation methods using laboratory samples of known particle types and an ambient data set. Fluorescent and non-fluorescent polystyrene latex spheres were sampled with a Wideband Integrated Bioaerosol Spectrometer (WIBS-4), where the optical size, asymmetry factor and fluorescence measurements were used as inputs to the analysis package. It was found that the Ward linkage with z-score or range normalisation performed best, correctly attributing 98 and 98.1 % of the data points respectively. The best-performing methods were applied to the BEACHON-RoMBAS (Bio-hydro-atmosphere interactions of Energy, Aerosols, Carbon, H2O, Organics and Nitrogen-Rocky Mountain Biogenic Aerosol Study) ambient data set, where it was found that the z-score and range normalisation methods yield similar results, with each method producing clusters representative of fungal spores and bacterial aerosol, consistent with previous results. The z-score result was compared to clusters generated with previous approaches (WIBS AnalysiS Program, WASP), where we observe that the subsampling and comparative attribution method employed by WASP results in the overestimation of the fungal spore concentration by a factor of 1.5 and the underestimation of the bacterial aerosol concentration by a factor of 5. We suggest that this is likely due to errors arising from misattribution caused by poor centroid definition and failure to assign particles to a cluster under the subsampling and comparative attribution method employed by WASP. The methods used here allow the entire fluorescent particle population to be analysed, yielding an explicit cluster attribution for each particle and improving cluster centroid definition and our capacity to discriminate and quantify PBAP meta-classes compared to previous approaches.
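    A minimal sketch of the pipeline evaluated here, z-score normalisation followed by Ward-linkage hierarchical agglomerative clustering, run on random stand-in data rather than WIBS fluorescence measurements.

```python
# z-score normalisation + Ward-linkage agglomerative clustering (SciPy).
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.stats import zscore

rng = np.random.default_rng(0)
# Stand-in particle features: size, asymmetry factor, three fluorescence channels.
data = np.vstack([
    rng.normal(loc=m, scale=0.3, size=(200, 5))
    for m in (0.0, 2.0, 4.0)
])

z = zscore(data, axis=0)                 # per-feature z-score normalisation
tree = linkage(z, method="ward")         # Ward linkage
labels = fcluster(tree, t=3, criterion="maxclust")
print(np.bincount(labels)[1:])           # cluster sizes
```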

  20. Methods for comparative evaluation of propulsion system designs for supersonic aircraft

    NASA Technical Reports Server (NTRS)

    Tyson, R. M.; Mairs, R. Y.; Halferty, F. D., Jr.; Moore, B. E.; Chaloff, D.; Knudsen, A. W.

    1976-01-01

    The propulsion system comparative evaluation study was conducted to define a rapid, approximate method for evaluating the effects of propulsion system changes for an advanced supersonic cruise airplane, and to verify the approximate method by comparing its mission performance results with those from a more detailed analysis. A table look-up computer program was developed to determine nacelle drag increments for a range of parametric nacelle shapes and sizes, and aircraft sensitivities to propulsion parameters were defined. Nacelle shapes, installed weights, and installed performance were determined for four study engines selected from the NASA supersonic cruise aircraft research (SCAR) engine studies program. Both the rapid evaluation method (using sensitivities) and traditional preliminary design methods were then used to assess the four engines. The method was found to compare well with the more detailed analyses.

  1. Analysis of a virtual memory model for maintaining database views

    NASA Technical Reports Server (NTRS)

    Kinsley, Kathryn C.; Hughes, Charles E.

    1992-01-01

    This paper presents an analytical model for predicting the performance of a new support strategy for database views. This strategy, called the virtual method, is compared with traditional methods for supporting views. The analytical model's predictions of improved performance by the virtual method are then validated by comparing these results with those achieved in an experimental implementation.

  2. Comparative evaluation of power factor improvement techniques for squirrel cage induction motors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spee, R.; Wallace, A.K.

    1992-04-01

    This paper describes the results obtained from a series of tests of relatively simple methods of improving the power factor of squirrel-cage induction motors. The methods, which are evaluated under controlled laboratory conditions for a 10-hp, high-efficiency motor, include terminal voltage reduction; terminal static capacitors; and a "floating" winding with static capacitors. The test results are compared with equivalent circuit model predictions that are then used to identify optimum conditions for each of the power factor improvement techniques compared with the basic induction motor. Finally, the relative economic value, and the implications of component failures, of the three methods are discussed.

  3. Comparing interrater reliability between eye examination and eye self-examination

    PubMed Central

    de Lima, Maria Alzete; Pagliuca, Lorita Marlena Freitag; do Nascimento, Jennara Cândido; Caetano, Joselany Áfio

    2017-01-01

    Objective: to compare interrater reliability between two eye assessment methods. Method: quasi-experimental study conducted with 324 college students in a public university, including eye self-examination and eye assessment performed by the researchers. The kappa coefficient was used to verify agreement. Results: interrater reliability coefficients ranged from 0.85 to 0.95, with statistical significance at 0.05. The exams checking near acuity and peripheral vision presented a reasonable kappa (>0.2); the remaining coefficients were higher, ranging from very to totally reliable. Conclusion: comparatively, the results of both methods were similar. The virtual manual on eye self-examination can be used to screen for eye conditions. PMID:29069269
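
    Agreement between two raters in studies like this is typically quantified with Cohen's kappa; the sketch below computes it from first principles on made-up binary findings, not the study's data.

```python
# Cohen's kappa: observed agreement corrected for chance agreement.
import numpy as np

def cohens_kappa(a, b):
    a, b = np.asarray(a), np.asarray(b)
    categories = np.union1d(a, b)
    p_observed = np.mean(a == b)
    p_chance = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

examiner = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # hypothetical examiner findings
self_exam = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]  # hypothetical self-exam findings
print(round(cohens_kappa(examiner, self_exam), 2))
```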

  4. Plant species classification using flower images—A comparative study of local feature representations

    PubMed Central

    Seeland, Marco; Rzanny, Michael; Alaqraa, Nedal; Wäldchen, Jana; Mäder, Patrick

    2017-01-01

    Steady improvements of image description methods induced a growing interest in image-based plant species classification, a task vital to the study of biodiversity and ecological sensitivity. Various techniques have been proposed for general object classification over the past years and several of them have already been studied for plant species classification. However, results of these studies are selective in the evaluated steps of a classification pipeline, in the utilized datasets for evaluation, and in the compared baseline methods. No study is available that evaluates the main competing methods for building an image representation on the same datasets, allowing for generalized findings regarding flower-based plant species classification. The aim of this paper is to comparatively evaluate methods, method combinations, and their parameters with respect to classification accuracy. The investigated methods span detection, extraction, fusion, pooling, and encoding of local features for quantifying shape and color information of flower images. We selected the flower image datasets Oxford Flower 17 and Oxford Flower 102 as well as our own Jena Flower 30 dataset for our experiments. Findings show large differences among the various studied techniques and that their wisely chosen orchestration allows for high accuracies in species classification. We further found that true local feature detectors in combination with advanced encoding methods yield higher classification accuracy at lower computational costs compared to commonly used dense sampling and spatial pooling methods. Color was found to be an indispensable feature for high classification results, especially while preserving spatial correspondence to gray-level features. As a result, our study provides a comprehensive overview of competing techniques and the implications of their main parameters for flower-based plant species classification. PMID:28234999

  5. Comparative Measurements of Radon Concentration in Soil Using Passive and Active Methods in High Level Natural Radiation Area (HLNRA) of Ramsar

    PubMed Central

    Amanat, B; Kardan, M R; Faghihi, R; Hosseini Pooya, S M

    2013-01-01

    Background: Radon and its daughters are amongst the most important sources of natural exposure in the world. Soil is one of the significant sources of radon/thoron due to both radium and thorium, so the emanated thoron may introduce additional uncertainty into radon measurements. Recently, a diffusion chamber has been designed and optimized for passive discriminative measurements of radon/thoron concentrations in soil. Objective: In order to evaluate the capability of the passive method, comparative measurements (with active methods) have been performed. Method: The method is based upon measurements by a diffusion chamber, including two Lexan polycarbonate SSNTDs, which can discriminate the radon/thoron emanated from the soil by a delay method. The comparative measurements were carried out at ten selected points of the HLNRA of Ramsar in Iran. The linear regression and correlation between the results of the two methods were studied. Results: The radon concentrations ranged from 12.1 to 165 kBq/m3. The correlation between the results of the active and passive methods was 0.99. The thoron concentrations at the same points ranged from 1.9 to 29.5 kBq/m3. Conclusion: The sensitivity, as well as the strong correlation with active measurements, shows that the new low-cost passive method is appropriate for accurate seasonal measurements of radon and thoron concentrations in soil. PMID:25505760

  6. Three Dimensional Aerodynamic Analysis of a High-Lift Transport Configuration

    NASA Technical Reports Server (NTRS)

    Dodbele, Simha S.

    1993-01-01

    Two computational methods, a surface panel method and an Euler method employing unstructured grid methodology, were used to analyze a subsonic transport aircraft in cruise and high-lift conditions. The computational results were compared with two separate sets of flight data obtained for the cruise and high-lift configurations. For the cruise configuration, the surface pressures obtained by the panel method and the Euler method agreed fairly well with results from flight test. However, for the high-lift configuration considerable differences were observed when the computational surface pressures were compared with the results from high-lift flight test. On the lower surface of all the elements with the exception of the slat, both the panel and Euler methods predicted pressures which were in good agreement with flight data. On the upper surface of all the elements the panel method predicted slightly higher suction compared to the Euler method. On the upper surface of the slat, pressure coefficients obtained by both the Euler and panel methods did not agree with the results of the flight tests. A sensitivity study of the upward deflection of the slat from the 40 deg. flap setting suggested that the differences in the slat deflection between the computational model and the flight configuration could be one of the sources of this discrepancy. The computation time for the implicit version of the Euler code was about 1/3 the time taken by the explicit version though the implicit code required 3 times the memory taken by the explicit version.

  7. Evaluation of Techniques for Measuring Microbial Hazards in Bathing Waters: A Comparative Study

    PubMed Central

    Schang, Christelle; Henry, Rebekah; Kolotelo, Peter A.; Prosser, Toby; Crosbie, Nick; Grant, Trish; Cottam, Darren; O’Brien, Peter; Coutts, Scott; Deletic, Ana; McCarthy, David T.

    2016-01-01

    Recreational water quality is commonly monitored by means of culture based faecal indicator organism (FIOs) assays. However, these methods are costly and time-consuming; a serious disadvantage when combined with issues such as non-specificity and user bias. New culture and molecular methods have been developed to counter these drawbacks. This study compared industry-standard IDEXX methods (Colilert and Enterolert) with three alternative approaches: 1) TECTA™ system for E. coli and enterococci; 2) US EPA’s 1611 method (qPCR based enterococci enumeration); and 3) Next Generation Sequencing (NGS). Water samples (233) were collected from riverine, estuarine and marine environments over the 2014–2015 summer period and analysed by the four methods. The results demonstrated that E. coli and coliform densities, inferred by the IDEXX system, correlated strongly with the TECTA™ system. The TECTA™ system had further advantages in faster turnaround times (~12 hrs from sample receipt to result compared to 24 hrs); no staff time required for interpretation and less user bias (results are automatically calculated, compared to subjective colorimetric decisions). The US EPA Method 1611 qPCR method also showed significant correlation with the IDEXX enterococci method; but had significant disadvantages such as highly technical analysis and higher operational costs (330% of IDEXX). The NGS method demonstrated statistically significant correlations between IDEXX and the proportions of sequences belonging to FIOs, Enterobacteriaceae, and Enterococcaceae. While costs (3,000% of IDEXX) and analysis time (300% of IDEXX) were found to be significant drawbacks of NGS, rapid technological advances in this field will soon see it widely adopted. PMID:27213772

  8. Evaluation of methods for calculating maximum allowable standing height in amputees competing in Paralympic athletics.

    PubMed

    Connick, M J; Beckman, E; Ibusuki, T; Malone, L; Tweedy, S M

    2016-11-01

    The International Paralympic Committee has a maximum allowable standing height (MASH) rule that limits stature to a pre-trauma estimation. The MASH rule reduces the probability that bilateral lower limb amputees use disproportionately long prostheses in competition. Although there are several methods for estimating stature, the validity of these methods has not been compared. To identify the most appropriate method for the MASH rule, this study aimed to compare the criterion validity of estimations resulting from the current method, the Contini method, and four Canda methods (Canda-1, Canda-2, Canda-3, and Canda-4). Stature, ulna length, demispan, sitting height, thigh length, upper arm length, and forearm length measurements in 31 males and 30 females were used to calculate the respective estimation for each method. Results showed that Canda-1 (based on four anthropometric variables) produced the smallest error and best fitted the data in males and females. The current method was associated with the largest error of the methods tested because it increasingly overestimated height in people of smaller stature. The results suggest that the set of Canda equations provides a more valid MASH estimation in people with a range of upper limb and bilateral lower limb amputations compared with the current method. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  9. Robust range estimation with a monocular camera for vision-based forward collision warning system.

    PubMed

    Park, Ki-Yeong; Hwang, Sun-Young

    2014-01-01

    We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To solve the problem of variation of camera pitch angle due to vehicle motion and road inclination, the proposed method estimates virtual horizon from size and position of vehicles in captured image at run-time. The proposed method provides robust results even when road inclination varies continuously on hilly roads or lane markings are not seen on crowded roads. For experiments, a vision-based forward collision warning system has been implemented and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with horizons manually identified, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results both in highway and in urban traffic environments.

  10. Effect of joint spacing and joint dip on the stress distribution around tunnels using different numerical methods

    NASA Astrophysics Data System (ADS)

    Nikadat, Nooraddin; Fatehi Marji, Mohammad; Rahmannejad, Reza; Yarahmadi Bafghi, Alireza

    2016-11-01

    Different conditions, including the geometry (spacing and orientation) of joints in the surrounding rock mass, may affect the stability of tunnels. In this study, by comparing the results obtained by three numerical methods, i.e. the finite element method (Phase2), the discrete element method (UDEC) and an indirect boundary element method (TFSDDM), the effects of joint spacing and joint dip on the stress distribution around rock tunnels are studied numerically. These comparisons indicate the validity of the stress analyses around circular rock tunnels. The analyses also reveal that for a semi-continuous environment the boundary element method gives more accurate results than the finite element and distinct element methods. In the indirect boundary element method, the displacements due to joints of different spacings and dips are estimated using displacement discontinuity (DD) formulations, and the total stress distribution around the tunnel is obtained using fictitious stress (FS) formulations.

  11. [A new non-contact method based on relative spectral intensity for determining junction temperature of LED].

    PubMed

    Qiu, Xi-Zhen; Zhang, Fang-Hui

    2013-01-01

    A high-power white LED was prepared from high-thermal-conductivity aluminum, blue chips and YAG phosphor. By studying the spectra at different junction temperatures, we found that the radiation spectrum of the white LED has a minimum at 485 nm, and that the radiation intensity at this wavelength shows a good linear relationship with the junction temperature. The LED junction temperature was then determined from the formula relating relative spectral intensity to junction temperature. Results from this radiation intensity method were compared with the forward voltage method and the spectral method. The experiments reveal that the junction temperature measured by this method differed by no more than 2 degrees C from the forward voltage method. The method retains the accuracy of the forward voltage method while avoiding the small spectral shift that limits the spectral method. It also has the advantages of being practical, efficient, intuitive, non-contact, and non-destructive to the lamp structure.
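
    The linear intensity-temperature relationship described here lends itself to a one-line calibration fit; the calibration pairs below are invented, since the abstract gives no coefficients.

```python
# Linear calibration: junction temperature vs. relative intensity at 485 nm.
import numpy as np

T_junction = np.array([25.0, 45.0, 65.0, 85.0, 105.0])  # degrees C (hypothetical)
I_485nm = np.array([0.92, 0.84, 0.77, 0.69, 0.61])      # relative intensity (hypothetical)

slope, intercept = np.polyfit(I_485nm, T_junction, 1)   # Tj = slope * I + intercept
print(slope * 0.73 + intercept)  # junction temperature from a new intensity reading
```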

  12. Corrosion of metals in wood: comparing the results of a rapid test method with long-term exposure tests across six wood treatments

    Treesearch

    Samuel L. Zelinka; Donald S. Stone

    2011-01-01

    This paper compares two methods of measuring the corrosion of steel and galvanized steel in wood: a long-term exposure test in solid wood and a rapid test method where fasteners are electrochemically polarized in extracts of wood treated with six different treatments. For traditional wood preservatives, the electrochemical extract method correlates with solid wood...

  13. Implementation of density functional theory method on object-oriented programming (C++) to calculate energy band structure using the projector augmented wave (PAW)

    NASA Astrophysics Data System (ADS)

    Alfianto, E.; Rusydi, F.; Aisyah, N. D.; Fadilla, R. N.; Dipojono, H. K.; Martoprawiro, M. A.

    2017-05-01

    This study implemented the DFT method in the C++ programming language following object-oriented programming rules (expressive software). The use of expressive software yields a simple programming structure that closely mirrors the mathematical formulation, which will make it easier for the scientific community to develop the software further. We validated the software by calculating the energy band structures of silicon, carbon, and germanium in the FCC structure using the projector augmented wave (PAW) method, and compared the results with Quantum Espresso calculations. This study shows that the accuracy of the software is 85% compared to Quantum Espresso.

  14. A flexible statistical model for alignment of label-free proteomics data – incorporating ion mobility and product ion information

    PubMed Central

    2013-01-01

    Background The goal of many proteomics experiments is to determine the abundance of proteins in biological samples, and the variation thereof in various physiological conditions. High-throughput quantitative proteomics, specifically label-free LC-MS/MS, allows rapid measurement of thousands of proteins, enabling large-scale studies of various biological systems. Prior to analyzing these information-rich datasets, raw data must undergo several computational processing steps. We present a method to address one of the essential steps in proteomics data processing - the matching of peptide measurements across samples. Results We describe a novel method for label-free proteomics data alignment with the ability to incorporate previously unused aspects of the data, particularly ion mobility drift times and product ion information. We compare the results of our alignment method to PEPPeR and OpenMS, and compare alignment accuracy achieved by different versions of our method utilizing various data characteristics. Our method results in increased match recall rates and similar or improved mismatch rates compared to PEPPeR and OpenMS feature-based alignment. We also show that the inclusion of drift time and product ion information results in higher recall rates and more confident matches, without increases in error rates. Conclusions Based on the results presented here, we argue that the incorporation of ion mobility drift time and product ion information are worthy pursuits. Alignment methods should be flexible enough to utilize all available data, particularly with recent advancements in experimental separation methods. PMID:24341404

  15. Quantitation of TGF-beta1 mRNA in porcine mesangial cells by comparative kinetic RT/PCR: comparison with ribonuclease protection assay and in situ hybridization.

    PubMed

    Ceol, M; Forino, M; Gambaro, G; Sauer, U; Schleicher, E D; D'Angelo, A; Anglani, F

    2001-01-01

    Gene expression can be examined with different techniques, including the ribonuclease protection assay (RPA), in situ hybridisation (ISH), and quantitative reverse transcription-polymerase chain reaction (RT/PCR). These methods differ considerably in their sensitivity and precision in detecting and quantifying low-abundance mRNA. Although there is evidence that RT/PCR can be performed in a quantitative manner, the quantitative capacity of this method is generally underestimated. To demonstrate that the comparative kinetic RT/PCR strategy, which uses a housekeeping gene as internal standard, is a quantitative method for detecting significant differences in mRNA levels between different samples, the inhibitory effect of heparin on phorbol 12-myristate 13-acetate (PMA)-induced TGF-beta1 mRNA expression was evaluated by RT/PCR and by RPA, the standard method of mRNA quantification, and the results were compared. The reproducibility of RT/PCR amplification was calculated by comparing the quantities of G3PDH and TGF-beta1 PCR products, generated during the exponential phases, estimated from two different RT/PCR runs (G3PDH, r = 0.968, P = 0.0000; TGF-beta1, r = 0.966, P = 0.0000). The quantitative capacity of comparative kinetic RT/PCR was demonstrated by comparing the results obtained from RPA and RT/PCR using linear regression analysis. Starting from the same RNA extraction, but using only 1% of the RNA for RT/PCR compared to RPA, a significant correlation was observed (r = 0.984, P = 0.0004). Moreover, morphometric analysis of the ISH signal was applied for the semi-quantitative evaluation of the expression and localisation of TGF-beta1 mRNA in the entire cell population. Our results demonstrate the close similarity of the RT/PCR and RPA methods in giving quantitative information on mRNA expression and indicate the possibility of adopting comparative kinetic RT/PCR as a reliable quantitative method of mRNA analysis. Copyright 2001 Wiley-Liss, Inc.

  16. GoPros™ as an underwater photogrammetry tool for citizen science

    PubMed Central

    Raoult, Vincent; David, Peter A.; Dupont, Sally F.; Mathewson, Ciaran P.; O’Neill, Samuel J.; Powell, Nicholas N.; Williamson, Jane E.

    2016-01-01

    Citizen science can increase the scope of research in the marine environment; however, it suffers from necessitating specialized training and simplified methodologies that reduce research output. This paper presents a simplified, novel survey methodology for citizen scientists, which combines GoPro imagery and structure from motion to construct an ortho-corrected 3D model of habitats for analysis. Results using a coral reef habitat were compared to surveys conducted with traditional snorkelling methods for benthic cover, holothurian counts, and coral health. Results were comparable between the two methods, and structure from motion allows the results to be analysed off-site for any chosen visual analysis. The GoPro method outlined in this study is thus an effective tool for citizen science in the marine environment, especially for comparing changes in coral cover or volume over time. PMID:27168973

  17. Evaluation of serological and molecular tests used to identify Toxoplasma gondii infection in pregnant women attended in a public health service in São Paulo state, Brazil.

    PubMed

    Murata, Fernando Henrique Antunes; Ferreira, Marina Neves; Pereira-Chioccola, Vera Lucia; Spegiorin, Lígia Cosentino Junqueira Franco; Meira-Strejevitch, Cristina da Silva; Gava, Ricardo; Silveira-Carvalho, Aparecida Perpétuo; de Mattos, Luiz Carlos; Brandão de Mattos, Cinara Cássia

    2017-09-01

    Toxoplasmosis during pregnancy can have severe consequences. The use of sensitive and specific serological and molecular methods is extremely important for the correct diagnosis of the disease. We compared the ELISA and ELFA serological methods, conventional PCR (cPCR), nested PCR and quantitative PCR (qPCR) in the diagnosis of Toxoplasma gondii infection in pregnant women without clinical suspicion of toxoplasmosis (G1=94) and with clinical suspicion of toxoplasmosis (G2=53). The results were compared using the Kappa index, and the sensitivity, specificity, positive predictive value and negative predictive value were calculated. The serological methods showed concordance between ELISA and ELFA, even though ELFA identified more positive cases than ELISA. The molecular methods were discrepant, with cPCR using B22/23 primers having greater sensitivity and lower specificity than the other molecular methods. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Estimating parameter of Rayleigh distribution by using Maximum Likelihood method and Bayes method

    NASA Astrophysics Data System (ADS)

    Ardianti, Fitri; Sutarman

    2018-01-01

    In this paper, we use maximum likelihood estimation and the Bayes method under several risk functions to estimate the parameter of the Rayleigh distribution and determine the best method. The prior used in the Bayes method is Jeffreys' non-informative prior. Maximum likelihood estimation and the Bayes method under the precautionary loss function, the entropy loss function, and the L1 loss function are compared. We compare these methods by bias and MSE values computed with the R program, and display the results in tables to facilitate comparison.
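
    As a concrete illustration of this comparison framework, the sketch below simulates bias and MSE for the closed-form maximum likelihood estimator of the Rayleigh scale parameter, sigma_hat = sqrt(sum(x_i^2)/(2n)); the paper's Bayes estimators under the specific loss functions are not reproduced, and the sample size and replicate count are arbitrary.

```python
# Bias/MSE simulation for the Rayleigh scale-parameter MLE.
import numpy as np

rng = np.random.default_rng(1)
sigma_true, n, reps = 2.0, 50, 5000

estimates = np.empty(reps)
for r in range(reps):
    x = rng.rayleigh(scale=sigma_true, size=n)
    estimates[r] = np.sqrt(np.sum(x**2) / (2 * n))  # maximum likelihood estimate

print("bias:", estimates.mean() - sigma_true)
print("MSE: ", np.mean((estimates - sigma_true) ** 2))
```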

  1. [Comparison of different methods in dealing with HIV viral load data with diversified missing value mechanism on HIV positive MSM].

    PubMed

    Jiang, Z; Dou, Z; Song, W L; Xu, J; Wu, Z Y

    2017-11-10

    Objective: To compare the results of different methods for handling HIV viral load (VL) data under different missing value mechanisms. Methods: We used SPSS 17.0 to simulate complete and missing data with different missing value mechanisms from HIV viral load data collected from MSM in 16 cities in China in 2013. Maximum likelihood estimation using the expectation-maximization (EM) algorithm, the regression method, mean imputation, the deletion method, and Markov chain Monte Carlo (MCMC) were used to fill in missing data, and their results were compared in terms of distribution characteristics, accuracy, and precision. Results: HIV VL data could not be transformed into a normal distribution. All methods performed well for data missing completely at random (MCAR). For the other types of missing data, the regression and MCMC methods preserved the main characteristics of the original data. The means of the imputed databases from all methods were close to the original one. EM, the regression method, mean imputation, and deletion under-estimated VL, while MCMC overestimated it. Conclusion: MCMC can be used as the main imputation method for missing HIV viral load data. The imputed data can be used as a reference for estimating mean HIV VL among the investigated population.
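
    A hedged sketch of this kind of comparison on toy data: mean imputation versus a regression-based iterative imputer from scikit-learn, standing in for (not reproducing) the paper's SPSS EM/regressive/MCMC workflow.

```python
# Mean imputation vs. regression-based iterative imputation on toy VL-like data.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, IterativeImputer

rng = np.random.default_rng(2)
X = rng.lognormal(mean=8.0, sigma=1.5, size=(500, 3))  # skewed, VL-like toy values
mask = rng.random(X.shape) < 0.2                       # 20% missing completely at random
X_missing = np.where(mask, np.nan, X)

for imputer in (SimpleImputer(strategy="mean"), IterativeImputer(random_state=0)):
    X_imputed = imputer.fit_transform(X_missing)
    print(type(imputer).__name__, "grand mean:", round(float(X_imputed.mean()), 1))
```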

  2. A comparative evaluation of six principal IgY antibody extraction methods.

    PubMed

    Ren, Hao; Yang, Wenjing; Thirumalai, Diraviyam; Zhang, Xiaoying; Schade, Rüdiger

    2016-03-01

    Egg yolk has been considered a promising source of antibodies. Our study was designed to compare six principal IgY extraction methods (water dilution, polyethylene glycol [PEG] precipitation, caprylic acid extraction, chloroform extraction, phenol extraction, and carrageenan extraction), and to assess their relative extraction efficiencies and the purity of the resulting antibodies. The results showed that the organic solvents (chloroform or phenol) minimised the lipid ratio in the egg yolk. The water dilution, PEG precipitation and caprylic acid extraction methods resulted in high yields, and antibodies purified with PEG and carrageenan exhibited high purity. Our results indicate that phenol extraction would be more suitable for preparing high concentrations of IgY for non-therapeutic usage, while the water dilution and carrageenan extraction methods would be more appropriate for use in the preparation of IgY for oral administration. 2016 FRAME.

  3. Gain determination of optical active doped planar waveguides

    NASA Astrophysics Data System (ADS)

    Šmejcký, J.; Jeřábek, V.; Nekvindová, P.

    2017-12-01

    This paper summarizes the results of gain transmission characteristic measurements carried out on new Ag+ - Na+ ion-exchanged Er3+ and Yb3+ doped active planar optical waveguides realized on silica-based glass substrates. The results were used to optimize the precursor concentration in the glass substrates. The gain measurements were performed by a time-domain method using a pulse generator, as well as by a broadband wavelength-domain method using a supercontinuum optical source. Both methods were compared and the results graphically processed. It was confirmed that the pulse method is useful because it provides a very accurate measurement of the gain versus pumping power characteristic at a single wavelength, while the spectral measurement exactly determined the wavelength bandwidth of maximum gain of the active waveguide. The spectral characteristics of the pumped and unpumped waveguides were compared. The gain parameters of the reported silica-based glasses are comparable with those of the phosphate-based glasses typically used in active optical devices.

  4. Statistical methods for detecting and comparing periodic data and their application to the nycthemeral rhythm of bodily harm: A population based study

    PubMed Central

    2010-01-01

    Background: Animals, including humans, exhibit a variety of biological rhythms. This article describes a method for the detection and simultaneous comparison of multiple nycthemeral rhythms. Methods: A statistical method for detecting periodic patterns in time-related data via harmonic regression is described. The method is particularly suited to detecting nycthemeral rhythms in medical data. Additionally, a method for simultaneously comparing two or more periodic patterns is described, which derives from the analysis of variance (ANOVA). This method statistically confirms or rejects equality of periodic patterns. Mathematical descriptions of the detection and comparison methods are presented. Results: Nycthemeral rhythms of incidents of bodily harm in Middle Franconia are analyzed in order to demonstrate both methods. Every day of the week showed a significant nycthemeral rhythm of bodily harm. These seven weekday patterns were compared to each other, revealing only two distinct nycthemeral rhythms: one for Friday and Saturday and one for the other weekdays. PMID:21059197
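
    A minimal version of the detection step, assuming simulated hourly counts rather than the Middle Franconia data: harmonic regression with a 24 h cosine and sine term, y ~ b0 + b1*cos(2*pi*t/24) + b2*sin(2*pi*t/24), fitted by least squares. The ANOVA-based comparison step is not reproduced.

```python
# Harmonic regression for a 24 h rhythm on simulated hourly counts.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(24 * 14)  # hourly time stamps over two weeks
y = 10 + 4 * np.cos(2 * np.pi * (t - 21) / 24) + rng.normal(0, 1, t.size)

X = np.column_stack([np.ones(t.size),
                     np.cos(2 * np.pi * t / 24),
                     np.sin(2 * np.pi * t / 24)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

amplitude = np.hypot(beta[1], beta[2])
peak_hour = (np.arctan2(beta[2], beta[1]) * 24 / (2 * np.pi)) % 24
print(round(float(amplitude), 2), round(float(peak_hour), 1))  # ~4.0, ~21.0
```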

  5. Comparability and repeatability of three commonly used methods for measuring endurance capacity.

    PubMed

    Baxter-Gilbert, James; Mühlenhaupt, Max; Whiting, Martin J

    2017-12-01

    Measures of endurance (time to exhaustion) have been used to address a wide range of questions in ecomorphological and physiological research, as well as being used as a proxy for survival and fitness. Swimming, stationary (circular) track running, and treadmill running are all commonly used methods for measuring endurance. Despite the use of these methods across a broad range of taxa, how comparable these methods are to one another, and whether they are biologically relevant, is rarely examined. We used Australian water dragons (Intellagama lesueurii), a species that is morphologically adept at climbing, swimming, and running, to compare these three methods of endurance and examined if there is repeatability within and between trial methods. We found that time to exhaustion was not highly repeatable within a method, suggesting that single measures or a mean time to exhaustion across trials are not appropriate. Furthermore, we compared mean maximal endurance times among the three methods, and found that the two running methods (i.e., stationary track and treadmill) were similar, but swimming was distinctly different, resulting in lower mean maximal endurance times. Finally, an individual's endurance rank was not repeatable across methods, suggesting that the three endurance trial methods are not providing similar information about an individual's performance capacity. Overall, these results highlight the need to carefully match a measure of performance capacity with the study species and the research questions being asked so that the methods being used are behaviorally, ecologically, and physiologically relevant. © 2018 Wiley Periodicals, Inc.

  6. Roka Listeria detection method using transcription mediated amplification to detect Listeria species in select foods and surfaces. Performance Tested Method(SM) 011201.

    PubMed

    Hua, Yang; Kaplan, Shannon; Reshatoff, Michael; Hu, Ernie; Zukowski, Alexis; Schweis, Franz; Gin, Cristal; Maroni, Brett; Becker, Michael; Wisniewski, Michele

    2012-01-01

    The Roka Listeria Detection Assay was compared to the reference culture methods for nine select foods and three select surfaces. The Roka method used Half-Fraser Broth for enrichment at 35 +/- 2 degrees C for 24-28 h. Comparison of Roka's method to reference methods requires an unpaired approach. Each method had a total of 545 samples inoculated with a Listeria strain. Each food and surface was inoculated with a different strain of Listeria at two different levels per method. For the dairy products (Brie cheese, whole milk, and ice cream), our method was compared to AOAC Official Method(SM) 993.12. For the ready-to-eat meats (deli chicken, cured ham, chicken salad, and hot dogs) and environmental surfaces (sealed concrete, stainless steel, and plastic), these samples were compared to the U.S. Department of Agriculture/Food Safety and Inspection Service-Microbiology Laboratory Guidebook (USDA/FSIS-MLG) method MLG 8.07. Cold-smoked salmon and romaine lettuce were compared to the U.S. Food and Drug Administration/Bacteriological Analytical Manual, Chapter 10 (FDA/BAM) method. Roka's method had 358 positives out of 545 total inoculated samples compared to 332 positive for the reference methods. Overall the probability of detection analysis of the results showed better or equivalent performance compared to the reference methods.

  7. Comparison of microcrystalline characterization results from oil palm midrib alpha cellulose using different delignization method

    NASA Astrophysics Data System (ADS)

    Yuliasmi, S.; Pardede, T. R.; Nerdy; Syahputra, H.

    2017-03-01

    Oil palm midrib is one of the wastes generated by oil palm plantations, containing 34.89% cellulose. This cellulose has the potential to produce microcrystalline cellulose, which can be used as an excipient in tablet formulations for direct compression. Microcrystalline cellulose is the result of a controlled hydrolysis of alpha cellulose, so the process used to extract alpha cellulose from oil palm midrib greatly affects the quality of the resulting microcrystalline cellulose. The purpose of this study was to compare the microcrystalline cellulose produced from alpha cellulose extracted from oil palm midrib by two different methods. The first delignization method uses sodium hydroxide. The second method uses a mixture of nitric acid and sodium nitrite, followed by sodium hydroxide and sodium sulfite. The microcrystalline cellulose obtained by each method was characterized separately, including an organoleptic test, color reagent tests, a dissolution test, a pH test, and determination of functional groups by FTIR. The results were compared with microcrystalline cellulose available on the market. The characterization results showed that the microcrystalline cellulose obtained by the first method has characteristics most similar to the commercially available microcrystalline cellulose.

  8. Application of Nemerow Index Method and Integrated Water Quality Index Method in Water Quality Assessment of Zhangze Reservoir

    NASA Astrophysics Data System (ADS)

    Zhang, Qian; Feng, Minquan; Hao, Xiaoyan

    2018-03-01

    [Objective] Based on water quality data from the Zhangze Reservoir over the last five years, the water quality was assessed by the integrated water quality identification index method and the Nemerow pollution index method. The results of the different evaluation methods were analyzed and compared, and the characteristics of each method were identified. [Methods] The suitability of the water quality assessment methods was compared and analyzed based on these results. [Results] The water quality tended to decrease over time, with 2016 being the year with the worst water quality. The sections with the worst water quality were the southern and northern sections. [Conclusion] The results produced by the traditional Nemerow index method fluctuated greatly across the water quality monitoring sections and therefore could not effectively reveal the trend of water quality at each section. The combination of qualitative and quantitative measures in the comprehensive pollution index identification method meant it could evaluate the degree of water pollution as well as determine whether the river water was black and odorous; however, its evaluation results indicated relatively low water pollution. The results from the improved Nemerow index evaluation were better, as the single indicators and evaluation results are in strong agreement; the method is therefore able to objectively reflect the water quality of each monitoring section and is more suitable for the water quality evaluation of the reservoir.
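
    For reference, the Nemerow pollution index named above is commonly defined as P = sqrt((P_mean^2 + P_max^2) / 2) over the single-factor indices P_i = C_i/S_i (concentration over standard). The sketch below uses invented concentrations and standards, not the Zhangze Reservoir data.

```python
# Nemerow pollution index from single-factor indices.
import numpy as np

def nemerow_index(concentrations, standards):
    p = np.asarray(concentrations) / np.asarray(standards)  # single-factor indices
    return np.sqrt((p.mean() ** 2 + p.max() ** 2) / 2)

concentrations = [0.8, 2.1, 0.05]  # hypothetical pollutant concentrations
standards = [1.0, 1.0, 0.05]       # corresponding class standards (hypothetical)
print(round(float(nemerow_index(concentrations, standards)), 2))
```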

  9. A Method for Improving Temporal and Spatial Resolution of Carbon Dioxide Emissions

    NASA Astrophysics Data System (ADS)

    Gregg, J. S.; Andres, R. J.

    2003-12-01

    Using United States data, a method is developed to estimate the monthly consumption of solid, liquid and gaseous fossil fuels for each state in the union. This technique employs monthly sales data to estimate the relative monthly proportions of the total annual national fossil fuel use. These proportions are then used to estimate the total monthly carbon dioxide emissions for each state. To assess the success of this technique, the results from this method are compared with the data obtained from other independent methods. To determine the temporal success of the method, the resulting national time series is compared to the model produced by Carbon Dioxide Information Analysis Center (CDIAC) and the current model being developed by T. J. Blasing and C. Broniak at the Oak Ridge National Laboratory (ORNL). The University of North Dakota (UND) method fits well temporally with the results of the CDIAC and current ORNL research. To determine the success of the spatial component, the individual state results are compared to the annual state totals calculated by ORNL. Using ordinary least squares regression, the annual state totals of this method are plotted against the ORNL data. This allows a direct comparison of estimates in the form of ordered pairs against a one-to-one ideal correspondence line, and allows for easy detection of outliers in the results obtained by this estimation method. Analyzing the residuals of the linear regression model for each type of fuel permits an improved understanding of the strengths and shortcomings of the spatial component of this estimation technique. Spatially, the model is successful when compared to the current ORNL research. The primary advantages of this method are its ease of implementation and universal applicability. In general, this technique compares favorably to more labor-intensive methods that rely on more detailed data. The more detailed data is generally not available for most countries in the world. The methodology used here will be applied to other nations in the world to better understand their sub-annual cycle and sub-national spatial distribution of carbon dioxide emissions from fossil fuel consumption. Better understanding of the cycle will lead to better models used for predicting and responding to global environmental changes currently observed and anticipated.
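
    The core of the downscaling step is simple proportional allocation; the sketch below uses invented sales figures to show how monthly shares preserve the annual total.

```python
# Proportional downscaling of an annual emissions total using monthly sales.
import numpy as np

annual_emissions = 1500.0  # annual CO2 total, arbitrary units
monthly_sales = np.array([110, 95, 100, 90, 85, 80,
                          85, 90, 95, 100, 110, 120], dtype=float)

monthly_emissions = annual_emissions * monthly_sales / monthly_sales.sum()
print(monthly_emissions.round(1))
print(monthly_emissions.sum())  # shares preserve the annual total exactly
```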

  10. A Reliable, Feasible Method to Observe Neighborhoods at High Spatial Resolution

    PubMed Central

    Kepper, Maura M.; Sothern, Melinda S.; Theall, Katherine P.; Griffiths, Lauren A.; Scribner, Richard; Tseng, Tung-Sung; Schaettle, Paul; Cwik, Jessica M.; Felker-Kantor, Erica; Broyles, Stephanie T.

    2016-01-01

    Introduction Systematic social observation (SSO) methods traditionally measure neighborhoods at street level and have been performed reliably using virtual applications to increase feasibility. Research indicates that collection at even higher spatial resolution may better elucidate the health impact of neighborhood factors, but whether virtual applications can reliably capture social determinants of health at the smallest geographic resolution (parcel level) remains uncertain. This paper presents a novel, parcel-level SSO methodology and assesses whether this new method can be collected reliably using Google Street View and is feasible. Methods Multiple raters (N=5) observed 42 neighborhoods. In 2016, inter-rater reliability (observed agreement and kappa coefficient) was compared for four SSO methods: (1) street-level in person; (2) street-level virtual; (3) parcel-level in person; and (4) parcel-level virtual. Intra-rater reliability (observed agreement and kappa coefficient) was calculated to determine whether parcel-level methods produce results comparable to traditional street-level observation. Results Substantial levels of inter-rater agreement were documented across all four methods; all methods had >70% of items with at least substantial agreement. Only physical decay showed higher levels of agreement (83% of items with >75% agreement) for direct versus virtual rating source. Intra-rater agreement comparing street- versus parcel-level methods resulted in observed agreement >75% for all but one item (90%). Conclusions Results support the use of Google Street View as a reliable, feasible tool for performing SSO at the smallest geographic resolution. Validation of a new parcel-level method collected virtually may improve the assessment of social determinants contributing to disparities in health behaviors and outcomes. PMID:27989289

  11. External quality assurance programs as a tool for verifying standardization of measurement procedures: Pilot collaboration in Europe.

    PubMed

    Perich, C; Ricós, C; Alvarez, V; Biosca, C; Boned, B; Cava, F; Doménech, M V; Fernández-Calle, P; Fernández-Fernández, P; García-Lario, J V; Minchinela, J; Simón, M; Jansen, R

    2014-05-15

    Current external quality assurance schemes have been classified into six categories, according to their ability to verify the degree of standardization of the participating measurement procedures. SKML (Netherlands) is a Category 1 EQA scheme (commutable EQA materials with values assigned by reference methods), whereas SEQC (Spain) is a Category 5 scheme (replicate analyses of non-commutable materials with no values assigned by reference methods). The results obtained by a group of Spanish laboratories participating in a pilot study organized by SKML are examined, with the aim of pointing out the improvements over our current scheme that a Category 1 program could provide. Imprecision and bias are calculated for each analyte and laboratory, and compared with quality specifications derived from biological variation. Of the 26 analytes studied, 9 had results comparable with those from reference methods, and 10 analytes did not have comparable results. The remaining 7 analytes measured did not have available reference method values, and in these cases, comparison with the peer group showed comparable results. The reasons for disagreement in the second group can be summarized as: use of non-standard methods (IFCC without exogenous pyridoxal phosphate for AST and ALT, Jaffé kinetic at low-normal creatinine concentrations and with eGFR); non-commutability of the reference material used to assign values to the routine calibrator (calcium, magnesium and sodium); use of reference materials without established commutability instead of reference methods for AST and GGT, and lack of a systematic effort by manufacturers to harmonize results. Results obtained in this work demonstrate the important role of external quality assurance programs using commutable materials with values assigned by reference methods to correctly monitor the standardization of laboratory tests with consequent minimization of risk to patients. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. A comparative study of novel spectrophotometric methods based on isosbestic points; application on a pharmaceutical ternary mixture

    NASA Astrophysics Data System (ADS)

    Lotfy, Hayam M.; Saleh, Sarah S.; Hassan, Nagiba Y.; Salem, Hesham

    This work presents applications of the isosbestic points present in different absorption spectra. Three novel spectrophotometric methods were developed: the first is the absorption subtraction method (AS), utilizing the isosbestic point in zero-order absorption spectra; the second is the amplitude modulation method (AM), utilizing the isosbestic point in ratio spectra; and the third is the amplitude summation method (A-Sum), utilizing the isosbestic point in derivative spectra. The three methods were applied to the analysis of the ternary mixture of chloramphenicol (CHL), dexamethasone sodium phosphate (DXM) and tetryzoline hydrochloride (TZH) in eye drops in the presence of benzalkonium chloride as a preservative. The components at the isosbestic point were determined using the corresponding unified regression equation at this point, with no need for a complementary method. The obtained results were statistically compared to each other and to those of the developed PLS model. The specificity of the developed methods was investigated by analyzing laboratory-prepared mixtures and the combined dosage form. The methods were validated as per ICH guidelines, where accuracy, repeatability, inter-day precision and robustness were found to be within the acceptable limits. The results obtained from the proposed methods were statistically compared with official ones, where no significant difference was observed.

  13. A comparison of the Sensititre® MYCOTB panel and the agar proportion method for the susceptibility testing of Mycobacterium tuberculosis.

    PubMed

    Abuali, M M; Katariwala, R; LaBombardi, V J

    2012-05-01

    The agar proportion method (APM) for determining Mycobacterium tuberculosis susceptibilities is a qualitative method that requires 21 days to produce results. The Sensititre method allows for a quantitative assessment. Our objective was to compare the accuracy, time to results, and ease of use of the Sensititre method to the APM. 7H10 plates in the APM and 96-well microtiter dry MYCOTB panels containing 12 antibiotics at full dilution ranges in the Sensititre method were inoculated with M. tuberculosis and read for colony growth. Thirty-seven clinical isolates were tested using both methods, and 26 challenge strains with blinded susceptibilities were tested using the Sensititre method only. The Sensititre method displayed 99.3% concordance with the APM. The APM provided reliable results on day 21, whereas the Sensititre method displayed consistent results by day 10. The Sensititre method provides a more rapid, quantitative, and efficient method of testing both first- and second-line drugs when compared to the gold standard. It will give clinicians a sense of the degree of susceptibility, thus guiding the therapeutic decision-making process. Furthermore, the microwell plate format without the need for instrumentation will allow its use in resource-poor settings.

  14. Comparison study of two procedures for the determination of emamectin benzoate in medicated fish feed.

    PubMed

    Farer, Leslie J; Hayes, John M

    2005-01-01

    A new method has been developed for the determination of emamectin benzoate in fish feed. The method uses a wet extraction, cleanup by solid-phase extraction, and quantitation and separation by liquid chromatography (LC). In this paper, we compare the performance of this method with that of a previously reported LC assay for the determination of emamectin benzoate in fish feed. Although similar to the previous method, the new procedure uses a different sample pretreatment, wet extraction, and quantitation method. The performance of the new method was compared with that of the previously reported method by analyses of 22 medicated feed samples from various commercial sources. A comparison of the results presented here reveals slightly lower assay values obtained with the new method. Although a paired sample t-test indicates the difference in results is significant, this difference is within the method precision of either procedure.
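
    The paired-sample t-test mentioned here can be reproduced in a few lines; the assay values below are invented stand-ins for the 22 commercial feed samples.

```python
# Paired t-test: the same samples measured by the previous and the new method.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
previous_method = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 10.0, 10.4])
new_method = previous_method - np.abs(rng.normal(0.15, 0.05, previous_method.size))

t_stat, p_value = stats.ttest_rel(previous_method, new_method)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # small, systematic offset is significant
```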

  15. Testing and Validation of Computational Methods for Mass Spectrometry.

    PubMed

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-04

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets (http://compms.org/RefData) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.

  16. A comparative study of two methods for the orientation of the occlusal plane and the determination of the vertical dimension of occlusion in edentulous patients.

    PubMed

    Koller, M M; Merlini, L; Spandre, G; Palla, S

    1992-07-01

    The aim of this study was to compare two methods used to orient the occlusal plane (OP) and to determine the vertical dimension of occlusion (VDO). In method A the VDO was established by means of the rest position, the minimal speaking distance, and the patient's profile. Method B used a newly developed registration pin assembly; the VDO was registered using a silicone occlusion rim and the swallowing technique. The results were compared to the values of the new dentures. Three standardized lateral radiographs were taken at the VDO obtained with methods A and B, and at that of the final dentures. On each radiograph the orientation of the OP to the Camper plane and the VDO were measured by two investigators independently. The results indicated no statistically significant differences between the mean VDO with methods A and B compared with the new dentures (P > 0.05). With neither method was it possible to orient the OP parallel to the Camper plane, and none of the occlusal planes of the new dentures were parallel either; their OP diverged on average by 7 degrees dorso-caudally. The time spent with method B to orient the OP and determine the VDO was significantly lower than with method A (17-50 min).

  17. Clinical outcomes of arthroscopic single and double row repair in full thickness rotator cuff tears

    PubMed Central

    Ji, Jong-Hun; Shafi, Mohamed; Kim, Weon-Yoo; Kim, Young-Yul

    2010-01-01

    Background: There has been recent interest in the double row repair method for arthroscopic rotator cuff repair following favourable biomechanical results reported by some studies. The purpose of this study was to compare the clinical results of arthroscopic single row and double row repair methods in full-thickness rotator cuff tears. Materials and Methods: 22 patients who underwent arthroscopic single row repair (group I) and 25 patients who underwent double row repair (group II) from March 2003 to March 2005 were retrospectively evaluated and compared for clinical outcomes. The mean age was 58 years and 56 years for groups I and II, respectively. The average follow-up in the two groups was 24 months. The evaluation was done using the University of California Los Angeles (UCLA) rating scale and the shoulder index of the American Shoulder and Elbow Surgeons (ASES). Results: In group I, the mean ASES score increased from 30.48 to 87.40, and in group II it increased from 32.00 to 91.45. The mean UCLA score increased from 12.23 preoperatively to 30.82 in group I and from 12.20 to 32.40 in group II. No statistically significant clinical difference was found between the two methods, but based on the UCLA subscores, the double row repair method yielded better strength and gave more satisfaction to the patients than the single row repair method. Conclusions: Comparing the two methods, the double row repair group showed better clinical results in strength recovery and patient satisfaction, but no statistically significant clinical difference was found between the two methods. PMID:20697485

  18. Comparison of image segmentation of lungs using methods: connected threshold, neighborhood connected, and threshold level set segmentation

    NASA Astrophysics Data System (ADS)

    Amanda, A. R.; Widita, R.

    2016-03-01

    The aim of this research is to compare several image segmentation methods for the lungs based on performance evaluation parameters (mean square error (MSE) and peak signal-to-noise ratio (PSNR)). The methods compared were connected threshold, neighborhood connected, and threshold level set segmentation applied to lung images. These three methods require one important parameter, i.e. the threshold. The threshold interval was obtained from the histogram of the original image. The software used to segment the images was InsightToolkit-4.7.0 (ITK). Five lung images were analyzed, and the results were compared using the performance evaluation parameters computed in MATLAB. A segmentation method is said to have good quality if it has the smallest MSE value and the highest PSNR. The results show that four sample images favor the connected threshold method, while one favors threshold level set segmentation. Therefore, it can be concluded that the connected threshold method is better than the other two methods for these cases.
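
    The two evaluation parameters are straightforward to compute; the sketch below applies them to a toy reference mask and a perturbed segmentation, assuming 8-bit images (MAX = 255).

```python
# MSE and PSNR between a reference mask and a segmentation result.
import numpy as np

def mse(a, b):
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b, max_val=255.0):
    m = mse(a, b)
    return np.inf if m == 0 else 10 * np.log10(max_val ** 2 / m)

reference = np.zeros((64, 64), dtype=np.uint8)
reference[16:48, 16:48] = 255                 # toy reference lung mask
segmented = reference.copy()
segmented[16:20, 16:48] = 0                   # a small segmentation error
print(mse(reference, segmented), round(float(psnr(reference, segmented)), 1))
```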

  19. Experimental study and neural network modeling of sugarcane bagasse pretreatment with H2SO4 and O3 for cellulosic material conversion to sugar.

    PubMed

    Gitifar, Vahid; Eslamloueyan, Reza; Sarshar, Mohammad

    2013-11-01

    In this study, pretreatment of sugarcane bagasse and subsequent enzymatic hydrolysis are investigated using two categories of pretreatment methods: dilute acid (DA) pretreatment and a combined DA-ozonolysis (DAO) method. Both methods are carried out at different solid ratios, sulfuric acid concentrations, autoclave residence times, bagasse moisture contents, and ozonolysis times. The results show that the DAO pretreatment can significantly increase the production of glucose compared to the DA method. Applying the k-fold cross-validation method, two optimal artificial neural networks (ANNs) are trained to estimate glucose concentrations for the DA and DAO pretreatment methods. Comparing the modeling results with experimental data indicates that the proposed ANNs have good estimation abilities. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Comparing Evolutionary Programs and Evolutionary Pattern Search Algorithms: A Drug Docking Application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, W.E.

    1999-02-10

    Evolutionary programs (EPs) and evolutionary pattern search algorithms (EPSAs) are two general classes of evolutionary methods for optimizing on continuous domains. The relative performance of these methods has been evaluated on standard global optimization test functions, and those results suggest that EPSAs converge to near-optimal solutions more robustly than EPs. In this paper we evaluate the relative performance of EPSAs and EPs on a real-world application: flexible ligand binding in the Autodock docking software. We compare the performance of these methods on a suite of docking test problems. Our results show that EPSAs and EPs have comparable performance on these problems, and they suggest that EPSAs may be more robust on larger, more complex problems.

  1. Sample size and power estimation for studies with health related quality of life outcomes: a comparison of four methods using the SF-36.

    PubMed

    Walters, Stephen J

    2004-05-25

    We describe and compare four different methods for estimating sample size and power when the primary outcome of the study is a Health Related Quality of Life (HRQoL) measure. These methods are: 1. assuming a Normal distribution and comparing two means; 2. using a non-parametric method; 3. Whitehead's method based on the proportional odds model; 4. the bootstrap. We illustrate the various methods using data from the SF-36. For simplicity, this paper deals with studies designed to compare the effectiveness (or superiority) of a new treatment compared to a standard treatment at a single point in time. The results show that if the HRQoL outcome has a limited number of discrete values (< 7) and/or the expected proportion of cases at the boundaries (scoring 0 or 100) is high, then we would recommend using Whitehead's method (Method 3). Alternatively, if the HRQoL outcome has a large number of distinct values and the proportion at the boundaries is low, then we would recommend using Method 1. If a pilot or historical dataset is readily available (to estimate the shape of the distribution), then bootstrap simulation (Method 4) based on these data will provide a more accurate and reliable sample size estimate than conventional methods (Methods 1, 2, or 3). In the absence of a reliable pilot dataset, bootstrapping is not appropriate and conventional methods of sample size estimation or simulation will need to be used. Fortunately, with the increasing use of HRQoL outcomes in research, historical datasets are becoming more readily available. Strictly speaking, our results and conclusions only apply to the SF-36 outcome measure. Further empirical work is required to see whether these results hold true for other HRQoL outcomes. However, the SF-36 has many features in common with other HRQoL outcomes: multi-dimensional, ordinal or discrete response categories with upper and lower bounds, and skewed distributions; we therefore believe these results and conclusions using the SF-36 will be appropriate for other HRQoL measures.
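
    For orientation, Method 1 (comparing two means under a Normal assumption) reduces to the familiar two-sample formula n = 2((z_(1-alpha/2) + z_(1-beta)) * sigma / delta)^2 per group. A minimal sketch; the effect size and standard deviation below are illustrative, not SF-36 values from the paper:

        import math
        from scipy.stats import norm

        def n_per_group(delta, sd, alpha=0.05, power=0.80):
            """Per-group sample size, two-sided two-sample comparison of means."""
            z_a = norm.ppf(1 - alpha / 2)
            z_b = norm.ppf(power)
            return math.ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

        # e.g., to detect a 10-point difference with SD 25:
        print(n_per_group(delta=10, sd=25))   # -> 99 per group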

  2. Center index method-an alternative for wear measurements with radiostereometry (RSA).

    PubMed

    Dahl, Jon; Figved, Wender; Snorrason, Finnur; Nordsletten, Lars; Röhrl, Stephan M

    2013-03-01

    Radiostereometry (RSA) is considered to be the most precise and accurate method for wear measurements in total hip replacement. Post-operative stereoradiographs have so far been necessary for wear measurement; hence, the use of RSA has been limited to studies planned for RSA measurements. We compared a new RSA method for wear measurements that does not require previous radiographs with conventional RSA. Instead of comparing present stereoradiographs with post-operative ones, we developed a method for calculating the post-operative position of the center of the femoral head on the present examination and using this as the index measurement. We compared this alternative method to conventional RSA in 27 hips in an ongoing RSA study. We found a high degree of agreement between the methods for both mean proximal wear (1.19 mm vs. 1.14 mm) and mean 3D wear (1.52 mm vs. 1.44 mm) after 10 years. Intraclass correlation coefficients (ICC) were 0.958 and 0.955, respectively (p<0.001 for both ICCs). The results were also within the limits of agreement when plotted subject-by-subject in a Bland-Altman plot. Our alternative method for wear measurements with RSA offers results comparable to conventional RSA measurements. It allows precise wear measurements without previous radiological examinations. Copyright © 2012 Orthopaedic Research Society.
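
    The agreement statistics used above are easy to reproduce: the Bland-Altman limits of agreement are the mean paired difference plus or minus 1.96 standard deviations of the differences. A minimal sketch with synthetic numbers, not the study's 27 hips:

        import numpy as np

        def bland_altman_limits(a, b):
            """Mean bias and 95% limits of agreement for paired measurements."""
            diffs = np.asarray(a, float) - np.asarray(b, float)
            bias = diffs.mean()
            spread = 1.96 * diffs.std(ddof=1)
            return bias, bias - spread, bias + spread

        conventional = np.array([1.2, 1.5, 0.9, 1.7, 1.1])   # wear, mm
        center_index = np.array([1.1, 1.6, 0.8, 1.8, 1.0])
        print(bland_altman_limits(conventional, center_index))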

  3. A Comparison of the Kernel Equating Method with Traditional Equating Methods Using SAT® Data

    ERIC Educational Resources Information Center

    Liu, Jinghua; Low, Albert C.

    2008-01-01

    This study applied kernel equating (KE) in two scenarios: equating to a very similar population and equating to a very different population, referred to as a distant population, using SAT® data. The KE results were compared to the results obtained from analogous traditional equating methods in both scenarios. The results indicate that KE results…

  4. A sampling and classification item selection approach with content balancing.

    PubMed

    Chen, Pei-Hua

    2015-03-01

    Existing automated test assembly methods typically employ constrained combinatorial optimization. Constructing forms sequentially based on an optimization approach usually results in nonparallel forms and requires heuristic modifications. Methods based on a random search approach have the major advantage of producing parallel forms sequentially without further adjustment. This study incorporated a flexible content-balancing element into the statistical perspective item selection method of the cell-only method (Chen et al. in Educational and Psychological Measurement, 72(6), 933-953, 2012). The new method was compared with a sequential interitem distance weighted deviation model (IID WDM) (Swanson & Stocking in Applied Psychological Measurement, 17(2), 151-166, 1993), a simultaneous IID WDM, and a big-shadow-test mixed integer programming (BST MIP) method to construct multiple parallel forms based on matching a reference form item-by-item. The results showed that the cell-only method with content balancing and the sequential and simultaneous versions of IID WDM yielded results comparable to those obtained using the BST MIP method. The cell-only method with content balancing is computationally less intensive than the sequential and simultaneous versions of IID WDM.

  5. Cloud field classification based upon high spatial resolution textural features. II - Simplified vector approaches

    NASA Technical Reports Server (NTRS)

    Chen, D. W.; Sengupta, S. K.; Welch, R. M.

    1989-01-01

    This paper compares the results of cloud-field classification derived from two simplified vector approaches, the Sum and Difference Histogram (SADH) and the Gray Level Difference Vector (GLDV), with the results produced by the Gray Level Cooccurrence Matrix (GLCM) approach described by Welch et al. (1988). It is shown that the SADH method produces accuracies equivalent to those obtained using the GLCM method, while the GLDV method fails to resolve error clusters. Compared to the GLCM method, the SADH method leads to a 31 percent saving in run time and a 50 percent saving in storage requirements, while the GLDV approach leads to a 40 percent saving in run time and an 87 percent saving in storage requirements.
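
    The run-time and storage savings reported above come from replacing the two-dimensional co-occurrence matrix with two one-dimensional histograms of pixel sums and differences at a given displacement. A minimal sketch of building the histograms; the displacement and gray-level range are illustrative:

        import numpy as np

        def sadh(image, dx=1, dy=0, levels=256):
            """Sum and difference histograms for one displacement vector."""
            img = np.asarray(image, dtype=np.int32)
            a = img[: img.shape[0] - dy, : img.shape[1] - dx]
            b = img[dy:, dx:]
            h_sum = np.bincount((a + b).ravel(), minlength=2 * levels - 1)
            h_diff = np.bincount((a - b).ravel() + levels - 1,
                                 minlength=2 * levels - 1)
            # Normalized histograms feed texture features (mean, contrast, ...).
            return h_sum / h_sum.sum(), h_diff / h_diff.sum()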

  6. Comparative Analysis of 3D Bladder Tumor Spheroids Obtained by Forced Floating and Hanging Drop Methods for Drug Screening.

    PubMed

    Amaral, Robson L F; Miranda, Mariza; Marcato, Priscyla D; Swiech, Kamilla

    2017-01-01

    Introduction: Cell-based assays using three-dimensional (3D) cell cultures may reflect the antitumor activity of compounds more accurately, since these models reproduce the tumor microenvironment better. Methods: Here, we report a comparative analysis of cell behavior in the two most widely employed methods for 3D spheroid culture, forced floating (Ultra-low Attachment, ULA, plates) and hanging drop (HD) methods, using the RT4 human bladder cancer cell line as a model. The morphology parameters and growth/metabolism of the generated spheroids were first characterized using four different cell-seeding concentrations (0.5, 1.25, 2.5, and 3.75 × 10^4 cells/mL) and then subjected to drug resistance evaluation. Results: Both methods generated spheroids with a smooth surface and round shape in a spheroidization time of about 48 h, regardless of the cell-seeding concentration used. Reduced cell growth and metabolism were observed in 3D cultures compared to two-dimensional (2D) cultures. The optimal range of spheroid diameter (300-500 μm) was obtained using cultures initiated with 0.5 and 1.25 × 10^4 cells/mL for the ULA method and 2.5 and 3.75 × 10^4 cells/mL for the HD method. RT4 cells cultured under 3D conditions also exhibited a higher resistance to doxorubicin (IC50 of 1.00 and 0.83 μg/mL for the ULA and HD methods, respectively) compared to 2D cultures (IC50 ranging from 0.39 to 0.43 μg/mL). Conclusions: Comparing the results, we concluded that the forced floating method using ULA plates is more suitable and straightforward for generating RT4 spheroids for drug screening/cytotoxicity assays. The results presented here also contribute to improving the standardization of the 3D cultures required for widespread application.
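
    IC50 values such as those quoted above are typically obtained by fitting a four-parameter logistic (Hill) curve to viability-versus-concentration data. A hedged sketch with SciPy; the data points are synthetic, not the RT4 measurements:

        import numpy as np
        from scipy.optimize import curve_fit

        def four_pl(c, top, bottom, ic50, hill):
            """Four-parameter logistic dose-response curve."""
            return bottom + (top - bottom) / (1.0 + (c / ic50) ** hill)

        conc = np.array([0.01, 0.05, 0.1, 0.5, 1.0, 5.0])           # ug/mL
        viability = np.array([0.98, 0.93, 0.80, 0.55, 0.42, 0.15])  # fraction
        params, _ = curve_fit(four_pl, conc, viability, p0=[1.0, 0.0, 0.5, 1.0])
        print("estimated IC50 (ug/mL):", params[2])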

  8. Comparison of safety effect estimates obtained from empirical Bayes before-after study, propensity scores-potential outcomes framework, and regression model with cross-sectional data.

    PubMed

    Wood, Jonathan S; Donnell, Eric T; Porter, Richard J

    2015-02-01

    A variety of different study designs and analysis methods have been used to evaluate the performance of traffic safety countermeasures. The most common study designs and methods include observational before-after studies using the empirical Bayes method and cross-sectional studies using regression models. The propensity scores-potential outcomes framework has recently been proposed as an alternative traffic safety countermeasure evaluation method to address the challenges associated with selection biases that can be part of cross-sectional studies. Crash modification factors derived from the application of all three methods have not yet been compared. This paper compares the results of retrospective, observational evaluations of a traffic safety countermeasure using both before-after and cross-sectional study designs. The paper describes the strengths and limitations of each method, focusing primarily on how each addresses site selection bias, which is a common issue in observational safety studies. The Safety Edge paving technique, which seeks to mitigate crashes related to roadway departure events, is the countermeasure used in the present study to compare the alternative evaluation methods. The results indicated that all three methods yielded results that were consistent with each other and with previous research. The empirical Bayes results had the smallest standard errors. It is concluded that the propensity scores with potential outcomes framework is a viable alternative analysis method to the empirical Bayes before-after study. It should be considered whenever a before-after study is not possible or practical. Copyright © 2014 Elsevier Ltd. All rights reserved.
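
    For context, the empirical Bayes estimate referred to above combines a site's observed crash count with a safety-performance-function (SPF) prediction, weighting by the SPF's negative binomial overdispersion. A minimal sketch with illustrative numbers only:

        def eb_expected(observed, predicted, overdispersion_k):
            """Empirical Bayes expected crash frequency for one site.

            predicted: SPF estimate for the site; overdispersion_k: negative
            binomial dispersion parameter of the SPF.
            """
            w = 1.0 / (1.0 + overdispersion_k * predicted)
            return w * predicted + (1.0 - w) * observed

        # A site predicted at 4 crashes/yr but observing 7 shrinks toward the SPF:
        print(eb_expected(observed=7, predicted=4, overdispersion_k=0.3))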

  9. Image restoration by the method of convex projections: part 2 applications and numerical results.

    PubMed

    Sezan, M I; Stark, H

    1982-01-01

    The image restoration theory discussed in a previous paper by Youla and Webb [1] is applied to a simulated image, and the results are compared with those of the well-known Gerchberg-Papoulis algorithm. The results show that the method of image restoration by projection onto convex sets, by providing a convenient technique for utilizing a priori information, performs significantly better than the Gerchberg-Papoulis method.
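
    The projection-onto-convex-sets idea generalizes readily: each piece of a priori information defines a convex set, and the estimate is projected onto each set in turn. A minimal sketch assuming two illustrative constraints, a known region of the spectrum and known spatial support (not necessarily the constraints used in the paper):

        import numpy as np

        def pocs_restore(observed_spectrum, band_mask, support_mask, n_iter=100):
            """Alternate projections: enforce known spectrum samples, then support."""
            x = np.zeros(support_mask.shape)
            for _ in range(n_iter):
                X = np.fft.fft2(x)
                X[band_mask] = observed_spectrum[band_mask]  # projection 1: data
                x = np.real(np.fft.ifft2(X))
                x = x * support_mask                         # projection 2: support
            return x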

  10. A comparative analysis of chaotic particle swarm optimizations for detecting single nucleotide polymorphism barcodes.

    PubMed

    Chuang, Li-Yeh; Moi, Sin-Hua; Lin, Yu-Da; Yang, Cheng-Hong

    2016-10-01

    Evolutionary algorithms can overcome the computational limitations of statistical evaluation of large datasets for high-order single nucleotide polymorphism (SNP) barcodes. Previous studies have proposed several chaotic particle swarm optimization (CPSO) methods to detect SNP barcodes for disease analysis (e.g., for breast cancer and chronic diseases). This work evaluated additional chaotic maps combined with the particle swarm optimization (PSO) method to detect SNP barcodes using a high-dimensional dataset, using nine chaotic maps to improve the PSO method's results and comparing the searching ability of all the CPSO methods. The XOR and ZZ disease models were used to compare all chaotic maps combined with the PSO method. Efficacy evaluations of the CPSO methods were based on statistical values from the chi-square test (χ2). The results showed that chaotic maps can improve the searching ability of the PSO method when the population is trapped in a local optimum. The minor allele frequency (MAF) analysis indicated that, among all CPSO methods, the highest χ2 values across the numbers of SNPs, sample sizes, and datasets were found with the Sinai chaotic map combined with the PSO method. We used simple linear regression of the gbest values over all generations to compare all the methods. The Sinai chaotic map combined with the PSO method provided the highest β values (β≥0.32 in the XOR disease model and β≥0.04 in the ZZ disease model) and significant p-values (p<0.001 in both the XOR and ZZ disease models). The Sinai chaotic map was found to effectively enhance the fitness values (χ2) of the PSO method, indicating that the Sinai chaotic map combined with PSO is more effective at detecting potential SNP barcodes in both the XOR and ZZ disease models. Copyright © 2016 Elsevier B.V. All rights reserved.
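
    The common thread of CPSO variants is replacing the pseudo-random coefficients in the PSO velocity update with a deterministic chaotic sequence. A heavily hedged sketch using the logistic map as a stand-in; the paper's Sinai map and parameter settings are not reproduced here:

        import numpy as np

        def logistic_map(x):
            """One step of the logistic map, a simple chaotic number generator."""
            return 4.0 * x * (1.0 - x)

        def cpso_step(pos, vel, pbest, gbest, chaos, w=0.7, c1=2.0, c2=2.0):
            """One PSO update whose random draws come from a chaotic sequence."""
            r1 = chaos = logistic_map(chaos)
            r2 = chaos = logistic_map(chaos)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            return pos + vel, vel, chaos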

  11. A comparative uncertainty study of the calibration of macrolide antibiotic reference standards using quantitative nuclear magnetic resonance and mass balance methods.

    PubMed

    Liu, Shu-Yu; Hu, Chang-Qin

    2007-10-17

    This study introduces a general quantitative nuclear magnetic resonance (qNMR) method for the calibration of reference standards of macrolide antibiotics. Several qNMR experimental conditions were optimized, including the relaxation delay, an important parameter for quantification. Three kinds of macrolide antibiotics were used to validate the accuracy of the qNMR method by comparison with results obtained by high performance liquid chromatography (HPLC). The purities of five common reference standards of macrolide antibiotics were measured by the 1H qNMR method and the mass balance method, respectively, and the results of the two methods were compared. qNMR is quick and simple to use. In new medicine research and development, qNMR provides a new and reliable method for purity analysis of reference standards.
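
    The purity computation behind 1H qNMR is a simple proportionality between normalized signal areas: P_s = (I_s/I_ref) x (N_ref/N_s) x (M_s/M_ref) x (m_ref/m_s) x P_ref, with I the integrated area, N the number of protons producing the signal, M the molar mass, and m the weighed mass. A worked sketch with illustrative numbers, not the macrolide data from the study:

        def qnmr_purity(i_s, i_ref, n_s, n_ref, m_s, m_ref, w_s, w_ref, p_ref):
            """1H qNMR purity from integrals (i), proton counts (n), molar
            masses (m), weighed masses (w), and internal-standard purity."""
            return (i_s / i_ref) * (n_ref / n_s) * (m_s / m_ref) * (w_ref / w_s) * p_ref

        # Illustrative: an analyte of molar mass 733.9 g/mol against a benzoic
        # acid internal standard (122.12 g/mol, certified purity 99.99%):
        print(qnmr_purity(i_s=0.95, i_ref=1.00, n_s=1, n_ref=1,
                          m_s=733.9, m_ref=122.12, w_s=12.0, w_ref=2.0,
                          p_ref=0.9999))   # ~0.95, i.e., about 95% purity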

  12. Numerical simulation for the air entrainment of aerated flow with an improved multiphase SPH model

    NASA Astrophysics Data System (ADS)

    Wan, Hang; Li, Ran; Pu, Xunchi; Zhang, Hongwei; Feng, Jingjie

    2017-11-01

    Aerated flow is a complex hydraulic phenomenon that exists widely in the field of environmental hydraulics. It is generally characterised by large deformation and violent fragmentation of the free surface. Compared to Euler methods (the volume of fluid (VOF) method or the rigid-lid hypothesis method), the existing single-phase Smoothed Particle Hydrodynamics (SPH) method has performed well in solving particle motion. A lack of research on interphase interaction and air concentration, however, has limited the application of SPH models. In our study, an improved multiphase SPH model is presented to simulate aerated flows. A drag force is included in the momentum equation to ensure the accuracy of the air-particle slip velocity. Furthermore, a calculation method for air concentration is developed to analyse the air entrainment characteristics. Two cases were used to simulate the hydraulic and air entrainment characteristics, and the simulation results agree well with the experimental results.

  13. How is the weather? Forecasting inpatient glycemic control

    PubMed Central

    Saulnier, George E; Castro, Janna C; Cook, Curtiss B; Thompson, Bithika M

    2017-01-01

    Aim: Apply methods of damped trend analysis to forecast inpatient glycemic control. Method: Observed and calculated point-of-care blood glucose data trends were determined over 62 weeks. Mean absolute percent error was used to calculate differences between observed and forecasted values. Comparisons were drawn between model results and linear regression forecasting. Results: The forecasted mean glucose trends observed during the first 24 and 48 weeks of projections compared favorably to the results provided by linear regression forecasting. However, in some scenarios, the damped trend method changed inferences compared with linear regression. In all scenarios, mean absolute percent error values remained below the 10% accepted by demand industries. Conclusion: Results indicate that forecasting methods historically applied within demand industries can project future inpatient glycemic control. Additional study is needed to determine if forecasting is useful in the analyses of other glucometric parameters and, if so, how to apply the techniques to quality improvement. PMID:29134125
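
    Damped trend analysis is exponential smoothing with a damping factor phi that flattens the trend over the forecast horizon, and mean absolute percent error is the usual accuracy check. A minimal sketch; the smoothing constants are illustrative, not the study's fitted values:

        import numpy as np

        def damped_trend_forecast(y, alpha=0.3, beta=0.1, phi=0.9, horizon=4):
            """Holt's linear method with a damped trend; h-step-ahead forecasts."""
            level, trend = y[0], y[1] - y[0]
            for obs in y[1:]:
                prev_level = level
                level = alpha * obs + (1 - alpha) * (level + phi * trend)
                trend = beta * (level - prev_level) + (1 - beta) * phi * trend
            return [level + sum(phi ** k for k in range(1, h + 1)) * trend
                    for h in range(1, horizon + 1)]

        def mape(actual, forecast):
            """Mean absolute percent error between observed and forecast values."""
            a, f = np.asarray(actual, float), np.asarray(forecast, float)
            return 100.0 * np.mean(np.abs((a - f) / a))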

  14. Adaptability of laser diffraction measurement technique in soil physics methodology

    NASA Astrophysics Data System (ADS)

    Barna, Gyöngyi; Szabó, József; Rajkai, Kálmán; Bakacsi, Zsófia; Koós, Sándor; László, Péter; Hauk, Gabriella; Makó, András

    2016-04-01

    There are intentions all around the world to harmonize soils' particle size distribution (PSD) data obtained by laser diffractometer measurements (LDM) with those of the sedimentation techniques (pipette or hydrometer methods). Unfortunately, depending on the applied methodology (e.g., type of pre-treatment, kind of dispersant, etc.), the PSDs of the sedimentation methods (which follow different standards) are themselves dissimilar and can hardly be harmonized with each other. A need therefore arose to build up a database containing PSD values measured by the pipette method according to the Hungarian standard (MSZ-08. 0205: 1978) and by LDM according to a widespread and widely used procedure. In this publication the first results of the statistical analysis of the new and growing PSD database are presented: 204 soil samples measured with the pipette method and LDM (Malvern Mastersizer 2000, HydroG dispersion unit) were compared. Applying the usual size limits in LDM, the clay fraction was strongly underestimated and the silt fraction overestimated compared to the pipette method. Consequently, soil texture classes determined from the LDM measurements differ significantly from the results of the pipette method. Based on previous surveys, and in order to optimize the agreement between the two datasets, the clay/silt boundary for LDM was changed. Comparing the PSD results of the pipette method with those of LDM, the modified size limits gave higher similarities for the clay and silt fractions. Extending the upper size limit of the clay fraction from 0.002 to 0.0066 mm, and correspondingly changing the lower size limit of the silt fraction, makes the pipette method and LDM more easily comparable. With the modified limit, higher correlations were also found between clay content and both water vapor adsorption and specific surface area. Texture classes were also found to be less dissimilar. The difference between the results of the two kinds of PSD measurement methods could be further reduced by taking into account other routinely analyzed soil parameters (e.g., pH(H2O), organic carbon and calcium carbonate content).

  15. Efficacy of Conventional Laser Irradiation Versus a New Method for Gingival Depigmentation (Sieve Method): A Clinical Trial.

    PubMed

    Houshmand, Behzad; Janbakhsh, Noushin; Khalilian, Fatemeh; Talebi Ardakani, Mohammad Reza

    2017-01-01

    Introduction: Diode laser irradiation has recently shown promising results for the treatment of gingival pigmentation. This study sought to compare the efficacy of 2 diode laser irradiation protocols for the treatment of gingival pigmentation, namely the conventional method and the sieve method. Methods: In this split-mouth clinical trial, 15 patients with gingival pigmentation were selected and their pigmentation intensity was determined using Dummett's oral pigmentation index (DOPI) in different dental regions. A diode laser (980 nm wavelength, 2 W power) was applied in a stipple pattern (sieve method) on one side of the mouth and conventionally on the other side. Level of pain and satisfaction with the outcome (for both patient and periodontist) were measured using a 0-10 visual analog scale (VAS) for both methods. Patients were followed up at 2 weeks, one month and 3 months. Pigmentation levels were compared using repeated measures analysis of variance (ANOVA). The differences in level of pain and satisfaction between the 2 groups were analyzed by t test and a generalized estimating equation model. Results: No significant differences were found between the 2 groups regarding the reduction of pigmentation scores or pain scores. The difference in satisfaction with the results over the three time points was significant in both the conventional and sieve methods for patients (P = 0.001) and periodontists (P = 0.015). Conclusion: Diode laser irradiation with both methods successfully eliminated gingival pigmentations. The sieve method was comparable to the conventional technique, offering no additional advantage.

  16. Different methods to analyze stepped wedge trial designs revealed different aspects of intervention effects.

    PubMed

    Twisk, J W R; Hoogendijk, E O; Zwijsen, S A; de Boer, M R

    2016-04-01

    Within epidemiology, a stepped wedge trial design (i.e., a one-way crossover trial in which several arms start the intervention at different time points) is increasingly popular as an alternative to a classical cluster randomized controlled trial. Despite this increasing popularity, there is a huge variation in the methods used to analyze data from a stepped wedge trial design. Four linear mixed models were used to analyze data from a stepped wedge trial design on two example data sets. The four methods were chosen because they have been (frequently) used in practice. Method 1 compares all the intervention measurements with the control measurements. Method 2 treats the intervention variable as a time-independent categorical variable comparing the different arms with each other. In method 3, the intervention variable is a time-dependent categorical variable comparing groups with different number of intervention measurements, whereas in method 4, the changes in the outcome variable between subsequent measurements are analyzed. Regarding the results in the first example data set, methods 1 and 3 showed a strong positive intervention effect, which disappeared after adjusting for time. Method 2 showed an inverse intervention effect, whereas method 4 did not show a significant effect at all. In the second example data set, the results were the opposite. Both methods 2 and 4 showed significant intervention effects, whereas the other two methods did not. For method 4, the intervention effect attenuated after adjustment for time. Different methods to analyze data from a stepped wedge trial design reveal different aspects of a possible intervention effect. The choice of a method partly depends on the type of the intervention and the possible time-dependent effect of the intervention. Furthermore, it is advised to combine the results of the different methods to obtain an interpretable overall result. Copyright © 2016 Elsevier Inc. All rights reserved.
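
    Method 1 above corresponds to a linear mixed model with a binary intervention indicator and a random intercept per cluster; adding calendar time as a covariate gives the time-adjusted version. A hedged sketch with statsmodels, where the column names are illustrative:

        import statsmodels.formula.api as smf

        # df: one row per measurement, with columns outcome, intervention (0/1),
        # time (measurement period), and cluster (trial arm or site id).
        def fit_method1(df, adjust_for_time=False):
            formula = "outcome ~ intervention"
            if adjust_for_time:
                formula += " + C(time)"          # time as a categorical covariate
            return smf.mixedlm(formula, df, groups=df["cluster"]).fit()

        # result = fit_method1(df, adjust_for_time=True); print(result.summary())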

  17. A feasibility study in adapting Shamos Bickel and Hodges Lehman estimator into T-Method for normalization

    NASA Astrophysics Data System (ADS)

    Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan

    2018-03-01

    The T-Method is one of the techniques governed under the Mahalanobis Taguchi System that was developed specifically for multivariate data prediction. Prediction using the T-Method is possible even with a very limited sample size. The user of the T-Method is required to clearly understand the population data trend, since the method does not consider the effect of outliers within the data. Outliers may cause apparent non-normality, and classical methods break down entirely in their presence. There exist robust parameter estimates that provide satisfactory results when the data contain outliers as well as when the data are free of them; among these are the robust estimates of location and scale called Shamos-Bickel (SB) and Hodges-Lehmann (HL), which can be used in place of the classical mean and standard deviation. Embedding these into the normalization stage of the T-Method may help enhance its accuracy and allows its robustness to be analysed. However, the higher-sample-size case study shows that the T-Method has the lowest average error percentage (3.09%) on data with extreme outliers, while HL and SB have the lowest error percentage (4.67%) on data without extreme outliers, with minimal error differences compared to the T-Method. The prediction error trend is reversed in the lower-sample-size case study. The results show that with a minimal sample size, where outliers pose little risk, the T-Method performs much better, and with a higher sample size containing extreme outliers the T-Method likewise shows better prediction than the alternatives. For the case studies conducted in this research, the normalization used by the T-Method shows satisfactory results, and it is not worthwhile to adapt HL and SB (or the ordinary mean and standard deviation) into it, since doing so changes the error percentages only minimally. Normalization using the T-Method is still considered to carry a lower risk with respect to the effect of outliers.
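
    The two robust estimators named above are compact to state: the Hodges-Lehmann location estimate is the median of all pairwise (Walsh) averages, and the Shamos scale estimate is the median of all pairwise absolute differences, multiplied by a constant that makes it consistent for the Normal distribution. A minimal sketch:

        import numpy as np
        from itertools import combinations

        def hodges_lehmann(x):
            """Median of Walsh averages; a robust estimate of location."""
            walsh = [(a + b) / 2.0 for a, b in combinations(x, 2)] + list(x)
            return float(np.median(walsh))

        def shamos(x):
            """Median of pairwise absolute differences; a robust scale estimate.
            The factor 1.048 gives consistency for Normal data."""
            diffs = [abs(a - b) for a, b in combinations(x, 2)]
            return 1.048 * float(np.median(diffs))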

  18. Real-time segmentation of burst suppression patterns in critical care EEG monitoring

    PubMed Central

    Westover, M. Brandon; Shafi, Mouhsin M.; Ching, ShiNung; Chemali, Jessica J.; Purdon, Patrick L.; Cash, Sydney S.; Brown, Emery N.

    2014-01-01

    Objective: Develop a real-time algorithm to automatically discriminate suppressions from non-suppressions (bursts) in electroencephalograms of critically ill adult patients. Methods: A real-time method for segmenting adult ICU EEG data into bursts and suppressions is presented based on thresholding local voltage variance. Results are validated against manual segmentations by two experienced human electroencephalographers. We compare inter-rater agreement between manual EEG segmentations by experts with inter-rater agreement between human and automatic segmentations, and investigate the robustness of segmentation quality to variations in algorithm parameter settings. We further compare the results of using these segmentations as input for calculating the burst suppression probability (BSP), a continuous measure of depth of suppression. Results: Automated segmentation was comparable to manual segmentation, i.e., algorithm-vs-human agreement was comparable to human-vs-human agreement, as judged by comparing raw EEG segmentations or the derived BSP signals. Results were robust to modest variations in algorithm parameter settings. Conclusions: Our automated method satisfactorily segments burst suppression data across a wide range of adult ICU EEG patterns. Performance is comparable to or exceeds that of manual segmentation by human electroencephalographers. Significance: Automated segmentation of burst suppression EEG patterns is an essential component of quantitative brain activity monitoring in critically ill and anesthetized adults. The segmentations produced by our algorithm provide a basis for accurate tracking of suppression depth. PMID:23891828
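
    The core of the algorithm described above is a threshold on a moving estimate of local signal variance, with the binary suppression train then smoothed into a burst suppression probability. A minimal sketch; window lengths and the threshold are illustrative, not the validated settings:

        import numpy as np

        def segment_suppressions(eeg, fs, win_s=0.5, thresh_uv2=10.0):
            """1 where local variance falls below threshold (suppression), else 0."""
            win = max(1, int(win_s * fs))
            kernel = np.ones(win) / win
            local_mean = np.convolve(eeg, kernel, mode="same")
            local_var = np.convolve((eeg - local_mean) ** 2, kernel, mode="same")
            return (local_var < thresh_uv2).astype(float)

        def burst_suppression_probability(z, fs, smooth_s=30.0):
            """Smooth the binary suppression train into a BSP in [0, 1]."""
            win = max(1, int(smooth_s * fs))
            return np.convolve(z, np.ones(win) / win, mode="same")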

  19. Heats of Segregation of BCC Binaries from Ab Initio and Quantum Approximate Calculations

    NASA Technical Reports Server (NTRS)

    Good, Brian S.

    2003-01-01

    We compare dilute-limit segregation energies for selected BCC transition metal binaries computed using ab initio and quantum approximate energy methods. Ab initio calculations are carried out using the CASTEP plane-wave pseudopotential computer code, while quantum approximate results are computed using the Bozzolo-Ferrante-Smith (BFS) method with the most recent parameters. Quantum approximate segregation energies are computed with and without atomistic relaxation. Results are discussed within the context of segregation models driven by strain and bond-breaking effects. We compare our results with full-potential quantum calculations and with available experimental results.

  20. Evaluation of Alternative Altitude Scaling Methods for Thermal Ice Protection System in NASA Icing Research Tunnel

    NASA Technical Reports Server (NTRS)

    Lee, Sam; Addy, Harold E. Jr.; Broeren, Andy P.; Orchard, David M.

    2017-01-01

    A test was conducted at the NASA Icing Research Tunnel (IRT) to evaluate altitude scaling methods for a thermal ice protection system. Two new scaling methods based on the Weber number were compared against a method based on the Reynolds number. The results generally agreed with a previous set of tests conducted in the NRCC Altitude Icing Wind Tunnel (AIWT), where the three scaling methods were also tested and compared along with reference (altitude) icing conditions. In those tests, the Weber number-based scaling methods yielded results much closer to those observed at the reference icing conditions than did the Reynolds number-based method. The test in the NASA IRT used a much larger, asymmetric airfoil with an ice protection system that more closely resembled designs used in commercial aircraft. Following the trends observed during the AIWT tests, the Weber number-based scaling methods resulted in smaller runback ice than the Reynolds number-based scaling, and the ice formed farther upstream. The results show that the new Weber number-based scaling methods, particularly the Weber number with water loading scaling, continue to show promise for ice protection system development and evaluation in atmospheric icing tunnels.

  1. Analytical investigation of different mathematical approaches utilizing manipulation of ratio spectra

    NASA Astrophysics Data System (ADS)

    Osman, Essam Eldin A.

    2018-01-01

    This work represents a comparative study of different approaches of manipulating ratio spectra, applied on a binary mixture of ciprofloxacin HCl and dexamethasone sodium phosphate co-formulated as ear drops. The proposed new spectrophotometric methods are: ratio difference spectrophotometric method (RDSM), amplitude center method (ACM), first derivative of the ratio spectra (1DD) and mean centering of ratio spectra (MCR). The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines. A comparative study was conducted between those methods regarding simplicity, limitations and sensitivity. The obtained results were statistically compared with those obtained from the reported HPLC method, showing no significant difference with respect to accuracy and precision.
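
    Of the four approaches, the first derivative of the ratio spectra (1DD) is the most compact to illustrate: dividing the mixture spectrum by the normalized spectrum of one component turns that component's contribution into a constant, which differentiation then removes. A minimal sketch with synthetic arrays, not the ear-drop data:

        import numpy as np

        def ratio_first_derivative(mixture, divisor_spectrum):
            """1DD: ratio of the mixture spectrum to one component's standard
            spectrum, followed by the first derivative over the wavelength axis."""
            ratio = mixture / divisor_spectrum   # divisor: standard of component B
            return np.gradient(ratio)            # derivative depends on A only

        # Amplitudes of the derivative at selected wavelengths are then read
        # against a calibration curve built from pure-component standards.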

  2. Comparison Of Methods Used In Cartography For The Skeletonisation Of Areal Objects

    NASA Astrophysics Data System (ADS)

    Szombara, Stanisław

    2015-12-01

    The article presents a method for comparing skeletonisation methods for areal objects. The skeleton of an areal object, being its linear representation, is used, among others, in cartographic visualisation. The method allows any skeletonisation methods to be compared in terms of, on the one hand, the deviations of the distance differences between the skeleton of the object and its border, and, on the other, the distortions introduced by skeletonisation. In the article, 5 methods were compared: Voronoi diagrams, densified Voronoi diagrams, constrained Delaunay triangulation, Straight Skeleton and Medial Axis (Transform). The results of the comparison are presented using the example of several areal objects. The comparison showed that in all the analysed objects the Medial Axis (Transform) gives the smallest distortion and deviation values, which allows us to recommend it.

  3. Compensation of kinematic geometric parameters error and comparative study of accuracy testing for robot

    NASA Astrophysics Data System (ADS)

    Du, Liang; Shi, Guangming; Guan, Weibin; Zhong, Yuansheng; Li, Jin

    2014-12-01

    Geometric error is the main error source for industrial robots, and it plays a significantly more important role than other error sources. A compensation model for kinematic geometric parameter error is proposed in this article. Many methods can be used to test robot accuracy; the question is how to compare them and determine which is better. In this article, a procedure is used to compare two methods for robot accuracy testing: a Laser Tracker System (LTS) and a Three Coordinate Measuring instrument (TCM) were used to test the robot accuracy according to the standard. Based on the compensation results, the better method, which noticeably improves the robot accuracy, is identified.

  4. Comparability among four invertebrate sampling methods and two multimetric indexes, Fountain Creek Basin, Colorado, 2010–2012

    USGS Publications Warehouse

    Bruce, James F.; Roberts, James J.; Zuellig, Robert E.

    2018-05-24

    The U.S. Geological Survey (USGS), in cooperation with Colorado Springs City Engineering and Colorado Springs Utilities, analyzed previously collected invertebrate data to determine the comparability among four sampling methods and two versions (2010 and 2017) of the Colorado Benthic Macroinvertebrate Multimetric Index (MMI). For this study, annual macroinvertebrate samples were collected concurrently (in space and time) at 15 USGS surface-water gaging stations in the Fountain Creek Basin from 2010 to 2012 using four sampling methods. The USGS monitoring project in the basin uses two of the methods and the Colorado Department of Public Health and Environment recommends the other two. These methods belong to two distinct sample types, one that targets single habitats and one that targets multiple habitats. The study results indicate that there are significant differences in MMI values obtained from the single-habitat and multihabitat sample types but methods from each program within each sample type produced comparable values. This study also determined that MMI values calculated by different versions of the Colorado Benthic Macroinvertebrate MMI are indistinguishable. This indicates that the Colorado Department of Public Health and Environment methods are comparable with the USGS monitoring project methods for single-habitat and multihabitat sample types. This report discusses the direct application of the study results to inform the revision of the existing USGS monitoring project in the Fountain Creek Basin.

  5. Natural Language Processing As an Alternative to Manual Reporting of Colonoscopy Quality Metrics

    PubMed Central

    RAJU, GOTTUMUKKALA S.; LUM, PHILLIP J.; SLACK, REBECCA; THIRUMURTHI, SELVI; LYNCH, PATRICK M.; MILLER, ETHAN; WESTON, BRIAN R.; DAVILA, MARTA L.; BHUTANI, MANOOP S.; SHAFI, MEHNAZ A.; BRESALIER, ROBERT S.; DEKOVICH, ALEXANDER A.; LEE, JEFFREY H.; GUHA, SUSHOVAN; PANDE, MALA; BLECHACZ, BORIS; RASHID, ASIF; ROUTBORT, MARK; SHUTTLESWORTH, GLADIS; MISHRA, LOPA; STROEHLEIN, JOHN R.; ROSS, WILLIAM A.

    2015-01-01

    BACKGROUND & AIMS: The adenoma detection rate (ADR) is a quality metric tied to interval colon cancer occurrence. However, manual extraction of data to calculate and track the ADR in clinical practice is labor-intensive. To overcome this difficulty, we developed a natural language processing (NLP) method to identify patients who underwent their first screening colonoscopy and to identify adenomas and sessile serrated adenomas (SSAs). We compared the NLP-generated results with those of manual data extraction to test the accuracy of NLP and to report colonoscopy quality metrics using NLP. METHODS: Identification of screening colonoscopies using NLP was compared with the manual method for 12,748 patients who underwent colonoscopies from July 2010 to February 2013. Identification of adenomas and SSAs using NLP was also compared with the manual method on 2259 matched patient records. Colonoscopy ADRs using these methods were generated for each physician. RESULTS: NLP correctly identified 91.3% of the screening examinations, whereas the manual method identified 87.8% of them. Both the manual method and NLP correctly identified examinations of patients with adenomas and SSAs in the matched records almost perfectly. NLP and the manual method produced comparable ADR values for each endoscopist as well as for the group as a whole. CONCLUSIONS: NLP can correctly identify screening colonoscopies, accurately identify adenomas and SSAs in a pathology database, and provide real-time quality metrics for colonoscopy. PMID:25910665
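
    The metric itself is simple once cases are identified: the ADR is the fraction of screening colonoscopies with at least one adenoma found, grouped by endoscopist. A minimal sketch with pandas; the column names are illustrative:

        import pandas as pd

        # One row per screening colonoscopy, as flagged by the NLP pipeline:
        df = pd.DataFrame({
            "endoscopist": ["A", "A", "A", "B", "B"],
            "adenoma_found": [True, False, True, False, True],
        })
        adr_by_physician = df.groupby("endoscopist")["adenoma_found"].mean()
        group_adr = df["adenoma_found"].mean()
        print(adr_by_physician, group_adr)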

  6. Infrared beak treatment method compared with conventional hot blade amputation in laying hens

    USDA-ARS?s Scientific Manuscript database

    Infrared lasers have been widely used for noninvasive surgical applications in human medicine and their results are reliable, predictable and reproducible. Infrared lasers have recently been designed with the expressed purpose of providing a less painful, more precise beak trimming method compared w...

  7. Comparative effectiveness research and its utility in In-clinic practice

    PubMed Central

    Dang, Amit; Kaur, Kirandeep

    2016-01-01

    One of the important components of patient-centered healthcare is comparative effectiveness research (CER), which aims at generating evidence from the real-life setting. The primary purpose of CER is to provide comparative information to the healthcare providers, patients, and policy makers about the standard of care available. This involves research on clinical questions unanswered by the explanatory trials during the regulatory approval process. Main methods of CER involve randomized controlled trials and observational methods. The limitations of these two methods have been overcome with the help of new statistical methods. After the evidence generation, it is equally important to communicate the results to all the interested organizations. CER is beginning to have its impact in the clinical practice as its results become part of the clinical practice guidelines. CER will have far-reaching scientific and financial impact. CER will make both the treating physician and the patient equally responsible for the treatment offered. PMID:26955571

  8. Comparative evaluation of ultrasound scanner accuracy in distance measurement

    NASA Astrophysics Data System (ADS)

    Branca, F. P.; Sciuto, S. A.; Scorza, A.

    2012-10-01

    The aim of the present study is to develop and compare two different automatic methods for accuracy evaluation in ultrasound phantom measurements on B-mode images: both give as a result the relative error e between distances measured by 14 brand-new ultrasound medical scanners and the nominal distances among nylon wires embedded in a reference test object. The first method is based on least squares estimation, while the second applies the mean value of the same distance evaluated at different locations in the ultrasound image (same distance method). Results for both are presented and explained.

  9. Evaluation of 3 dental unit waterline contamination testing methods

    PubMed Central

    Porteous, Nuala; Sun, Yuyu; Schoolfield, John

    2015-01-01

    Previous studies have found inconsistent results from testing methods used to measure heterotrophic plate count (HPC) bacteria in dental unit waterline (DUWL) samples. This study used 63 samples to compare the results obtained from an in-office chairside method and 2 currently used commercial laboratory HPC methods (Standard Methods 9215C and 9215E). The results suggest that the Standard Method 9215E is not suitable for application to DUWL quality monitoring, due to the detection of limited numbers of heterotrophic organisms at the required 35°C incubation temperature. The results also confirm that while the in-office chairside method is useful for DUWL quality monitoring, the Standard Method 9215C provided the most accurate results. PMID:25574718

  10. A flexible statistical model for alignment of label-free proteomics data--incorporating ion mobility and product ion information.

    PubMed

    Benjamin, Ashlee M; Thompson, J Will; Soderblom, Erik J; Geromanos, Scott J; Henao, Ricardo; Kraus, Virginia B; Moseley, M Arthur; Lucas, Joseph E

    2013-12-16

    The goal of many proteomics experiments is to determine the abundance of proteins in biological samples, and the variation thereof in various physiological conditions. High-throughput quantitative proteomics, specifically label-free LC-MS/MS, allows rapid measurement of thousands of proteins, enabling large-scale studies of various biological systems. Prior to analyzing these information-rich datasets, raw data must undergo several computational processing steps. We present a method to address one of the essential steps in proteomics data processing--the matching of peptide measurements across samples. We describe a novel method for label-free proteomics data alignment with the ability to incorporate previously unused aspects of the data, particularly ion mobility drift times and product ion information. We compare the results of our alignment method to PEPPeR and OpenMS, and compare alignment accuracy achieved by different versions of our method utilizing various data characteristics. Our method results in increased match recall rates and similar or improved mismatch rates compared to PEPPeR and OpenMS feature-based alignment. We also show that the inclusion of drift time and product ion information results in higher recall rates and more confident matches, without increases in error rates. Based on the results presented here, we argue that the incorporation of ion mobility drift time and product ion information are worthy pursuits. Alignment methods should be flexible enough to utilize all available data, particularly with recent advancements in experimental separation methods.

  11. VIDAS Listeria species Xpress (LSX).

    PubMed

    Johnson, Ronald; Mills, John

    2013-01-01

    The AOAC GovVal study compared the VIDAS Listeria species Xpress (LSX) to the Health Products and Food Branch MFHPB-30 reference method for detection of Listeria on stainless steel. The LSX method utilizes a novel and proprietary enrichment media, Listeria Xpress broth, enabling detection of Listeria species in environmental samples with the automated VIDAS in a minimum of 26 h. The LSX method also includes the use of the chromogenic media, chromID Ottaviani Agosti Agar (OAA) and chromID Lmono for confirmation of LSX presumptive results. In previous AOAC validation studies comparing VIDAS LSX to the U.S. Food and Drug Administration's Bacteriological Analytical Manual (FDA-BAM) and the U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS) reference methods, the LSX method was approved as AOAC Official Method 2010.02 for the detection of Listeria species in dairy products, vegetables, seafood, raw meats and poultry, and processed meats and poultry, and as AOAC Performance Tested Method 100501 in a variety of foods and on environmental surfaces. The GovVal comparative study included 20 replicate test portions each at two contamination levels for stainless steel where fractionally positive results (5-15 positive results/20 replicate portions tested) were obtained by at least one method at one level. Five uncontaminated controls were included. In the stainless steel artificially contaminated surface study, there were 25 confirmed positives by the VIDAS LSX assay and 22 confirmed positives by the standard culture methods. Chi-square analysis indicated no statistical differences between the VIDAS LSX method and the MFHPB-30 standard methods at the 5% level of significance. Confirmation of presumptive LSX results with the chromogenic OAA and Lmono media was shown to be equivalent to the appropriate reference method agars. The data in this study demonstrate that the VIDAS LSX method is an acceptable alternative method to the MFHPB-30 standard culture method for the detection of Listeria species on stainless steel.

  12. Phosphorus Concentrations in Stream-Water and Reference Samples - An Assessment of Laboratory Comparability

    USGS Publications Warehouse

    McHale, Michael R.; McChesney, Dennis

    2007-01-01

    In 2003, a study was conducted to evaluate the accuracy and precision of 10 laboratories that analyze water-quality samples for phosphorus concentrations in the Catskill Mountain region of New York State. Many environmental studies in this region rely on data from these different laboratories for water-quality analyses, and the data may be used in watershed modeling and management decisions. Therefore, it is important to determine whether the data reported by these laboratories are of comparable accuracy and precision. Each laboratory was sent 12 samples for triplicate analysis for total phosphorus, total dissolved phosphorus, and soluble reactive phosphorus. Eight of these laboratories reported results that met comparability criteria for all samples; the remaining two laboratories met comparability criteria for only about half of the analyses. Neither the analytical method used nor the sample concentration ranges appeared to affect the comparability of results. The laboratories whose results were comparable gave consistently comparable results throughout the concentration range analyzed, and the differences among methods did not diminish comparability. All laboratories had high data precision as indicated by sample triplicate results. In addition, the laboratories consistently reported total phosphorus values greater than total dissolved phosphorus values, and total dissolved phosphorus values greater than soluble reactive phosphorus values, as would be expected. The results of this study emphasize the importance of regular laboratory participation in sample-exchange programs.

  13. Adaptive finite element methods for two-dimensional problems in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1994-01-01

    Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods for validating the new methodology by computing demonstration problems and comparing the stress intensity factors to analytical results.

  14. Quantitative evaluation of the CEEM soil sampling intercomparison.

    PubMed

    Wagner, G; Lischer, P; Theocharopoulos, S; Muntau, H; Desaules, A; Quevauviller, P

    2001-01-08

    The aim of the CEEM soil project was to compare and to test the soil sampling and sample preparation guidelines used in the member states of the European Union and Switzerland for investigations of background and large-scale contamination of soils, soil monitoring, and environmental risk assessments. The results of the comparative evaluation of the sampling guidelines demonstrated that, in soil contamination studies carried out with different sampling strategies and methods, comparable results can hardly be expected. Therefore, a reference database (RDB) was established by the organisers, which acted as a basis for the quantitative comparison of the participants' results. The detected deviations were related to the methodological details of the individual strategies. The comparative evaluation concept consisted of three steps. The first step was a comparison of the participants' samples (which were both centrally and individually analysed) with each other, as well as with the reference database and some given soil quality standards, at the level of the concentrations present. The comparison was made using the example of the metals cadmium, copper, lead and zinc. As a second step, the absolute and relative deviations between the reference database and the participants' results (both centrally analysed under repeatability conditions) were calculated. The comparability of the samples with the RDB was categorised on four levels. Methods of exploratory statistical analysis were applied to estimate the differential method bias among the participants. The levels of error caused by sampling and sample preparation were compared with those caused by the analytical procedures. As a third step, the methodological profiles of the participants were compiled to concisely describe the different procedures used. These were related to the results to identify the main factors leading to their incomparability. The outcome of this evaluation process was a list of strategies and methods that are problematic with respect to comparability and should be standardised and/or specified in order to arrive at representative and comparable results in soil contamination studies throughout Europe. Pre-normative recommendations for harmonising European soil sampling guidelines and standard operating procedures have been outlined in Wagner G, Desaules A, Muntau H, Theocharopoulos S. Comparative evaluation of European methods for sampling and sample preparation of soils for inorganic analysis (CEEM Soil). Final report of the contract SMT4-CT96-2085. Sci Total Environ 2001;264:181-186; and in Wagner G, Desaules A, Muntau H, Theocharopoulos S, Quevauviller Ph. Suggestions for harmonising sampling and sample pre-treatment procedures and improving quality assurance in pre-analytical steps of soil contamination studies. Sci Total Environ 2001;264:103-118.

  15. Validation of odor concentration from mechanical-biological treatment piles using static chamber and wind tunnel with different wind speed values.

    PubMed

    Szyłak-Szydłowski, Mirosław

    2017-09-01

    The basic principle of odor sampling from surface sources is based primarily on the amount of air obtained from a specific area of the ground, which acts as a source of malodorous compounds. Wind tunnels and flux chambers are often the only available direct methods of evaluating odor fluxes from small area sources. There are currently no widely accepted chamber-based methods; thus, there is still a need for standardization of these methods to ensure accuracy and comparability. Previous research has established that there is a significant difference between the odor concentration values obtained using the Lindvall chamber and those obtained by a dynamic flow chamber. Thus, the present study compares sampling methods using a streaming chamber modeled on the Lindvall cover (using different wind speeds), a static chamber, and a direct sampling method without any screens. The volumes of the chambers in the current work were similar, ~0.08 m3. This study was conducted at a mechanical-biological treatment plant in Poland. Samples were taken from a pile covered by a membrane. Measured odor concentration values were between 2 and 150 ouE/m3. Results of the study demonstrated that both chambers can be used interchangeably under the following conditions: odor concentration below 60 ouE/m3, wind speed inside the Lindvall chamber below 0.2 m/sec, and flow below 0.011 m3/sec. Increasing the wind speed above this value results in significant differences between the results obtained by the two methods. In all experiments, the odor concentrations measured with the static chamber were consistently higher than those measured in the Lindvall chamber. Lastly, the experimental results were used to determine a model function of the relationship between wind speed and odor concentration. Several researchers have noted that there are no widely accepted chamber-based methods and that standardization is still needed to ensure full comparability. The present study compared the existing methods to improve the standardization of area source sampling. The practical usefulness of the results lies in proving that both examined chambers can be used interchangeably: statistically similar results were achieved while the odor concentration was below 60 ouE/m3 and the wind speed inside the Lindvall chamber was below 0.2 m/sec, whereas increasing the wind speed above these values results in differences between the methods. A model function of the relationship between wind speed and odor concentration was determined.

  16. Calibration of BCR-ABL1 mRNA quantification methods using genetic reference materials is a valid strategy to report results on the international scale.

    PubMed

    Mauté, Carole; Nibourel, Olivier; Réa, Delphine; Coiteux, Valérie; Grardel, Nathalie; Preudhomme, Claude; Cayuela, Jean-Michel

    2014-09-01

    Until recently, diagnostic laboratories that wanted to report on the international scale had limited options: they had to align their BCR-ABL1 quantification methods through a sample exchange with a reference laboratory to derive a conversion factor. However, commercial methods calibrated on the World Health Organization genetic reference panel are now available. We report results from a study designed to assess the comparability of the two alignment strategies. Sixty follow-up samples from chronic myeloid leukemia patients were included. Two commercial methods calibrated on the genetic reference panel were compared to two conversion factor methods routinely used at Saint-Louis Hospital, Paris, and at Lille University Hospital. Results were matched against concordance criteria (i.e., obtaining at least two of the three following landmarks: 50, 75 and 90% of the patient samples within a 2-fold, 3-fold and 5-fold range, respectively). Out of the 60 samples, more than 32 were available for comparison. Compared to the conversion factor method, the two commercial methods were within a 2-fold, 3-fold and 5-fold range for 53 and 59%, 89 and 88%, and 100 and 97%, respectively, of the samples analyzed at Saint-Louis. At Lille, the corresponding results were 45 and 85%, 76 and 97%, and 100 and 100%. Agreement between methods was observed in all four comparisons performed. Our data show that the two commercial methods selected are concordant with the conversion factor methods. This study provides proof of principle that alignment on the international scale using the genetic reference panel is compatible with the patient sample exchange procedure. We believe that these results are particularly important for diagnostic laboratories wishing to adopt commercial methods. Copyright © 2014 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
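
    Both alignment strategies end in the same arithmetic: a laboratory's raw BCR-ABL1/control ratio is multiplied by its conversion factor to express it on the international scale, and two methods are then judged by the share of paired samples falling within 2-, 3-, and 5-fold of each other. A minimal sketch with illustrative numbers:

        import numpy as np

        def to_international_scale(raw_ratio_percent, conversion_factor):
            """IS value = locally measured %ratio x laboratory conversion factor."""
            return raw_ratio_percent * conversion_factor

        def fold_concordance(a, b, folds=(2.0, 3.0, 5.0)):
            """Fraction of paired results within each fold range."""
            ratio = np.maximum(a / b, b / a)     # fold difference, always >= 1
            return {f: float(np.mean(ratio <= f)) for f in folds}

        a = np.array([0.12, 1.5, 0.034, 10.2])   # method 1, %IS
        b = np.array([0.10, 2.9, 0.030, 11.0])   # method 2, %IS
        print(fold_concordance(a, b))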

  17. Breast volume assessment: comparing five different techniques.

    PubMed

    Bulstrode, N; Bellamy, E; Shrotria, S

    2001-04-01

    Breast volume assessment is not routinely performed pre-operatively because as yet there is no accepted technique. A variety of methods have been published, but this is the first study to compare them. We compared volume measurements obtained from mammograms (previously validated against mastectomy specimens) with estimates of volume obtained from four other techniques: thermoplastic moulding, magnetic resonance imaging (MRI), Archimedes' principle, and anatomical measurements. We also assessed the acceptability of each method to the patient. Measurements were performed on 10 women, producing results for 20 breasts. We calculated regression lines relating the mammographic volume to each of the other four methods (all units in cc): (1) MRI, 379 + (0.75 × MRI) [r = 0.48]; (2) thermoplastic moulding, 132 + (1.46 × thermoplastic moulding) [r = 0.82]; (3) anatomical measurements, 168 + (1.55 × anatomical measurements) [r = 0.83]; (4) Archimedes' principle, 359 + (0.6 × Archimedes' principle) [r = 0.61]. The regression curves for the different techniques are variable, and it is difficult to reliably compare their results. A standard method of volume measurement should be used when comparing volumes before and after intervention or between individual patients, and it is unreliable to compare volume measurements made using different methods. Calculating breast volume from mammography has previously been compared with mastectomy samples and shown to be reasonably accurate. However, we feel thermoplastic moulding shows promise and should be investigated further, as it gives not only a volume assessment but also a three-dimensional impression of the breast shape, which may be valuable in assessing cosmesis following breast-conserving surgery.
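
    A worked example applying the regression lines quoted above, converting an estimate from one technique to the corresponding mammography-based volume; the 300 cc input is illustrative.

```python
# Coefficients are taken from the abstract; the input volume is illustrative.
def to_mammographic_volume(technique, volume_cc):
    coeffs = {
        "mri": (379, 0.75),
        "thermoplastic": (132, 1.46),
        "anatomical": (168, 1.55),
        "archimedes": (359, 0.60),
    }
    intercept, slope = coeffs[technique]
    return intercept + slope * volume_cc

print(to_mammographic_volume("thermoplastic", 300))  # 132 + 1.46*300 = 570 cc
```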

  18. Quantitative analysis of Si1-xGex alloy films by SIMS and XPS depth profiling using a reference material

    NASA Astrophysics Data System (ADS)

    Oh, Won Jin; Jang, Jong Shik; Lee, Youn Seoung; Kim, Ansoon; Kim, Kyung Joong

    2018-02-01

    Quantitative analysis methods for multi-element alloy films were compared. The atomic fractions of Si1-xGex alloy films were measured by depth profiling analysis with secondary ion mass spectrometry (SIMS) and X-ray photoelectron spectroscopy (XPS). An intensity-to-composition conversion factor (ICF) was used as a means of converting the measured intensities to compositions, instead of relative sensitivity factors. The ICFs were determined from a reference Si1-xGex alloy film by the conventional method, the average intensity (AI) method, and the total number counting (TNC) method. In the case of SIMS, although the atomic fractions measured with oxygen ion beams were not quantitative due to a severe matrix effect, the results obtained with a cesium ion beam were highly quantitative; the SIMS results using MCs2+ ions are comparable to the results by XPS. In the case of XPS, the measurement uncertainty was greatly reduced by the AI and TNC methods.
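
    The ICF idea can be sketched in a few lines: derive per-element factors from a reference film of known composition, then normalize factor-weighted intensities. The exact AI and TNC averaging schemes are not reproduced here; all numbers are illustrative.

```python
# Hedged sketch of intensity-to-composition conversion with an ICF derived
# from a reference film of known composition. Counts and fractions are
# illustrative placeholders.
import numpy as np

def derive_icf(ref_intensities, ref_fractions):
    """ICF_i chosen so that normalized ICF_i * I_i recovers the reference fractions."""
    return ref_fractions / ref_intensities

def to_fractions(intensities, icf):
    weighted = icf * intensities
    return weighted / weighted.sum()

ref_I = np.array([8.0e4, 2.0e4])   # Si, Ge signals from the reference (illustrative)
ref_x = np.array([0.70, 0.30])     # known reference atomic fractions
icf = derive_icf(ref_I, ref_x)
print(to_fractions(np.array([6.0e4, 4.0e4]), icf))
```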

  19. Multiparametric comparison of chromogenic-based culture methods used to assess the microbiological quality of drinking water and the mFC method combined with a molecular confirmation procedure.

    PubMed

    Maheux, Andrée F; Dion-Dupont, Vanessa; Bisson, Marc-Antoine; Bouchard, Sébastien; Jubinville, Éric; Nkuranga, Martine; Rodrigue, Lynda; Bergeron, Michel G; Rodriguez, Manuel J

    2015-03-01

    MI agar and Colilert®, as well as mFC agar combined with an Escherichia coli-specific molecular assay (mFC + E. coli rtPCR), were compared in terms of their sensitivity, ease of use, time to result, and affordability. The three methods yielded a positive E. coli signal for 11.5, 10.8, and 11.5%, respectively, of the 968 well water samples tested. One hundred and thirty-six (136) samples gave blue colonies on mFC agar and required confirmation; E. coli-specific rtPCR showed these to be false positives in 23.5% (32/136) of cases. In terms of ease of use, Colilert was the simplest method, while the MI method was comparable to other membrane filtration methods; the mFC + E. coli rtPCR assay, however, required highly trained employees for confirmation purposes. In terms of affordability, and considering the contamination rate of the well water samples tested, the Colilert method and the mFC + E. coli rtPCR assay were at least five times more costly than the MI agar method. Overall, compared with the other two methods tested, the MI agar method offers the most advantages for assessing drinking water quality.

  20. Gradient-based Electrical Properties Tomography (gEPT): a Robust Method for Mapping Electrical Properties of Biological Tissues In Vivo Using Magnetic Resonance Imaging

    PubMed Central

    Liu, Jiaen; Zhang, Xiaotong; Schmitter, Sebastian; Van de Moortele, Pierre-Francois; He, Bin

    2014-01-01

    Purpose: To develop high-resolution electrical properties tomography (EPT) methods and investigate a gradient-based EPT (gEPT) approach that aims to reconstruct the electrical properties (EP), including conductivity and permittivity, of an imaged sample from experimentally measured B1 maps, with improved boundary reconstruction and robustness against measurement noise. Theory and Methods: Using a multi-channel transmit/receive stripline head coil with acquired B1 maps for each coil element, and assuming a negligible Bz component compared with the transverse B1 components, a theory describing the relationship between the B1 field, the EP values, and their spatial gradients is proposed. The final EP images were obtained through spatial integration over the reconstructed EP gradient. Numerical simulation, physical phantom, and in vivo human experiments at 7 T were conducted to evaluate the performance of the proposed methods. Results: Reconstruction results were compared with target EP values in both simulations and phantom experiments, and human experimental results were compared with EP values in the literature. Satisfactory agreement was observed, with improved boundary reconstruction. Importantly, the proposed gEPT method proved more robust against noise than previously described non-gradient-based EPT approaches. Conclusion: The proposed gEPT approach holds promise to improve EP mapping quality by recovering boundary information and enhancing robustness against noise. PMID:25213371
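
    The final reconstruction step, spatial integration of the EP gradient, can be illustrated with a generic least-squares (Poisson) integration from gradients; the sketch below assumes periodic boundaries and forward-difference gradients and is not the authors' gEPT implementation.

```python
import numpy as np

def integrate_gradient(gx, gy):
    """Recover a scalar map u (up to a constant) from its gradient field by
    solving the discrete Poisson equation laplacian(u) = div(g) with FFTs
    (periodic boundaries assumed)."""
    ny, nx = gx.shape
    # Backward-difference divergence, matching a forward-difference gradient.
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    kx = 2.0 * np.pi * np.fft.fftfreq(nx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny)
    KX, KY = np.meshgrid(kx, ky)
    denom = (2.0 * np.cos(KX) - 2.0) + (2.0 * np.cos(KY) - 2.0)
    denom[0, 0] = 1.0                     # avoid division by zero at DC
    u_hat = np.fft.fft2(div) / denom
    u_hat[0, 0] = 0.0                     # the constant offset is arbitrary
    return np.real(np.fft.ifft2(u_hat))

# Self-check on a smooth periodic test map.
y, x = np.meshgrid(np.linspace(0, 2*np.pi, 64, endpoint=False),
                   np.linspace(0, 2*np.pi, 64, endpoint=False), indexing="ij")
u = np.sin(x) + np.cos(2*y)
gx = np.roll(u, -1, axis=1) - u           # forward differences
gy = np.roll(u, -1, axis=0) - u
u_rec = integrate_gradient(gx, gy)
print(np.max(np.abs((u - u.mean()) - (u_rec - u_rec.mean()))))  # ~ 0
```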

  1. Object-based classification of earthquake damage from high-resolution optical imagery using machine learning

    NASA Astrophysics Data System (ADS)

    Bialas, James; Oommen, Thomas; Rebbapragada, Umaa; Levin, Eugene

    2016-07-01

    Object-based approaches to the segmentation and classification of remotely sensed images yield more promising results than pixel-based approaches. However, the development of an object-based approach presents challenges in terms of algorithm selection and parameter tuning. Subjective methods are often used but yield less than optimal results. Objective methods are warranted, especially for rapid deployment in time-sensitive applications such as earthquake damage assessment. Herein, we used a systematic approach to evaluating object-based image segmentation and machine learning algorithms for the classification of earthquake damage in remotely sensed imagery. We tested a variety of algorithms and parameters on post-event aerial imagery for the 2011 earthquake in Christchurch, New Zealand. Results were compared against manually selected test cases representing different classes. In doing so, we can evaluate the effectiveness of the segmentation and classification of different classes and compare different levels of multistep image segmentation. Our classifier is compared against recent pixel-based and object-based classification studies of post-event imagery of earthquake damage. Our results show an improvement over both pixel-based and object-based methods for classifying earthquake damage in high-resolution, post-event imagery.

  2. Referenceless MR thermometry-a comparison of five methods.

    PubMed

    Zou, Chao; Tie, Changjun; Pan, Min; Wan, Qian; Liang, Changhong; Liu, Xin; Chung, Yiu-Cho

    2017-01-07

    Proton resonance frequency shift (PRFS) MR thermometry is commonly used to measure temperature in thermotherapy. The method requires a baseline temperature map and is therefore motion sensitive. Several referenceless MR thermometry methods were proposed to address this problem, but their performances have never been compared. This study compared the performance of five referenceless methods through simulation, heating of ex vivo tissues, and in vivo imaging of the brain and liver of healthy volunteers. Mean, standard deviation, root mean square, and the 2nd/98th percentiles of the error were used as performance metrics. Probability density functions (PDF) of the error distribution for these methods in the different tests were also compared. The results showed that the phase gradient method (PG) exhibited the largest error in all scenarios. The original method (ORG) and the complex field estimation method (CFE) had similar performance in all experiments. The phase finite difference method (PFD) and the near harmonic method (NH) were better than the other methods, especially in the lower signal-to-noise ratio (SNR) and fast-changing field cases. Except for PG, the PDFs of each method were very similar among the different experiments. Since phase unwrapping in ORG and NH is computationally demanding and subject to image SNR, PFD and CFE would be good choices, as they do not need phase unwrapping. The results here should facilitate the choice of appropriate referenceless methods in various MR thermometry applications.
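
    The performance metrics listed above are straightforward to compute; a minimal sketch on a placeholder temperature-error array:

```python
# Mean, standard deviation, root mean square, and 2nd/98th percentiles of a
# temperature-error map; the error values are illustrative placeholders.
import numpy as np

def error_metrics(err):
    return {
        "mean": np.mean(err),
        "std": np.std(err),
        "rms": np.sqrt(np.mean(err ** 2)),
        "p2": np.percentile(err, 2),
        "p98": np.percentile(err, 98),
    }

err = np.random.default_rng(0).normal(0.1, 0.5, size=10_000)  # degC, illustrative
print(error_metrics(err))
```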

  4. Underground Mining Method Selection Using WPM and PROMETHEE

    NASA Astrophysics Data System (ADS)

    Balusa, Bhanu Chander; Singam, Jayanthu

    2018-04-01

    The aim of this paper is to present a solution to the problem of selecting a suitable underground mining method for the mining industry. This is achieved using two multi-attribute decision-making techniques: the weighted product method (WPM) and the preference ranking organization method for enrichment evaluation (PROMETHEE). The analytic hierarchy process is used to calculate the weights of the attributes (i.e., the parameters used in this paper). Mining method selection depends on physical, mechanical, economic, and technical parameters. The WPM and PROMETHEE techniques can account for the relationships between the parameters and the mining methods, and they offer higher accuracy and faster computation compared with other decision-making techniques. The proposed techniques are applied to determine an effective mining method for a bauxite mine, and their results are compared with the methods used in earlier research works. The results show that the conventional cut-and-fill method is the most suitable mining method.
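
    The weighted product method itself reduces to a few lines; a minimal sketch with an invented decision matrix and AHP-style weights, assuming benefit-type criteria (higher is better):

```python
# WPM sketch: score_i = prod_j (x_ij / max_i x_ij) ** w_j. The decision
# matrix, weights, and method names are invented placeholders.
import numpy as np

def wpm_scores(decision_matrix, weights):
    norm = decision_matrix / decision_matrix.max(axis=0)
    return np.prod(norm ** weights, axis=1)

methods = ["cut and fill", "sublevel stoping", "block caving"]
X = np.array([[0.8, 0.7, 0.9],
              [0.6, 0.9, 0.5],
              [0.9, 0.4, 0.6]])           # rows: methods, cols: criteria
w = np.array([0.5, 0.3, 0.2])             # AHP-style weights, sum to 1
scores = wpm_scores(X, w)
print(methods[int(np.argmax(scores))])
```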

  5. Medical students’ attitudes and perspectives regarding novel computer-based practical spot tests compared to traditional practical spot tests

    PubMed Central

    Wijerathne, Buddhika; Rathnayake, Geetha

    2013-01-01

    Background: Most universities currently use traditional practical spot tests to evaluate students. However, traditional methods have several disadvantages, and computer-based examination techniques are becoming more popular among medical educators worldwide. Incorporating a computer interface into practical spot testing is therefore a novel concept that may minimize the shortcomings of traditional methods, and assessing students' attitudes and perspectives is vital to understanding how students perceive the novel method. Methods: One hundred and sixty medical students were randomly allocated to either a computer-based spot test (n=80) or a traditional spot test (n=80). The students rated their attitudes and perspectives regarding the spot test method soon after the test, and the results were described comparatively. Results: Students had more positive attitudes towards the computer-based practical spot test than towards the traditional spot test. Their recommendations to introduce the novel practical spot test method for future exams and at other universities were significantly higher. Conclusions: The computer-based practical spot test is viewed as more acceptable to students than the traditional spot test. PMID:26451213

  6. Statistical analysis of activation and reaction energies with quasi-variational coupled-cluster theory

    NASA Astrophysics Data System (ADS)

    Black, Joshua A.; Knowles, Peter J.

    2018-06-01

    The performance of quasi-variational coupled-cluster (QV) theory applied to the calculation of activation and reaction energies has been investigated. A statistical analysis of results obtained for six different sets of reactions has been carried out, and the results have been compared to those from standard single-reference methods. In general, the QV methods lead to increased activation energies and larger absolute reaction energies compared to those obtained with traditional coupled-cluster theory.

  7. Comparison of Nested PCR and RFLP for Identification and Classification of Malassezia Yeasts from Healthy Human Skin

    PubMed Central

    Oh, Byung Ho; Song, Young Chan; Choe, Yong Beom; Ahn, Kyu Joong

    2009-01-01

    Background: Malassezia yeasts are normal flora of the skin found in 75~98% of healthy subjects. Accurate identification of Malassezia species is important for determining the pathogenic role of these yeasts in various skin diseases such as Malassezia folliculitis, seborrheic dermatitis, and atopic dermatitis. Objective: This research was conducted to determine a more accurate and rapid molecular test for the identification and classification of Malassezia yeasts. Methods: We compared the accuracy and efficacy of restriction fragment length polymorphism (RFLP) and nested polymerase chain reaction (PCR) for the identification of Malassezia yeasts. Results: Although both methods produced rapid and reliable identifications, the nested PCR method was faster. However, in 7 cases (1.2%), the Malassezia species identified by the nested PCR differed from those identified by the RFLP method. Conclusion: Our results show that the RFLP method was relatively more accurate and reliable for the detection of various Malassezia species compared with the nested PCR; however, in terms of simplicity and time savings, the latter method has its own advantages. In addition, the 26S rDNA targeted in this study contains highly conserved base sequences together with enough sequence variation for inter-species identification of Malassezia yeasts. PMID:20523823

  8. Simplified welding distortion analysis for fillet welding using composite shell elements

    NASA Astrophysics Data System (ADS)

    Kim, Mingyu; Kang, Minseok; Chung, Hyun

    2015-09-01

    This paper presents a simplified welding distortion analysis method to predict the welding deformation of both the plate and the stiffener in fillet welds. Currently, methods based on equivalent thermal strain, such as Strain as Direct Boundary (SDB), are widely used because they predict welding deformation effectively. For fillet welding, however, those methods cannot represent the deformation of both members at once, since the temperature degree of freedom is shared at the intersection nodes of the two members. In this paper, we propose a new approach to simulate the deformation of both members. The method simulates fillet weld deformations by employing composite shell elements and using different thermal expansion coefficients through the thickness direction, with a fixed temperature at the intersection nodes. For verification, we compare results from experiments, 3D thermo-elastic-plastic analysis, the SDB method, and the proposed method. Compared with the experimental results, the proposed method effectively predicts welding deformation for fillet welds.

  9. A Comparative study of two RVE modelling methods for chopped carbon fiber SMC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Zhangxing; Li, Yi; Shao, Yimin

    To achieve vehicle light-weighting, chopped carbon fiber sheet molding compound (SMC) has been identified as a promising material to replace metals. However, there are no effective tools and methods to predict the mechanical properties of chopped carbon fiber SMC, due to the high complexity of its microstructure features and its anisotropic properties. In this paper, the Representative Volume Element (RVE) approach is used to model the SMC microstructure. Two modeling methods, a Voronoi diagram-based method and a chip packing method, are developed for predicting RVE material properties. The two methods are compared in terms of the predicted elastic modulus, and the predicted results are validated against Digital Image Correlation (DIC) tensile test results. Furthermore, the advantages and shortcomings of these two methods are discussed in terms of the required input information and their convenience of use in integrated processing-microstructure-property analysis.

  10. Construction of phylogenetic trees by kernel-based comparative analysis of metabolic networks.

    PubMed

    Oh, S June; Joung, Je-Gun; Chang, Jeong-Ho; Zhang, Byoung-Tak

    2006-06-06

    To infer the tree of life requires knowledge of the common characteristics of each species descended from a common ancestor as the measuring criteria and a method to calculate the distance between the resulting values of each measure. Conventional phylogenetic analysis based on genomic sequences provides information about the genetic relationships between different organisms. In contrast, comparative analysis of metabolic pathways in different organisms can yield insights into their functional relationships under different physiological conditions. However, evaluating the similarities or differences between metabolic networks is a computationally challenging problem, and systematic methods of doing this are desirable. Here we introduce a graph-kernel method for computing the similarity between metabolic networks in polynomial time, and use it to profile metabolic pathways and to construct phylogenetic trees. To compare the structures of metabolic networks in organisms, we adopted the exponential graph kernel, which is a kernel-based approach with a labeled graph that includes a label matrix and an adjacency matrix. To construct the phylogenetic trees, we used an unweighted pair-group method with arithmetic mean, i.e., a hierarchical clustering algorithm. We applied the kernel-based network profiling method in a comparative analysis of nine carbohydrate metabolic networks from 81 biological species encompassing Archaea, Eukaryota, and Eubacteria. The resulting phylogenetic hierarchies generally support the tripartite scheme of three domains rather than the two domains of prokaryotes and eukaryotes. By combining the kernel machines with metabolic information, the method infers the context of biosphere development that covers physiological events required for adaptation by genetic reconstruction. The results show that one may obtain a global view of the tree of life by comparing the metabolic pathway structures using meta-level information rather than sequence information. This method may yield further information about biological evolution, such as the history of horizontal transfer of each gene, by studying the detailed structure of the phylogenetic tree constructed by the kernel-based method.
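
    As a rough illustration of the kernel step (the paper's version also incorporates a label matrix for labeled graphs), an exponential graph kernel on the direct product of two toy adjacency matrices can be sketched as follows:

```python
# Hedged sketch: k(G1, G2) = sum of entries of expm(beta * A1 (x) A2), where
# (x) is the Kronecker (direct) product. The adjacency matrices are toy
# placeholders, not metabolic networks.
import numpy as np
from scipy.linalg import expm

def exponential_graph_kernel(a1, a2, beta=0.5):
    a_prod = np.kron(a1, a2)          # adjacency of the direct product graph
    return expm(beta * a_prod).sum()

a1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
a2 = np.array([[0, 1], [1, 0]], dtype=float)
print(exponential_graph_kernel(a1, a2))
```

    A matrix of such pairwise kernel values can then be converted to distances and fed to a hierarchical clustering routine (e.g., UPGMA-style average linkage) to build the tree, in the spirit of the pipeline described above.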

  11. [Comparison between rapid detection method of enzyme substrate technique and multiple-tube fermentation technique in water coliform bacteria detection].

    PubMed

    Sun, Zong-ke; Wu, Rong; Ding, Pei; Xue, Jin-Rong

    2006-07-01

    The aim was to compare the rapid enzyme substrate technique with the multiple-tube fermentation technique for detecting coliform bacteria in water. Inoculated and real water samples were used to compare the equivalence and false-positive rates of the two methods. The results demonstrate that the enzyme substrate technique is equivalent to the multiple-tube fermentation technique (P = 0.059), and the false-positive rates of the two methods show no statistically significant difference. It is suggested that the enzyme substrate technique can be used as a standard method for evaluating the microbiological safety of water.

  12. Comparison of the Calculations Results of Heat Exchange Between a Single-Family Building and the Ground Obtained with the Quasi-Stationary and 3-D Transient Models. Part 2: Intermittent and Reduced Heating Mode

    NASA Astrophysics Data System (ADS)

    Staszczuk, Anna

    2017-03-01

    The paper provides comparative results of calculations of heat exchange between the ground and typical residential buildings using simplified (quasi-stationary) and more accurate (transient, three-dimensional) methods. Characteristics such as the building's geometry, basement hollow, and the construction of ground-contacting assemblies were considered, including intermittent and reduced heating modes. The calculations with the simplified methods were conducted in accordance with the currently valid standard PN-EN ISO 13370:2008 (Thermal performance of buildings. Heat transfer via the ground. Calculation methods). Comparative estimates of the transient, 3-D heat flow were performed with the computer software WUFI®plus. The analysis quantifies the differences in heat exchange obtained using the more exact and the simplified methods.

  13. Representing ductile damage with the dual domain material point method

    DOE PAGES

    Long, C. C.; Zhang, D. Z.; Bronkhorst, C. A.; ...

    2015-12-14

    In this study, we incorporate a ductile damage material model into a computational framework based on the Dual Domain Material Point (DDMP) method. As an example, simulations of a flyer plate experiment involving ductile void growth and material failure are performed. The results are compared with experiments performed on high-purity tantalum. We also compare the numerical results obtained from the DDMP method with those obtained from the traditional Material Point Method (MPM). Effects of an overstress model, artificial viscosity, and physical viscosity are investigated. Our results show that a physical bulk viscosity and an overstress model are important in this impact and failure problem, while physical shear viscosity and artificial shock viscosity have negligible effects. A simple numerical procedure with guaranteed convergence is introduced to solve for the equilibrium plastic state from the ductile damage model.

  14. A Comparative Study of Measuring Devices Used During Space Shuttle Processing for Inside Diameters

    NASA Technical Reports Server (NTRS)

    Rodriguez, Antonio

    2006-01-01

    During Space Shuttle processing, discrepancies between vehicle dimensions and per-print dimensions determine whether a part should be refurbished, replaced, or accepted "as-is." The engineer's job is to address each discrepancy by choosing the most accurate procedure and tool available, sometimes with tolerances of ten-thousandths of an inch. Four methods of measurement are commonly used at the Kennedy Space Center: 1) caliper, 2) mold impressions, 3) optical comparator, and 4) dial bore gage. During a problem report evaluation, uncertainty arose between methods after measuring diameters with variations of up to 0.0004 inch. The results showed that computer-based measuring devices are extremely accurate, but when the human factor is involved in determining points of reference, the results may vary widely compared with more traditional methods.

  15. Inventory control of raw material using silver meal heuristic method in PR. Trubus Alami Malang

    NASA Astrophysics Data System (ADS)

    Ikasari, D. M.; Lestari, E. R.; Prastya, E.

    2018-03-01

    The purpose of this study was to compare the total inventory cost calculated using the method applied by PR. Trubus Alami with that of the Silver Meal Heuristic (SMH) method. The study began by forecasting the cigarette demand from July 2016 to June 2017 (48 weeks) using the additive decomposition forecasting method. Additive decomposition was used because it had the lowest Mean Absolute Deviation (MAD) and Mean Squared Deviation (MSD) compared with other methods such as multiplicative decomposition, moving average, single exponential smoothing, and double exponential smoothing. The forecasting results were then converted into raw material requirements, and the inventory cost was calculated using the SMH method. As expected, the results show that the order frequency obtained with the SMH method was smaller than with the method applied by PR. Trubus Alami, which affected the total inventory cost. Using the SMH method gave a 29.41% lower inventory cost, a difference of IDR 21,290,622. The findings therefore indicate that PR. Trubus Alami should apply the SMH method if the company wants to reduce its total inventory cost.
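
    The Silver-Meal rule itself is short: keep extending the current order one period at a time while the average cost per period covered keeps falling. A sketch with illustrative setup cost K, per-period holding cost h, and demands (not PR. Trubus Alami data):

```python
def silver_meal(demand, K, h):
    """Return [(start period, lot size), ...] using the Silver-Meal rule.
    K: setup cost per order; h: holding cost per unit per period (illustrative)."""
    orders, t = [], 0
    while t < len(demand):
        best_avg, horizon = float("inf"), 1
        for T in range(1, len(demand) - t + 1):
            holding = sum(h * (j - 1) * demand[t + j - 1] for j in range(1, T + 1))
            avg = (K + holding) / T        # average cost per period covered
            if avg > best_avg:
                break                      # average cost started rising: stop
            best_avg, horizon = avg, T
        orders.append((t, sum(demand[t:t + horizon])))
        t += horizon
    return orders

print(silver_meal([80, 100, 125, 100, 50], K=100.0, h=1.0))
```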

  16. Comparing In-Class and Out-of-Class Computer-Based Tests to Traditional Paper-and-Pencil Tests in Introductory Psychology Courses

    ERIC Educational Resources Information Center

    Frein, Scott T.

    2011-01-01

    This article describes three experiments comparing paper-and-pencil tests (PPTs) to computer-based tests (CBTs) in terms of test method preferences and student performance. In Experiment 1, students took tests using three methods: PPT in class, CBT in class, and CBT at the time and place of their choosing. Results indicate that test method did not…

  17. Lenke and King classification systems for adolescent idiopathic scoliosis: interobserver agreement and postoperative results

    PubMed Central

    Hosseinpour-Feizi, Hojjat; Soleimanpour, Jafar; Sales, Jafar Ganjpour; Arzroumchilar, Ali

    2011-01-01

    Purpose: The aim of this study was to investigate the interobserver agreement of the Lenke and King classifications for adolescent idiopathic scoliosis, and to compare the results of surgery performed based on classification of the scoliosis according to each of these systems. Methods: The study was conducted in Shohada Hospital in Tabriz, Iran, between 2009 and 2010. First, a reliability assessment was undertaken to assess interobserver agreement of the Lenke and King classifications. Second, the postoperative efficacy and safety of surgery performed based on the two classifications were compared. Kappa coefficients were calculated to assess agreement, and outcomes were compared using bivariate tests and repeated-measures analysis of variance. Results: Low to moderate interobserver agreement was observed for the King classification, whereas the Lenke classification yielded mostly high agreement coefficients. The outcome of surgery was not found to be substantially different between the two systems. Conclusion: Based on the results, the Lenke classification method seems advantageous, considering its greater detail on curvature in the different anatomical planes for describing the severity of scoliosis, its higher interobserver agreement scores, and its noninferior postoperative results compared with the King classification method. PMID:22267934
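
    The kappa coefficient used for the agreement assessment can be computed directly from a contingency table; the 2x2 table below is an invented placeholder, not study data:

```python
# Minimal Cohen's kappa: (observed agreement - chance agreement) / (1 - chance).
import numpy as np

def cohens_kappa(table):
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_observed = np.trace(table) / n
    p_expected = (table.sum(axis=1) * table.sum(axis=0)).sum() / n**2
    return (p_observed - p_expected) / (1.0 - p_expected)

print(cohens_kappa([[20, 5], [4, 21]]))  # illustrative table -> kappa = 0.64
```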

  18. Numerical method to compute acoustic scattering effect of a moving source.

    PubMed

    Song, Hao; Yi, Mingxu; Huang, Jun; Pan, Yalin; Liu, Dawei

    2016-01-01

    In this paper, the aerodynamic characteristics of a ducted tail rotor in hover are numerically studied using a CFD method. An analytical time-domain formulation based on the Ffowcs Williams-Hawkings (FW-H) equation is derived for the prediction of the acoustic velocity field and used as a Neumann boundary condition on a rigid scattering surface. In order to predict the aerodynamic noise, a hybrid method combining computational aeroacoustics with an acoustic thin-body boundary element method is proposed. The aerodynamic results and the calculated sound pressure levels (SPLs) are compared with a known method for validation. Simulation results show that the duct can change the SPL values and the sound directivity; compared with the isolated tail rotor, the SPLs of the ducted tail rotor are smaller at certain azimuths.

  19. Reply to “Ranking filter methods for concentrating pathogens in lake water”

    USGS Publications Warehouse

    Bushon, Rebecca N.; Francy, Donna S.; Gallardo, Vicente J.; Lindquist, H.D. Alan; Villegas, Eric N.; Ware, Michael W.

    2013-01-01

    Accurately comparing filtration methods is indeed difficult. Our method (1) and the method described by Borchardt et al. for determining recoveries are both acceptable approaches; however, each is designed to achieve a different research goal. Our study was designed to compare recoveries of multiple microorganisms in surface-water samples. Because, in practice, water-matrix effects come into play throughout filtration, concentration, and detection processes, we felt it important to incorporate those effects into the recovery results.

  20. A comparison of fitness-case sampling methods for genetic programming

    NASA Astrophysics Data System (ADS)

    Martínez, Yuliana; Naredo, Enrique; Trujillo, Leonardo; Legrand, Pierrick; López, Uriel

    2017-11-01

    Genetic programming (GP) is an evolutionary computation paradigm for automatic program induction. GP has produced impressive results but it still needs to overcome some practical limitations, particularly its high computational cost, overfitting and excessive code growth. Recently, many researchers have proposed fitness-case sampling methods to overcome some of these problems, with mixed results in several limited tests. This paper presents an extensive comparative study of four fitness-case sampling methods, namely: Interleaved Sampling, Random Interleaved Sampling, Lexicase Selection and Keep-Worst Interleaved Sampling. The algorithms are compared on 11 symbolic regression problems and 11 supervised classification problems, using 10 synthetic benchmarks and 12 real-world data-sets. They are evaluated based on test performance, overfitting and average program size, comparing them with a standard GP search. Comparisons are carried out using non-parametric multigroup tests and post hoc pairwise statistical tests. The experimental results suggest that fitness-case sampling methods are particularly useful for difficult real-world symbolic regression problems, improving performance, reducing overfitting and limiting code growth. On the other hand, it seems that fitness-case sampling cannot improve upon GP performance when considering supervised binary classification.
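
    One of the compared methods, Lexicase selection, is compact enough to sketch: candidates are filtered by fitness cases taken in random order, keeping only those that perform best on each case in turn. This is a generic version for minimizing per-case errors, with a toy error matrix, not the exact implementation benchmarked in the paper.

```python
import random

def lexicase_select(errors):
    """errors[i][j] = error of individual i on fitness case j (lower is better)."""
    candidates = list(range(len(errors)))
    cases = list(range(len(errors[0])))
    random.shuffle(cases)                  # cases considered in random order
    for case in cases:
        best = min(errors[i][case] for i in candidates)
        candidates = [i for i in candidates if errors[i][case] == best]
        if len(candidates) == 1:
            break
    return random.choice(candidates)

errors = [[0.1, 0.9, 0.3], [0.2, 0.1, 0.4], [0.1, 0.2, 0.8]]  # placeholder
print(lexicase_select(errors))
```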

  1. L-Phenylalanine concentration in blood of phenylketonuria patients: a modified enzyme colorimetric assay compared with amino acid analysis, tandem mass spectrometry, and HPLC methods.

    PubMed

    De Silva, Veronica; Oldham, Charlie D; May, Sheldon W

    2010-09-01

    Phenylketonuria (PKU) is an autosomal recessive disorder caused by an impaired conversion of L-phenylalanine (Phe) to L-tyrosine, typically resulting from a deficiency in activity of the hepatic and renal enzyme L-phenylalanine hydroxylase. The disease is characterized by an increased concentration of Phe and its metabolites in body fluids. A modified assay based on an enzymatic-colorimetric methodology was developed for measuring blood Phe levels in PKU patients; this method is designed for use with undeproteinized samples and avoids the use of solvents or amphiphilic agents. Thus, the method could be suitable for incorporation into a simple home-monitoring device. We report here a comparison of blood Phe concentrations in PKU patients measured in undeproteinized plasma using this enzyme colorimetric assay (ECA), with values determined by amino acid analysis (AAA) of deproteinized samples, and by HPLC and tandem mass spectrometry (MS/MS) analyses of dried blood spot (DBS) eluates. Pearson correlation coefficients of 0.951, 0.976, and 0.988 were obtained when AAA-measured Phe concentrations were compared with the ECA-, HPLC-, or MS/MS-measured values, respectively. A Bland-Altman analysis revealed that mean Phe concentrations determined using AAA were on average 65 μmol/L lower than values measured by our ECA. This difference may reflect the minimal manipulation of the patient sample compared with the AAA, HPLC, and MS/MS methods, which involve plasma deproteinization or DBS elution and derivatization. The results reported here confirm that Phe concentrations determined by our ECA method are comparable to those determined by other widely used methods over a broad range of plasma Phe concentrations.
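
    The Bland-Altman analysis referenced above reduces to the mean bias and 95% limits of agreement between paired measurements; a minimal sketch with invented paired Phe values (umol/L):

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias of a - b and 95% limits of agreement (bias +/- 1.96 SD)."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

aaa = np.array([310, 540, 880, 1150, 420])   # amino acid analysis (illustrative)
eca = np.array([380, 600, 930, 1230, 470])   # enzyme colorimetric assay (illustrative)
print(bland_altman(aaa, eca))
```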

  2. U.S. Geological Survey experience with the residual absolutes method

    NASA Astrophysics Data System (ADS)

    Worthington, E. William; Matzka, Jürgen

    2017-10-01

    The U.S. Geological Survey (USGS) Geomagnetism Program has developed and tested the residual method of absolutes, with the assistance of the Danish Technical University's (DTU) Geomagnetism Program. Three years of testing were performed at College Magnetic Observatory (CMO), Fairbanks, Alaska, to compare the residual method with the null method. Results show that the two methods compare very well with each other and both sets of baseline data were used to process the 2015 definitive data. The residual method will be implemented at the other USGS high-latitude geomagnetic observatories in the summer of 2017 and 2018.

  3. A modified form of conjugate gradient method for unconstrained optimization problems

    NASA Astrophysics Data System (ADS)

    Ghani, Nur Hamizah Abdul; Rivaie, Mohd.; Mamat, Mustafa

    2016-06-01

    Conjugate gradient (CG) methods have been recognized as an interesting technique for solving optimization problems, due to their numerical efficiency, simplicity, and low memory requirements. In this paper, we propose a new CG method based on the study of Rivaie et al. [7] (Comparative study of conjugate gradient coefficient for unconstrained optimization, Aus. J. Bas. Appl. Sci. 5 (2011) 947-951). We then show that our method satisfies the sufficient descent condition and converges globally with exact line search. Numerical results show that the proposed method is efficient on standard test problems compared with other existing CG methods.
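
    As a rough sketch of the family of methods being modified here, a generic nonlinear CG loop is shown below, with the classical Fletcher-Reeves coefficient standing in for the paper's proposed coefficient (which the abstract does not give) and a backtracking line search in place of the exact line search used in the analysis.

```python
import numpy as np

def cg_minimize(f, grad, x0, tol=1e-6, max_iter=2000):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:
            d = -g                          # restart if d is not a descent direction
        step = 1.0
        while f(x + step * d) > f(x) + 1e-4 * step * g.dot(d) and step > 1e-12:
            step *= 0.5                     # backtracking (Armijo) line search
        x_new = x + step * d
        g_new = grad(x_new)
        beta = g_new.dot(g_new) / g.dot(g)  # Fletcher-Reeves coefficient (stand-in)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
rosen_grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                                 200 * (x[1] - x[0]**2)])
print(cg_minimize(rosen, rosen_grad, np.array([-1.2, 1.0])))
```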

  4. Comparison of urine analysis using manual and sedimentation methods.

    PubMed

    Kurup, R; Leich, M

    2012-06-01

    Microscopic examination of urine sediment is an essential part of the evaluation of renal and urinary tract diseases. Traditionally, urine sediments are assessed by microscopic examination of centrifuged urine; however, the current method used by the Georgetown Public Hospital Corporation Medical Laboratory involves uncentrifuged urine. To support a high level of care, the results provided to the physician must be accurate and reliable for proper diagnosis. The aim of this study was to determine whether the centrifuged method is more clinically informative than the uncentrifuged method. In this study, the results obtained from the centrifuged and uncentrifuged methods were compared. A total of 167 urine samples were randomly collected and analysed during the period April-May 2010 at the Medical Laboratory, Georgetown Public Hospital Corporation. The urine samples were first analysed microscopically by the uncentrifuged method, and then by the centrifuged method. The results from both methods were recorded in a log book, entered into a database created in Microsoft Excel, analysed for differences and similarities, and further analysed in SPSS using Pearson's correlation. Both methods showed a good correlation between urinary sediments, with the exception of white blood cells. The centrifuged method had a slightly higher identification rate for all of the parameters. There is substantial agreement between the centrifuged and uncentrifuged methods; however, the uncentrifuged method provides a rapid turnaround time.

  5. Comparison of Basic and Ensemble Data Mining Methods in Predicting 5-Year Survival of Colorectal Cancer Patients.

    PubMed

    Pourhoseingholi, Mohamad Amin; Kheirian, Sedigheh; Zali, Mohammad Reza

    2017-12-01

    Colorectal cancer (CRC) is one of the most common malignancies and causes of cancer mortality worldwide. Given the importance of predicting the survival of CRC patients and the growing use of data mining methods, this study compares the performance of models for predicting 5-year survival of CRC patients using a variety of basic and ensemble data mining methods. The CRC dataset from the Shahid Beheshti University of Medical Sciences Research Center for Gastroenterology and Liver Diseases was used for the prediction and comparative study of the basic and ensemble data mining techniques. Feature selection methods were used to select predictor attributes for classification, and the WEKA toolkit and MedCalc software were used for creating and comparing the models, respectively. The results showed that the predictive performance of the developed models was altogether high (all greater than 90%). Overall, the performance of the ensemble models was higher than that of the basic classifiers, and the best result was achieved by the ensemble voting model in terms of area under the ROC curve (AUC = 0.96). AUC comparison showed that the ensemble voting method significantly outperformed all models except Random Forest (RF) and Bayesian Network (BN), considering their overlapping 95% confidence intervals. This result may indicate the high predictive power of these two methods, along with ensemble voting, for predicting 5-year survival of CRC patients.

  6. MicroSEQ® Salmonella spp. Detection Kit Using the Pathatrix® 10-Pooling Salmonella spp. Kit Linked Protocol Method Modification.

    PubMed

    Wall, Jason; Conrad, Rick; Latham, Kathy; Liu, Eric

    2014-03-01

    Real-time PCR methods for detecting foodborne pathogens offer the advantages of simplicity and quick time to results compared with traditional culture methods. The addition of a recirculating pooled immunomagnetic separation step prior to real-time PCR analysis increases processing output while reducing both cost and labor. This AOAC Research Institute method modification study validates the MicroSEQ® Salmonella spp. Detection Kit [AOAC Performance Tested Method (PTM) 031001] linked with the Pathatrix® 10-Pooling Salmonella spp. Kit (AOAC PTM 090203C) in diced tomatoes, chocolate, and deli ham. The Pathatrix 10-Pooling protocol represents a modification of the enrichment portion of the MicroSEQ Salmonella spp. method. The results of the method modification were compared with standard cultural reference methods for diced tomatoes, chocolate, and deli ham. All three matrixes were analyzed in a paired study design, and an additional set of chocolate test portions was analyzed using an alternative enrichment medium in an unpaired study design. For all matrixes tested, there were no statistically significant differences in the number of positive test portions detected by the modified candidate method compared with the appropriate reference method. The MicroSEQ Salmonella spp. protocol linked with the Pathatrix individual or 10-Pooling procedure demonstrated reliability as a rapid, simplified method for the preparation of samples and subsequent detection of Salmonella in diced tomatoes, chocolate, and deli ham.

  7. Comparison of Observational Methods and Their Relation to Ratings of Engagement in Young Children

    ERIC Educational Resources Information Center

    Wood, Brenna K.; Hojnoski, Robin L.; Laracy, Seth D.; Olson, Christopher L.

    2016-01-01

    Although, collectively, results of earlier direct observation studies suggest momentary time sampling (MTS) may offer certain technical advantages over whole-interval (WIR) and partial-interval (PIR) recording, no study has compared these methods for measuring engagement in young children in naturalistic environments. This study compared direct…

  8. Programmed Instruction in Secondary Education: A Meta-Analysis of the Impact of Class Size on Its Effectiveness.

    ERIC Educational Resources Information Center

    Boden, Andrea; Archwamety, Teara; McFarland, Max

    This review used meta-analytic techniques to integrate findings from 30 independent studies that compared programmed instruction to conventional methods of instruction at the secondary level. The meta-analysis demonstrated that programmed instruction resulted in higher achievement when compared to conventional methods of instruction (average…

  9. Comparing rapid methods for detecting Listeria in seafood and environmental samples using the most probable number (MPN) technique.

    PubMed

    Cruz, Cristina D; Win, Jessicah K; Chantarachoti, Jiraporn; Mutukumira, Anthony N; Fletcher, Graham C

    2012-02-15

    The standard Bacteriological Analytical Manual (BAM) protocol for detecting Listeria in food and on environmental surfaces takes about 96 h. Some studies indicate that rapid methods, which produce results within 48 h, may be as sensitive and accurate as the culture protocol. As they only give presence/absence results, it can be difficult to compare the accuracy of the results generated. We used the Most Probable Number (MPN) technique to evaluate the performance and detection limits of six rapid kits for detecting Listeria in seafood and on an environmental surface, compared with the standard protocol. Three seafood products and an environmental surface were inoculated with similar known cell concentrations of Listeria and analyzed according to the manufacturers' instructions. The MPN was estimated using the MPN-BAM spreadsheet. For the seafood products, no differences were observed among the rapid kits, and their efficiency was similar to the BAM method. On the environmental surface, the BAM protocol had a higher recovery rate (sensitivity) than any of the rapid kits tested. Clearview™, Reveal®, TECRA® and VIDAS® LDUO detected the cells but only at high concentrations (>10² CFU/10 cm²), and two kits (VIP™ and Petrifilm™) failed to detect 10⁴ CFU/10 cm². The MPN method was a useful tool for comparing the results generated by these presence/absence test kits. There remains a need to develop a rapid and sensitive method for detecting Listeria in environmental samples that performs as well as the BAM protocol, since none of the rapid tests used in this study achieved a satisfactory result.
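
    The MPN estimation underlying the comparison can be sketched as a maximum-likelihood solve for the concentration; the dilution volumes and tube counts below are a classic textbook series, not the study's data.

```python
# Maximum-likelihood MPN for a dilution series: solve the score equation
# sum_i [ p_i * v_i * e^(-L v_i) / (1 - e^(-L v_i)) - (n_i - p_i) * v_i ] = 0
# for the concentration L (organisms per mL).
import numpy as np
from scipy.optimize import brentq

def mpn_per_ml(volumes_ml, tubes, positives):
    v, n, p = (np.asarray(a, dtype=float) for a in (volumes_ml, tubes, positives))

    def score(lam):
        e = np.exp(-lam * v)
        return float((p * v * e / (1.0 - e) - (n - p) * v).sum())

    return brentq(score, 1e-9, 1e6)

# Classic 5-tube series at 10, 1 and 0.1 mL with 5/5, 3/5 and 1/5 positive tubes.
print(mpn_per_ml([10.0, 1.0, 0.1], [5, 5, 5], [5, 3, 1]))
```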

  10. Comparison of carbon and biomass estimation methods for European forests

    NASA Astrophysics Data System (ADS)

    Neumann, Mathias; Mues, Volker; Harkonen, Sanna; Mura, Matteo; Bouriaud, Olivier; Lang, Mait; Achten, Wouter; Thivolle-Cazat, Alain; Bronisz, Karol; Merganicova, Katarina; Decuyper, Mathieu; Alberdi, Iciar; Astrup, Rasmus; Schadauer, Klemens; Hasenauer, Hubert

    2015-04-01

    National and international reporting systems as well as research, enterprises and political stakeholders require information on carbon stocks of forests. Terrestrial assessment systems like forest inventory data in combination with carbon calculation methods are often used for this purpose. To assess the effect of the calculation method used, a comparative analysis was done using the carbon calculation methods from 13 European countries and the research plots from ICP Forests (International Co-operative Programme on Assessment and Monitoring of Air Pollution Effects on Forests). These methods are applied for five European tree species (Fagus sylvatica L., Quercus robur L., Betula pendula Roth, Picea abies (L.) Karst. and Pinus sylvestris L.) using a standardized theoretical tree dataset to avoid biases due to data collection and sample design. The carbon calculation methods use allometric biomass and volume functions, carbon and biomass expansion factors or a combination thereof. The results of the analysis show a high variation in the results for total tree carbon as well as for carbon in the single tree compartments. The same pattern is found when comparing the respective volume estimates. This is consistent for all five tree species and the variation remains when the results are grouped according to the European forest regions. Possible explanations are differences in the sample material used for the biomass models, the model variables or differences in the definition of tree compartments. The analysed carbon calculation methods have a strong effect on the results both for single trees and forest stands. To avoid misinterpretation the calculation method has to be chosen carefully along with quality checks and the calculation method needs consideration especially in comparative studies to avoid biased and misleading conclusions.
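
    A biomass-expansion-factor calculation of the kind being compared can be sketched in one function; all coefficients below are generic placeholders, not values from any national method.

```python
# Illustrative stem volume -> carbon conversion via wood density, a biomass
# expansion factor (BEF), and a carbon fraction. All defaults are placeholders.
def tree_carbon_kg(stem_volume_m3, wood_density_kg_m3=450.0,
                   bef=1.3, carbon_fraction=0.5):
    biomass = stem_volume_m3 * wood_density_kg_m3 * bef  # total dry biomass, kg
    return biomass * carbon_fraction                     # carbon, kg

print(tree_carbon_kg(0.8))  # 0.8 m3 stem volume -> 234 kg C with these placeholders
```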

  11. Emergent surgical airway: comparison of the three-step method and conventional cricothyroidotomy utilizing high-fidelity simulation.

    PubMed

    Quick, Jacob A; MacIntyre, Allan D; Barnes, Stephen L

    2014-02-01

    Surgical airway creation has a high potential for disaster. Conventional methods can be cumbersome and require special instruments. A simple method utilizing three steps and readily available equipment exists, but had yet to be adequately tested. Our objective was to compare conventional cricothyroidotomy with the three-step method utilizing high-fidelity simulation. Using a high-fidelity simulator, 12 experienced flight nurses and paramedics performed both methods after a didactic lecture, simulator briefing, and demonstration of each technique. Six participants performed the three-step method first, and the remaining 6 performed the conventional method first. Each participant was filmed and timed, and we analyzed the videos with respect to the number of hand repositions, number of airway instrumentations, and technical complications. Times to successful completion were measured from incision to balloon inflation. The three-step method was completed faster (52.1 s vs. 87.3 s; p = 0.007) than conventional surgical cricothyroidotomy. The two methods did not differ statistically in the number of hand movements (3.75 vs. 5.25; p = 0.12) or instrumentations of the airway (1.08 vs. 1.33; p = 0.07). The three-step method resulted in 100% successful airway placement on the first attempt, compared with 75% for the conventional method (p = 0.11). Technical complications occurred more often with the conventional method (33% vs. 0%; p = 0.05). The three-step method, using an elastic bougie with an endotracheal tube, required fewer total hand movements, took less time to complete, resulted in more successful airway placement, and had fewer complications compared with traditional cricothyroidotomy.

  12. A comparative analysis of spectral exponent estimation techniques for 1/f^β processes with applications to the analysis of stride interval time series.

    PubMed

    Schaefer, Alexander; Brach, Jennifer S; Perera, Subashan; Sejdić, Ervin

    2014-01-30

    The time evolution and complex interactions of many nonlinear systems, such as those in the human body, result in fractal types of parameter outcomes that exhibit self-similarity over long time scales, described by a power law in the frequency spectrum S(f) ∝ 1/f^β. The scaling exponent β is thus often interpreted as a "biomarker" of relative health and decline. This paper presents a thorough comparative numerical analysis of fractal characterization techniques, with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis complements previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. The results of our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Class-dependent methods proved to be unsuitable for physiological time series, and detrended fluctuation analysis, the most prevalent method in the literature, exhibited large estimation variances. The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method provides reasonable consistency and accuracy for characterizing these fractal time series.
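
    A minimal illustration of spectral-exponent estimation: synthesize a 1/f^β series and recover β from the slope of a log-log periodogram fit. This is a naive spectral method for illustration, not the averaged wavelet coefficient method the paper recommends.

```python
import numpy as np

def synth_one_over_f(n, beta, rng):
    """Synthesize a 1/f^beta series: amplitude ~ f^(-beta/2) => power ~ f^(-beta)."""
    freqs = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)
    phases = rng.uniform(0, 2 * np.pi, size=freqs.size)
    spectrum = amp * np.exp(1j * phases)
    return np.fft.irfft(spectrum, n=n)

def estimate_beta(x):
    """beta = negative slope of log(power) vs log(frequency)."""
    freqs = np.fft.rfftfreq(len(x), d=1.0)[1:]
    power = np.abs(np.fft.rfft(x))[1:] ** 2
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return -slope

rng = np.random.default_rng(1)
x = synth_one_over_f(4096, beta=1.0, rng=rng)
print(estimate_beta(x))                    # should be near 1.0
```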

  13. Integrated force method versus displacement method for finite element analysis

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Berke, L.; Gallagher, R. H.

    1991-01-01

    A novel formulation termed the integrated force method (IFM) has been developed in recent years for analyzing structures. In this method all the internal forces are taken as independent variables, and the system equilibrium equations (EEs) are integrated with the global compatibility conditions (CCs) to form the governing set of equations. In IFM the CCs are obtained from the strain formulation of St. Venant, and no choices of redundant load systems have to be made, in contrast to the standard force method (SFM). This property of IFM allows the generation of the governing equation to be automated straightforwardly, as it is in the popular stiffness method (SM). In this report IFM and SM are compared relative to the structure of their respective equations, their conditioning, required solution methods, overall computational requirements, and convergence properties as these factors influence the accuracy of the results. Overall, this new version of the force method produces more accurate results than the stiffness method for comparable computational cost.

  14. Integrated force method versus displacement method for finite element analysis

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Berke, Laszlo; Gallagher, Richard H.

    1990-01-01

    A novel formulation termed the integrated force method (IFM) has been developed in recent years for analyzing structures. In this method all the internal forces are taken as independent variables, and the system equilibrium equations (EE's) are integrated with the global compatibility conditions (CC's) to form the governing set of equations. In IFM the CC's are obtained from the strain formulation of St. Venant, and no choices of redundant load systems have to be made, in contrast to the standard force method (SFM). This property of IFM allows the generation of the governing equation to be automated straightforwardly, as it is in the popular stiffness method (SM). In this report IFM and SM are compared relative to the structure of their respective equations, their conditioning, required solution methods, overall computational requirements, and convergence properties as these factors influence the accuracy of the results. Overall, this new version of the force method produces more accurate results than the stiffness method for comparable computational cost.

  15. Comparisons of forecasting for hepatitis in Guangxi Province, China by using three neural networks models.

    PubMed

    Gan, Ruijing; Chen, Ni; Huang, Daizheng

    2016-01-01

    This study compares and evaluates predictions of hepatitis incidence in Guangxi Province, China, made using back-propagation neural networks optimized by a genetic algorithm (BPNN-GA), generalized regression neural networks (GRNN), and wavelet neural networks (WNN). To compare the forecasting results, data obtained from 2004 to 2013 and from 2014 were used as modeling and forecasting samples, respectively. The results show that when a small hepatitis data set has seasonal fluctuation, the prediction by BPNN-GA is better than that of the other two methods; the WNN method is suitable for predicting large hepatitis data sets with seasonal fluctuation, while the GRNN method suits data that increase steadily.

  16. Level-set-based reconstruction algorithm for EIT lung images: first clinical results.

    PubMed

    Rahmati, Peyman; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz; Adler, Andy

    2012-05-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure-volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM.

  17. Simultaneous analysis of 70 pesticides using HPLC/MS/MS: a comparison of the multiresidue method of Klein and Alder and the QuEChERS method.

    PubMed

    Riedel, Melanie; Speer, Karl; Stuke, Sven; Schmeer, Karl

    2010-01-01

    Since 2003, two new multipesticide residue methods for screening crops for a large number of pesticides, developed by Klein and Alder and Anastassiades et al. (Quick, Easy, Cheap, Effective, Rugged, and Safe; QuEChERS), have been published. Our intention was to compare these two important methods on the basis of their extraction efficiency, reproducibility, ruggedness, ease of use, and speed. In total, 70 pesticides belonging to numerous different substance classes were analyzed at two concentration levels by applying both methods, using five different representative matrixes. In the case of the QuEChERS method, the results of the three sample preparation steps (crude extract, extract after SPE, and extract after SPE and acidification) were compared with each other and with the results obtained with the Klein and Alder method. The extraction efficiencies of the QuEChERS method were far higher, and the sample preparation was much quicker when the last two steps were omitted. In most cases, the extraction efficiencies after the first step were approximately 100%. With extraction efficiencies of mostly less than 70%, the Klein and Alder method did not compare favorably. Some analytes caused problems during evaluation, mostly due to matrix influences.

  18. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using a solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issues of the adaptive finite element method, validating the application of the new methodology to fracture mechanics problems by computing demonstration problems and comparing the resulting stress intensity factors with analytical results.

  19. Comparison of the various methods for the direct calculation of the transmission functions of the 15-micron CO2 band with experimental data

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Various methods for calculating the transmission functions of the 15 micron CO2 band are described. The results of these methods are compared with laboratory measurements. It is found that program P4 provides the best agreement with experimental results on the average.

  20. Comparison of Nonoverlap Methods for Identifying Treatment Effect in Single-Subject Experimental Research

    ERIC Educational Resources Information Center

    Rakap, Salih; Snyder, Patricia; Pasia, Cathleen

    2014-01-01

    Debate is occurring about which result interpretation aids focused on examining the experimental effect should be used in single-subject experimental research. In this study, we examined seven nonoverlap methods and compared results using each method to judgments of two visual analysts. The data sources for the present study were 36 studies…

  1. Inverse and Control Problems in Electromagnetics

    DTIC Science & Technology

    1994-10-14

    monograph devoted to optimization methods in antenna theory which will be devoted, to a large extent, to the systematic exposition of the theory and...addresses the use of such methods for antenna arrays and compares these results with the well-known Dolph-Tchebyscheff result. We presented these results at...of Rome Laboratories could also be treated by multicriteria methods. It was agreed that we would collaborate on the application of the multicriteria

  2. Comparison of viscous-shock-layer solutions by time-asymptotic and steady-state methods. [flow distribution around a Jupiter entry probe

    NASA Technical Reports Server (NTRS)

    Gupta, R. N.; Moss, J. N.; Simmonds, A. L.

    1982-01-01

    Two flow-field codes employing the time- and space-marching numerical techniques were evaluated. Both methods were used to analyze the flow field around a massively blown Jupiter entry probe under perfect-gas conditions. In order to obtain a direct point-by-point comparison, the computations were made by using identical grids and turbulence models. For the same degree of accuracy, the space-marching scheme takes much less time than the time-marching method and would appear to provide accurate results for problems with nonequilibrium chemistry, free from the effect of local differences in time on the final solution that is inherent in time-marching methods. With the time-marching method, however, solutions are obtainable for realistic entry probe shapes with massive or uniform surface blowing rates, whereas with the space-marching technique it is difficult to obtain converged solutions for such flow conditions. The choice of the numerical method is, therefore, problem dependent. Both methods give equally good results for the cases where results are compared with experimental data.

  3. Advanced Guidance and Control Methods for Reusable Launch Vehicles: Test Results

    NASA Technical Reports Server (NTRS)

    Hanson, John M.; Jones, Robert E.; Krupp, Don R.; Fogle, Frank R. (Technical Monitor)

    2002-01-01

    There are a number of approaches to advanced guidance and control (AG&C) that have the potential for achieving the goals of significantly increasing reusable launch vehicle (RLV) safety/reliability and reducing the cost. In this paper, we examine some of these methods and compare the results. We briefly introduce the various methods under test, list the test cases used to demonstrate that the desired results are achieved, show an automated test scoring method that greatly reduces the evaluation effort required, and display results of the tests. Results are shown for the algorithms that have entered testing so far.

  4. Nonlinear least squares regression for single image scanning electron microscope signal-to-noise ratio estimation.

    PubMed

    Sim, K S; Norhisham, S

    2016-11-01

    A new method based on nonlinear least squares regression (NLLSR) is formulated to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. The estimation of the SNR value based on the NLLSR method is compared with three existing methods: nearest neighbourhood, first-order interpolation, and the combination of both nearest neighbourhood and first-order interpolation. Samples of SEM images with different textures, contrasts and edges were used to test the performance of the NLLSR method in estimating the SNR values of the SEM images. It is shown that the NLLSR method is able to produce better estimation accuracy than the other three existing methods. According to the SNR results obtained from the experiment, the NLLSR method produces an SNR error difference of less than approximately 1% relative to the other three existing methods.
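
    Single-image SNR estimators of this family work from the image autocorrelation, where white noise contributes only to the zero-lag term. The sketch below is our illustration of that general recipe, not the paper's code: a Gaussian is fitted to the non-zero lags by nonlinear least squares and extrapolated back to lag 0 to separate signal variance from noise variance; the lag range and the Gaussian model are assumptions.

      import numpy as np
      from scipy.optimize import curve_fit

      def estimate_snr(image):
          """Autocorrelation-based SNR estimate for a SEM image (sketch)."""
          x = image.astype(float) - image.mean()
          # average horizontal autocorrelation over scan lines, small lags only
          lags = np.arange(0, 6)
          acf = np.array([np.mean(x[:, :x.shape[1] - l] * x[:, l:]) for l in lags])
          gauss = lambda l, a, s: a * np.exp(-(l**2) / (2 * s**2))
          # nonlinear least squares fit to the noise-free lags (l >= 1)
          (a, s), _ = curve_fit(gauss, lags[1:], acf[1:], p0=(acf[1], 2.0))
          signal_var = gauss(0.0, a, s)              # extrapolated noise-free peak
          noise_var = max(acf[0] - signal_var, 1e-12)
          return signal_var / noise_var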

  5. Approximation of the exponential integral (well function) using sampling methods

    NASA Astrophysics Data System (ADS)

    Baalousha, Husam Musa

    2015-04-01

    Exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximate the exponential integral. The new approach is based on sampling methods. Three different sampling methods have been used to approximate the function: Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH). Different argument values, covering a wide range, have been used. The results of the sampling methods were compared with results obtained by Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
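
    As a hedged illustration of how a stratified sampling method can approximate the well function (our sketch, not the paper's algorithm), the integral can be rewritten as an expectation over an Exp(1) variable and estimated with one Latin Hypercube sample per stratum, then checked against SciPy's exp1 benchmark:

      import numpy as np
      from scipy.special import exp1

      def well_function_lhs(u, n=1000, seed=0):
          """LHS estimate of W(u) = E1(u) = integral_u^inf exp(-t)/t dt.

          Substituting t = u + s with s ~ Exp(1) gives
          W(u) = exp(-u) * E[1 / (u + s)], estimated here by one stratified
          sample per stratum mapped through the Exp(1) inverse CDF.
          """
          rng = np.random.default_rng(seed)
          strata = (np.arange(n) + rng.uniform(size=n)) / n   # LHS on (0, 1)
          s = -np.log1p(-strata)                              # Exp(1) inverse CDF
          return np.exp(-u) * np.mean(1.0 / (u + s))

      u = 0.5
      print(well_function_lhs(u), exp1(u))   # estimate vs. benchmark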

  6. Comparing High Definition Live Interactive and Store-and-Forward Consultations to In-Person Examinations

    PubMed Central

    Locatis, Craig; Burges, Gene; Maisiak, Richard; Liu, Wei-Li; Ackerman, Michael

    2017-01-01

    Background: There is little teledermatology research directly comparing remote methods, even less research in which agreement between two in-person dermatologists provides a baseline for comparing remote methods, and no research using high definition video as a live interactive method. Objective: To compare in-person consultations with store-and-forward and live interactive methods, the latter having two levels of image quality. Methods: A controlled study was conducted where patients were examined in person, by high definition video, and by store-and-forward methods. The order in which patients experienced the methods, and the residents assigned to the methods, rotated, although an attending always saw patients in person. The type of high definition video employed, lower resolution compressed or higher resolution uncompressed, was alternated between clinics. Primary and differential diagnoses, biopsy recommendations, and diagnostic and biopsy confidence ratings were recorded. Results: Concordance and confidence were significantly better for in-person versus remote methods, and biopsy recommendations were lower. Store-and-forward and higher resolution uncompressed video results were similar and better than those for lower resolution compressed video. Limitations: Dermatology residents took the store-and-forward photos, and their quality was likely superior to those normally taken in practice. There were variations in expertise between the attending and second- and third-year residents. Conclusion: The superiority of in-person consultations suggests that the tendencies to order more biopsies or still see patients in person are often justified in teledermatology and that high resolution uncompressed video can close the resolution gap between store-and-forward and live interactive methods. PMID:27705083

  7. Using Variable-Length Aligned Fragment Pairs and an Improved Transition Function for Flexible Protein Structure Alignment.

    PubMed

    Cao, Hu; Lu, Yonggang

    2017-01-01

    With the rapid growth of known protein 3D structures in number, how to efficiently compare protein structures becomes an essential and challenging problem in computational structural biology. At present, many protein structure alignment methods have been developed. Among all these methods, flexible structure alignment methods are shown to be superior to rigid structure alignment methods in identifying structure similarities between proteins, which have gone through conformational changes. It is also found that the methods based on aligned fragment pairs (AFPs) have a special advantage over other approaches in balancing global structure similarities and local structure similarities. Accordingly, we propose a new flexible protein structure alignment method based on variable-length AFPs. Compared with other methods, the proposed method possesses three main advantages. First, it is based on variable-length AFPs. The length of each AFP is separately determined to maximally represent a local similar structure fragment, which reduces the number of AFPs. Second, it uses local coordinate systems, which simplify the computation at each step of the expansion of AFPs during the AFP identification. Third, it decreases the number of twists by rewarding the situation where nonconsecutive AFPs share the same transformation in the alignment, which is realized by dynamic programming with an improved transition function. The experimental data show that compared with FlexProt, FATCAT, and FlexSnap, the proposed method can achieve comparable results by introducing fewer twists. Meanwhile, it can generate results similar to those of the FATCAT method in much less running time due to the reduced number of AFPs.

  8. Application of reiteration of Hankel singular value decomposition in quality control

    NASA Astrophysics Data System (ADS)

    Staniszewski, Michał; Skorupa, Agnieszka; Boguszewicz, Łukasz; Michalczuk, Agnieszka; Wereszczyński, Kamil; Wicher, Magdalena; Konopka, Marek; Sokół, Maria; Polański, Andrzej

    2017-07-01

    Medical centres are obliged to store past medical records, including the results of quality assurance (QA) tests of the medical equipment, which is especially useful in checking the reproducibility of medical devices and procedures. Analysis of multivariate time series is an important part of quality control of NMR data. In this work we propose an anomaly detection tool based on the Reiteration of Hankel Singular Value Decomposition method. The presented method was compared with external software, and the authors obtained comparable results.
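
    The core of a Hankel-SVD scheme can be sketched in a few lines: embed the series in a trajectory (Hankel) matrix, keep the dominant singular directions, and score each window by its reconstruction residual. The window length and rank below are illustrative assumptions; this is not the authors' reiteration procedure:

      import numpy as np

      def hankel_anomaly_score(series, window=20, rank=3):
          """Score each time window by its residual after low-rank Hankel SVD."""
          n = len(series) - window + 1
          H = np.column_stack([series[i:i + window] for i in range(n)])
          U, s, Vt = np.linalg.svd(H, full_matrices=False)
          H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]     # rank-r reconstruction
          return np.linalg.norm(H - H_low, axis=0)         # large values flag anomalies

      t = np.sin(np.linspace(0, 20, 300)); t[150] += 5.0   # injected anomaly
      print(np.argmax(hankel_anomaly_score(t)))            # column index near the spike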

  9. Comparison of epifluorescent viable bacterial count methods

    NASA Technical Reports Server (NTRS)

    Rodgers, E. B.; Huff, T. L.

    1992-01-01

    Two methods, the 2-(4-iodophenyl)-3-(4-nitrophenyl)-5-phenyltetrazolium chloride (INT) method and the direct viable count (DVC), were tested and compared for their efficiency in determining the viability of bacterial populations. Use of the INT method results in the formation of a dark spot within each respiring cell. The DVC method results in elongation or swelling of growing cells that are rendered incapable of cell division. Although both methods are subjective and can result in false positive results, the DVC method is best suited to analysis of waters in which the number of different types of organisms present in the same sample is assumed to be small, such as processed waters. The advantages and disadvantages of each method are discussed.

  10. Comparing Methods for Estimating Direct Costs of Adverse Drug Events.

    PubMed

    Gyllensten, Hanna; Jönsson, Anna K; Hakkarainen, Katja M; Svensson, Staffan; Hägg, Staffan; Rehnberg, Clas

    2017-12-01

    To estimate how direct health care costs resulting from adverse drug events (ADEs) and cost distribution are affected by methodological decisions regarding identification of ADEs, assigning relevant resource use to ADEs, and estimating costs for the assigned resources. ADEs were identified from medical records and diagnostic codes for a random sample of 4970 Swedish adults during a 3-month study period in 2008 and were assessed for causality. Results were compared for five cost evaluation methods, including different methods for identifying ADEs, assigning resource use to ADEs, and for estimating costs for the assigned resources (resource use method, proportion of registered cost method, unit cost method, diagnostic code method, and main diagnosis method). Different levels of causality for ADEs and ADEs' contribution to health care resource use were considered. Using the five methods, the maximum estimated overall direct health care costs resulting from ADEs ranged from Sk10,000 (Sk = Swedish krona; ~€1,500 in 2016 values) using the diagnostic code method to more than Sk3,000,000 (~€414,000) using the unit cost method in our study population. The most conservative definitions for ADEs' contribution to health care resource use and the causality of ADEs resulted in average costs per patient ranging from Sk0 using the diagnostic code method to Sk4066 (~€500) using the unit cost method. The estimated costs resulting from ADEs varied considerably depending on the methodological choices. The results indicate that costs for ADEs need to be identified through medical record review and by using detailed unit cost data.

  11. Acoustic pressure measurement of pulsed ultrasound using acousto-optic diffraction

    NASA Astrophysics Data System (ADS)

    Jia, Lecheng; Chen, Shili; Xue, Bin; Wu, Hanzhong; Zhang, Kai; Yang, Xiaoxia; Zeng, Zhoumo

    2018-01-01

    Compared with continuous-wave ultrasound, pulsed ultrasound has been widely used in ultrasound imaging. The aim of this work is to show the applicability of acousto-optic diffraction to pulsed ultrasound transducers. In this paper, the acoustic pressure of two ultrasound transducers is measured based on Raman-Nath diffraction. The frequencies of the transducers are 5 MHz and 10 MHz. The pulse-echo method and simulation data are used to evaluate the results. The results show that the proposed method is capable of measuring the absolute sound pressure. We obtain a sectional view of the acoustic pressure using a displacement platform as an auxiliary stage. Compared with traditional sound pressure measurement methods, the proposed method is non-invasive with high sensitivity and spatial resolution.
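
    In the Raman-Nath regime the intensity of diffraction order n scales as Jn(v)^2, where the Raman-Nath parameter v is proportional to the peak acoustic pressure. A measured first-to-zeroth-order intensity ratio can therefore be inverted numerically for v, as in this sketch; the pressure proportionality constant (set by the optical wavelength, interaction length and photoelastic coefficient) must come from the experiment:

      import numpy as np
      from scipy.special import jv
      from scipy.optimize import brentq

      def raman_nath_parameter(I1_over_I0):
          """Invert the intensity ratio I1/I0 = [J1(v)/J0(v)]^2 for v (sketch)."""
          f = lambda v: (jv(1, v) / jv(0, v))**2 - I1_over_I0
          return brentq(f, 1e-6, 2.0)   # search below the first zero of J0 (~2.405)

      print(raman_nath_parameter(0.05))   # v for a 5% first/zeroth-order ratio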

  12. Investigation of earthquake factor for optimum tuned mass dampers

    NASA Astrophysics Data System (ADS)

    Nigdeli, Sinan Melih; Bekdaş, Gebrail

    2012-09-01

    In this study the optimum parameters of tuned mass dampers (TMD) are investigated under earthquake excitations. An optimization strategy was carried out by using the Harmony Search (HS) algorithm. HS is a metaheuristic method inspired by musical performance. In addition to the HS algorithm, the results of the optimization objective are compared with the results of another documented method, and the inferior results are eliminated; in that way, the best optimum results are obtained. During the optimization, the optimum TMD parameters were searched for single degree of freedom (SDOF) structure models with different periods. The optimization was done for different earthquakes separately and the results were compared.
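
    A bare-bones Harmony Search loop looks as follows; for the TMD application the design vector would hold, for example, the damper frequency and damping ratios, and the objective would be the peak structural response under a given earthquake record. Everything here (parameter values, the quadratic test objective) is an illustrative assumption, not the study's setup:

      import numpy as np

      def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3,
                         bw=0.05, iters=3000, seed=1):
          """Bare-bones Harmony Search minimizer (illustrative parameters)."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds, dtype=float).T
          d = len(lo)
          memory = rng.uniform(lo, hi, size=(hms, d))      # harmony memory
          scores = np.array([objective(h) for h in memory])
          for _ in range(iters):
              # memory consideration: draw each variable from a random harmony
              pick = memory[rng.integers(hms, size=d), np.arange(d)]
              new = np.where(rng.uniform(size=d) < hmcr, pick, rng.uniform(lo, hi))
              # pitch adjustment: small random perturbation with probability par
              adjust = rng.uniform(size=d) < par
              new = np.clip(new + adjust * rng.uniform(-bw, bw, d) * (hi - lo), lo, hi)
              worst = np.argmax(scores)
              s = objective(new)
              if s < scores[worst]:                        # replace worst harmony
                  memory[worst], scores[worst] = new, s
          return memory[np.argmin(scores)], scores.min()

      # e.g., two design variables (a TMD frequency and damping ratio would sit here)
      best, val = harmony_search(lambda x: np.sum((x - 0.3)**2), [(0.0, 1.0)] * 2)
      print(best, val)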

  13. Effects of Linking Methods on Detection of DIF.

    ERIC Educational Resources Information Center

    Kim, Seock-Ho; Cohen, Allan S.

    1992-01-01

    Effects of the following methods for linking metrics on detection of differential item functioning (DIF) were compared: (1) test characteristic curve method (TCC); (2) weighted mean and sigma method; and (3) minimum chi-square method. With large samples, results were essentially the same. With small samples, TCC was most accurate. (SLD)

  14. Relative Contributions of Three Descriptive Methods: Implications for Behavioral Assessment

    ERIC Educational Resources Information Center

    Pence, Sacha T.; Roscoe, Eileen M.; Bourret, Jason C.; Ahearn, William H.

    2009-01-01

    This study compared the outcomes of three descriptive analysis methods--the ABC method, the conditional probability method, and the conditional and background probability method--to each other and to the results obtained from functional analyses. Six individuals who had been diagnosed with developmental delays and exhibited problem behavior…

  15. Methods of Muscle Activation Onset Timing Recorded During Spinal Manipulation.

    PubMed

    Currie, Stuart J; Myers, Casey A; Krishnamurthy, Ashok; Enebo, Brian A; Davidson, Bradley S

    2016-05-01

    The purpose of this study was to determine electromyographic threshold parameters that most reliably characterize the muscular response to spinal manipulation and compare 2 methods that detect muscle activity onset delay: the double-threshold method and cross-correlation method. Surface and indwelling electromyography were recorded during lumbar side-lying manipulations in 17 asymptomatic participants. Muscle activity onset delays in relation to the thrusting force were compared across methods and muscles using a generalized linear model. The threshold combinations that resulted in the lowest Detection Failures were the "8 SD-0 milliseconds" threshold (Detection Failures = 8) and the "8 SD-10 milliseconds" threshold (Detection Failures = 9). The average muscle activity onset delay for the double-threshold method across all participants was 149 ± 152 milliseconds for the multifidus and 252 ± 204 milliseconds for the erector spinae. The average onset delay for the cross-correlation method was 26 ± 101 milliseconds for the multifidus and 67 ± 116 milliseconds for the erector spinae. There were no statistical interactions, and a main effect of method demonstrated that the delays were higher when using the double-threshold method compared with cross-correlation. The threshold parameters that best characterized activity onset delays were an 8-SD amplitude and a 10-millisecond duration threshold. The double-threshold method correlated well with visual supervision of muscle activity. The cross-correlation method provides several advantages in signal processing; however, supervision was required for some results, negating this advantage. These results help standardize methods when recording neuromuscular responses to spinal manipulation and improve comparisons within and across investigations.
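
    The two detection rules can be sketched as follows; the "8 SD amplitude, 10 ms duration" values mirror the recommended thresholds above, while the array names, the baseline definition, and the envelope handling are our assumptions rather than the study's processing pipeline:

      import numpy as np

      def onset_double_threshold(emg, force_onset_idx, fs, baseline_end,
                                 k_sd=8, min_dur_ms=10):
          """Double-threshold onset: exceed baseline mean + k_sd*SD for min_dur_ms."""
          x = np.abs(emg - emg[:baseline_end].mean())      # rectified EMG
          thr = x[:baseline_end].mean() + k_sd * x[:baseline_end].std()
          need = int(fs * min_dur_ms / 1000)
          above = x > thr
          for i in range(force_onset_idx, len(x) - need):
              if above[i:i + need].all():
                  return (i - force_onset_idx) / fs * 1000.0   # delay in ms
          return None                                          # detection failure

      def onset_cross_correlation(emg, force, fs):
          """Cross-correlation delay between force and EMG signals (sketch)."""
          e = (emg - emg.mean()) / emg.std()
          f = (force - force.mean()) / force.std()
          lags = np.arange(-len(f) + 1, len(f))
          return lags[np.argmax(np.correlate(e, f, mode="full"))] / fs * 1000.0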

  16. Development of a Double Glass Mounting Method Using Formaldehyde Alcohol Azocarmine Lactophenol (FAAL) and its Evaluation for Permanent Mounting of Small Nematodes

    PubMed Central

    ZAHABIUN, Farzaneh; SADJJADI, Seyed Mahmoud; ESFANDIARI, Farideh

    2015-01-01

    Background: Permanent slide preparation of nematodes, especially small ones, is time consuming and difficult, and the specimens develop scarious margins. Regarding this problem, a modified double glass mounting method was developed and compared with the classic method. Methods: A total of 209 nematode samples of human and animal origin were fixed and stained with Formaldehyde Alcohol Azocarmine Lactophenol (FAAL), followed by the double glass mounting and classic dehydration methods using Canada balsam as the mounting medium. The slides were evaluated at different dates and times over more than four years. Photographs were taken at different magnifications during the evaluation period. Results: The double glass mounting method was stable during this time and comparable with the classic method. There were no changes in the morphologic structures of nematodes using the double glass mounting method, with well-defined and clear differentiation between the different organs of the nematodes. Conclusion: This method is cost effective and fast for mounting small nematodes compared to the classic method. PMID:26811729

  17. Research on the calibration methods of the luminance parameter of radiation luminance meters

    NASA Astrophysics Data System (ADS)

    Cheng, Weihai; Huang, Biyong; Lin, Fangsheng; Li, Tiecheng; Yin, Dejin; Lai, Lei

    2017-10-01

    This paper introduces the standard diffuse-reflection white plate method and the integrating-sphere standard luminance source method to calibrate the luminance parameter of radiation luminance meters. The paper compares the calibration results of these two methods through principle analysis and experimental verification. After using the two methods to calibrate the same radiation luminance meter, the data obtained verify that the testing results of both methods are reliable. The results show that the display value using the standard white plate method has fewer errors and better reproducibility. However, the standard luminance source method is more convenient and suitable for on-site calibration. Moreover, the standard luminance source method has a wider range and can test the linear performance of the instruments.

  18. Application of XGBoost algorithm in hourly PM2.5 concentration prediction

    NASA Astrophysics Data System (ADS)

    Pan, Bingyue

    2018-02-01

    In view of prediction techniques for hourly PM2.5 concentration in China, this paper applied the XGBoost (Extreme Gradient Boosting) algorithm to predict hourly PM2.5 concentration. The air quality monitoring data of Tianjin city were analyzed using the XGBoost algorithm. The prediction performance of the XGBoost method is evaluated by comparing observed and predicted PM2.5 concentrations using three measures of forecast accuracy. The XGBoost method is also compared with the random forest algorithm, multiple linear regression, decision tree regression and support vector regression models on the computational results. The results demonstrate that the XGBoost algorithm outperforms the other data mining methods.
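
    A minimal XGBoost regression pipeline of the kind described looks as follows, with placeholder data and assumed feature choices; the study's actual Tianjin predictors and tuning are not reproduced here:

      import numpy as np
      import xgboost as xgb
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import mean_squared_error, mean_absolute_error

      # X: hourly predictors (e.g., other pollutants, meteorology, lagged PM2.5);
      # y: hourly PM2.5. Random placeholder arrays stand in for the real data.
      X, y = np.random.rand(2000, 8), np.random.rand(2000) * 100
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

      model = xgb.XGBRegressor(n_estimators=400, max_depth=6, learning_rate=0.05)
      model.fit(X_tr, y_tr)
      pred = model.predict(X_te)
      # two of the usual forecast-accuracy measures: RMSE and MAE
      print(np.sqrt(mean_squared_error(y_te, pred)), mean_absolute_error(y_te, pred))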

  19. Growth of group II-VI semiconductor quantum dots with strong quantum confinement and low size dispersion

    NASA Astrophysics Data System (ADS)

    Pandey, Praveen K.; Sharma, Kriti; Nagpal, Swati; Bhatnagar, P. K.; Mathur, P. C.

    2003-11-01

    CdTe quantum dots embedded in glass matrix are grown using two-step annealing method. The results for the optical transmission characterization are analysed and compared with the results obtained from CdTe quantum dots grown using conventional single-step annealing method. A theoretical model for the absorption spectra is used to quantitatively estimate the size dispersion in the two cases. In the present work, it is established that the quantum dots grown using two-step annealing method have stronger quantum confinement, reduced size dispersion and higher volume ratio as compared to the single-step annealed samples.

  20. Comparison of Three Methods of Calculation, Experimental and Monte Carlo Simulation in Investigation of Organ Doses (Thyroid, Sternum, Cervical Vertebra) in Radioiodine Therapy

    PubMed Central

    Shahbazi-Gahrouei, Daryoush; Ayat, Saba

    2012-01-01

    Radioiodine therapy is an effective method for treating thyroid carcinoma, but it has some effects on normal tissues; hence, dosimetry of vital organs is important to weigh the risks and benefits of this method. The aim of this study is to measure the absorbed doses of important organs by Monte Carlo N Particle (MCNP) simulation and to compare the results of different methods of dosimetry by performing a paired t-test. To calculate the absorbed dose of the thyroid, sternum, and cervical vertebra using the MCNP code, the *F8 tally was used. Organs were simulated by using a neck phantom and the Medical Internal Radiation Dosimetry (MIRD) method. Finally, the results of MCNP, MIRD, and thermoluminescent dosimeter (TLD) measurements were compared using SPSS software. The absorbed dose obtained by Monte Carlo simulations for 100, 150, and 175 mCi of administered 131I was found to be 388.0, 427.9, and 444.8 cGy for the thyroid, 208.7, 230.1, and 239.3 cGy for the sternum, and 272.1, 299.9, and 312.1 cGy for the cervical vertebra. The p-values of the paired t-tests were 0.24 for comparing TLD dosimetry and MIRD calculation, 0.80 for MCNP simulation and MIRD, and 0.19 for TLD and MCNP. The results showed no significant differences among the three methods of Monte Carlo simulation, MIRD calculation and direct experimental dosimetry using TLD. PMID:23717806

  1. [Comparisons of manual and automatic refractometry with subjective results].

    PubMed

    Wübbolt, I S; von Alven, S; Hülssner, O; Erb, C

    2006-11-01

    Refractometry is very important in everyday clinical practice. The aim of this study is to compare the precision of three objective methods of refractometry with subjective dioptometry (phoropter). The objective methods with the smallest deviation from the subjective refractometry results are identified. The objective methods/instruments used were retinoscopy, the Prism Refractometer PR 60 (Rodenstock) and the Auto Refractometer RM-A 7000 (Topcon). The results of monocular dioptometry (sphere, cylinder and axis) for each objective method were compared with the results of the subjective method. The examination was carried out on 178 eyes, which were divided into 3 age-related groups: 6 - 12 years (103 eyes), 13 - 18 years (38 eyes) and older than 18 years (37 eyes). All measurements were made in cycloplegia. The smallest standard deviation of the measurement error was found for the Auto Refractometer RM-A 7000. Both the PR 60 and retinoscopy had a clearly higher standard deviation. Furthermore, the RM-A 7000 showed a significant bias in the measurement error in three of the nine comparisons, and retinoscopy in four. The Auto Refractometer provides measurements with the smallest deviation compared with the subjective method. It has to be taken into account that the measurements for the sphere have an average deviation of +0.2 dpt. In comparison with retinoscopy, the examination of children with the RM-A 7000 is difficult. An advantage of the Auto Refractometer is its fast and easy handling, so that measurements can be performed by medical staff.

  2. Robust estimation of pulse wave transit time using group delay.

    PubMed

    Meloni, Antonella; Zymeski, Heather; Pepe, Alessia; Lombardi, Massimo; Wood, John C

    2014-03-01

    To evaluate the efficiency of a novel transit time (Δt) estimation method from cardiovascular magnetic resonance flow curves. Flow curves were estimated from phase contrast images of 30 patients. Our method (TT-GD: transit time group delay) operates in the frequency domain and models the ascending aortic waveform as an input passing through a discrete-component "filter," producing the observed descending aortic waveform. The GD of the filter represents the average time delay (Δt) across individual frequency bands of the input. This method was compared with two previously described time-domain methods: TT-point, using the half-maximum of the curves, and TT-wave, using cross-correlation. High temporal resolution flow images were studied at multiple downsampling rates to study the impact of differences in temporal resolution. Mean Δts obtained with the three methods were comparable. The TT-GD method was the most robust to reduced temporal resolution. While the TT-GD and TT-wave methods produced comparable results for velocity and flow waveforms, the TT-point method resulted in significantly shorter Δts when calculated from velocity waveforms (difference: 1.8 ± 2.7 msec; coefficient of variability: 8.7%). The TT-GD method was the most reproducible, with an intraobserver variability of 3.4% and an interobserver variability of 3.7%. Compared to the traditional TT-point and TT-wave methods, the TT-GD approach was more robust to the choice of temporal resolution, waveform type, and observer.
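
    A group-delay transit-time estimate can be sketched directly from the two flow curves: take the transfer function between the ascending and descending waveforms and read the delay off the slope of its unwrapped phase over the first few harmonics. The harmonic count and amplitude weighting here are our assumptions, not the published TT-GD implementation:

      import numpy as np

      def transit_time_group_delay(asc, desc, dt, n_harmonics=5):
          """Average delay (s) from the phase slope of the transfer function."""
          A, D = np.fft.rfft(asc), np.fft.rfft(desc)
          omega = 2 * np.pi * np.fft.rfftfreq(len(asc), d=dt)
          h = slice(1, n_harmonics + 1)                    # skip DC, keep harmonics
          phase = np.unwrap(np.angle(D[h] / A[h]))
          wt = np.abs(A[h])                                # weight by input amplitude
          slope = np.polyfit(omega[h], phase, 1, w=wt)[0]  # d(phase)/d(omega)
          return -slope                                    # group delay in seconds

      # usage: two Gaussian pulses shifted by 30 ms, sampled at 250 Hz
      t = np.arange(0, 1, 1 / 250.0)
      asc = np.exp(-((t - 0.30) / 0.05)**2)
      desc = np.exp(-((t - 0.33) / 0.05)**2)
      print(transit_time_group_delay(asc, desc, 1 / 250.0))   # ~0.03 s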

  3. Parametric and experimental analysis using a power flow approach

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.

    1990-01-01

    A structural power flow approach for the analysis of structure-borne transmission of vibrations is used to analyze the influence of structural parameters on transmitted power. The parametric analysis is also performed using the Statistical Energy Analysis approach, and the results are compared with those obtained using the power flow approach. The advantages of structural power flow analysis are demonstrated by comparing the types of results that are obtained by the two analytical methods. Also, to demonstrate that the power flow results represent a direct physical parameter that can be measured on a typical structure, an experimental study of structural power flow is presented. This experimental study presents results for an L-shaped beam for which a solution was already available. Various methods to measure vibrational power flow are compared to study their advantages and disadvantages.

  4. Evaluation of Lysis Methods for the Extraction of Bacterial DNA for Analysis of the Vaginal Microbiota.

    PubMed

    Gill, Christina; van de Wijgert, Janneke H H M; Blow, Frances; Darby, Alistair C

    2016-01-01

    Recent studies on the vaginal microbiota have employed molecular techniques such as 16S rRNA gene sequencing to describe the bacterial community as a whole. These techniques require the lysis of bacterial cells to release DNA before purification and PCR amplification of the 16S rRNA gene. Currently, methods for the lysis of bacterial cells are not standardised and there is potential for introducing bias into the results if some bacterial species are lysed less efficiently than others. This study aimed to compare the results of vaginal microbiota profiling using four different pretreatment methods for the lysis of bacterial samples (30 min of lysis with lysozyme, 16 hours of lysis with lysozyme, 60 min of lysis with a mixture of lysozyme, mutanolysin and lysostaphin, and 30 min of lysis with lysozyme followed by bead beating) prior to chemical and enzyme-based DNA extraction with a commercial kit. After extraction, DNA yield did not significantly differ between methods, with the exception of lysis with lysozyme combined with bead beating, which produced significantly lower yields when compared to lysis with the enzyme cocktail or 30 min lysis with lysozyme only. However, this did not result in a statistically significant difference in the observed alpha diversity of samples. The beta diversity (Bray-Curtis dissimilarity) between different lysis methods was statistically significantly different, but this difference was small compared to differences between samples, and did not affect the grouping of samples with similar vaginal bacterial community structure by hierarchical clustering. An understanding of how laboratory methods affect the results of microbiota studies is vital in order to accurately interpret the results and make valid comparisons between studies. Our results indicate that the choice of lysis method does not prevent the detection of effects relating to the type of vaginal bacterial community, one of the main outcome measures of epidemiological studies. However, we recommend that the same method be used on all samples within a particular study.

  5. Instrumental variable methods in comparative safety and effectiveness research.

    PubMed

    Brookhart, M Alan; Rassen, Jeremy A; Schneeweiss, Sebastian

    2010-06-01

    Instrumental variable (IV) methods have been proposed as a potential approach to the common problem of uncontrolled confounding in comparative studies of medical interventions, but IV methods are unfamiliar to many researchers. The goal of this article is to provide a non-technical, practical introduction to IV methods for comparative safety and effectiveness research. We outline the principles and basic assumptions necessary for valid IV estimation, discuss how to interpret the results of an IV study, provide a review of instruments that have been used in comparative effectiveness research, and suggest some minimal reporting standards for an IV analysis. Finally, we offer our perspective on the role of IV estimation vis-à-vis more traditional approaches based on statistical modeling of the exposure or outcome. We anticipate that IV methods will often be underpowered for drug safety studies of very rare outcomes, but may be potentially useful in studies of intended effects where uncontrolled confounding may be substantial.
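
    The basic IV estimator is easy to state in code. This two-stage least squares sketch uses a binary instrument (think physician prescribing preference) and simulated confounding to show the estimate recovering the true effect; it omits covariates and proper standard errors, which a real analysis would need:

      import numpy as np

      def two_stage_least_squares(y, x, z):
          """Minimal 2SLS: instrument z for exposure x, outcome y (sketch)."""
          Z = np.column_stack([np.ones(len(z)), z])
          x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # stage 1: x on z
          X = np.column_stack([np.ones(len(x_hat)), x_hat])
          return np.linalg.lstsq(X, y, rcond=None)[0][1]     # stage 2: y on x-hat

      rng = np.random.default_rng(0)
      u = rng.normal(size=5000)              # unmeasured confounder
      z = rng.binomial(1, 0.5, size=5000)    # binary instrument
      x = 0.8 * z + u + rng.normal(size=5000)
      y = 1.5 * x + 2.0 * u + rng.normal(size=5000)
      print(two_stage_least_squares(y, x, z))   # ~1.5, despite confounding by u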

  6. Percent body fat estimations in college women using field and laboratory methods: a three-compartment model approach

    PubMed Central

    Moon, Jordan R; Hull, Holly R; Tobkin, Sarah E; Teramoto, Masaru; Karabulut, Murat; Roberts, Michael D; Ryan, Eric D; Kim, So Jung; Dalbo, Vincent J; Walter, Ashley A; Smith, Abbie T; Cramer, Joel T; Stout, Jeffrey R

    2007-01-01

    Background: Methods used to estimate percent body fat can be classified as a laboratory or field technique. However, the validity of these methods compared to multiple-compartment models has not been fully established. This investigation sought to determine the validity of field and laboratory methods for estimating percent fat (%fat) in healthy college-age women compared to the Siri three-compartment model (3C). Methods: Thirty Caucasian women (21.1 ± 1.5 yrs; 164.8 ± 4.7 cm; 61.2 ± 6.8 kg) had their %fat estimated by BIA using the BodyGram™ computer program (BIA-AK) and population-specific equation (BIA-Lohman), NIR (Futrex® 6100/XL), a quadratic (SF3JPW) and linear (SF3WB) skinfold equation, air-displacement plethysmography (BP), and hydrostatic weighing (HW). Results: All methods produced acceptable total error (TE) values compared to the 3C model. Both laboratory methods produced similar TE values (HW, TE = 2.4%fat; BP, TE = 2.3%fat) when compared to the 3C model, though a significant constant error (CE) was detected for HW (1.5%fat, p ≤ 0.006). The field methods produced acceptable TE values ranging from 1.8 – 3.8 %fat. BIA-AK (TE = 1.8%fat) yielded the lowest TE among the field methods, while BIA-Lohman (TE = 2.1%fat) and NIR (TE = 2.7%fat) produced lower TE values than both skinfold equations (TE > 2.7%fat) compared to the 3C model. Additionally, the SF3JPW %fat estimation equation resulted in a significant CE (2.6%fat, p ≤ 0.007). Conclusion: Data suggest that the BP and HW are valid laboratory methods when compared to the 3C model to estimate %fat in college-age Caucasian women. When the use of a laboratory method is not feasible, NIR, BIA-AK, BIA-Lohman, SF3JPW, and SF3WB are acceptable field methods to estimate %fat in this population. PMID:17988393
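
    For reference, the Siri three-compartment criterion combines body density (Db) and total body water (TBW) in the commonly cited form below; treat the coefficients as the textbook version to be checked against the paper, not as its code:

      def siri_3c_percent_fat(body_density, total_body_water, body_mass):
          """Siri 3C model: %fat = (2.118/Db - 0.78*(TBW/BM) - 1.354) * 100.

          Db in g/cc (or kg/L); TBW in liters and BM in kg so TBW/BM is a
          fraction. Coefficients are the commonly cited Siri values.
          """
          w = total_body_water / body_mass
          return (2.118 / body_density - 0.78 * w - 1.354) * 100.0

      print(siri_3c_percent_fat(1.040, 32.0, 61.2))   # illustrative subject, ~27%fat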

  7. Comparative rice seed toxicity tests using filter paper, growth pouch-tm, and seed tray methods

    USGS Publications Warehouse

    Wang, W.

    1993-01-01

    Paper substrate, especially circular filter paper placed inside a Petri dish, has long been used for the plant seed toxicity test (PSTT). Although this method is simple and inexpensive, recent evidence indicates that it gives results that are significantly different from those obtained using a method that does not involve paper, especially when testing metal cations. The study compared PSTT using three methods: filter paper, Growth Pouch-TM, and seed tray. The Growth Pouch-TM is a commercially available device. The seed tray is a newly designed plastic receptacle placed inside a Petri dish. The results of the Growth Pouch-TM method showed no toxic effects on rice for Ag up to 40 mg L-1 and Cd up to 20 mg L-1. Using the seed tray method, IC50 (50% inhibitory effect concentration) values were 0.55 and 1.4 mg L-1 for Ag and Cd, respectively. Although results of filter paper and seed tray methods were nearly identical for NaF, Cr(VI), and phenol, the toxicities of cations Ag and Cd were reduced by using the filter paper method; IC50 values were 22 and 18 mg L-1, respectively. The results clearly indicate that paper substrate is not advisable for PSTT.
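
    IC50 values like those reported here are typically obtained by fitting a dose-response curve to relative growth data. A generic log-logistic fit is sketched below with illustrative numbers; it is not the study's exact procedure:

      import numpy as np
      from scipy.optimize import curve_fit

      def fit_ic50(conc, response):
          """Fit a log-logistic dose-response curve and return the IC50 (sketch).

          `response` is growth relative to control (1.0 = no inhibition);
          the model and starting values are generic assumptions.
          """
          model = lambda c, ic50, hill: 1.0 / (1.0 + (c / ic50)**hill)
          (ic50, hill), _ = curve_fit(model, conc, response,
                                      p0=(np.median(conc), 1.0))
          return ic50

      conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0])       # mg/L, illustrative
      resp = np.array([0.95, 0.85, 0.55, 0.25, 0.08])   # relative root growth
      print(fit_ic50(conc, resp))   # near the 50%-inhibition concentration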

  8. Comparison of machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer from 18F-FDG PET/CT images.

    PubMed

    Wang, Hongkai; Zhou, Zongwei; Li, Yingci; Chen, Zhonghua; Lu, Peiou; Wang, Wenzhi; Liu, Wanyu; Yu, Lijuan

    2017-12-01

    This study aimed to compare one state-of-the-art deep learning method and four classical machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer (NSCLC) from 18F-FDG PET/CT images. Another objective was to compare the discriminative power of the recently popular PET/CT texture features with the widely used diagnostic features such as tumor size, CT value, SUV, image contrast, and intensity standard deviation. The four classical machine learning methods included random forests, support vector machines, adaptive boosting, and artificial neural networks. The deep learning method was the convolutional neural network (CNN). The five methods were evaluated using 1397 lymph nodes collected from PET/CT images of 168 patients, with corresponding pathology analysis results as the gold standard. The comparison was conducted using 10 times 10-fold cross-validation based on the criteria of sensitivity, specificity, accuracy (ACC), and area under the ROC curve (AUC). For each classical method, different input features were compared to select the optimal feature set. Based on the optimal feature set, the classical methods were compared with CNN, as well as with human doctors from our institute. For the classical methods, the diagnostic features resulted in 81~85% ACC and 0.87~0.92 AUC, which were significantly higher than the results of the texture features. CNN's sensitivity, specificity, ACC, and AUC were 84%, 88%, 86%, and 0.91, respectively. There was no significant difference between the results of CNN and the best classical method. The sensitivity, specificity, and ACC of human doctors were 73%, 90%, and 82%, respectively. All five machine learning methods had higher sensitivities but lower specificities than human doctors. The present study shows that the performance of CNN is not significantly different from the best classical methods and human doctors for classifying mediastinal lymph node metastasis of NSCLC from PET/CT images. Because CNN does not need tumor segmentation or feature calculation, it is more convenient and more objective than the classical methods. However, CNN does not make use of the important diagnostic features, which have been proved more discriminative than the texture features for classifying small-sized lymph nodes. Therefore, incorporating the diagnostic features into CNN is a promising direction for future research.
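
    The 10 times 10-fold cross-validated AUC comparison can be reproduced in outline as follows, with random placeholder arrays standing in for the diagnostic feature set (size, CT value, SUV, contrast, intensity SD) and one classical classifier shown; real data and each candidate method would be substituted in turn:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

      # Placeholder features and node labels; the study used 1397 lymph nodes
      # from 168 patients with pathology as the gold standard.
      X, y = np.random.rand(1397, 5), np.random.randint(0, 2, 1397)

      cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
      auc = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                            X, y, cv=cv, scoring="roc_auc")
      print(auc.mean(), auc.std())   # ~0.5 on random placeholder data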

  9. Aluminum transfer method for plating plastics

    NASA Technical Reports Server (NTRS)

    Goodrich, W. D.; Stalmach, C. J., Jr.

    1977-01-01

    Electroless plating technique produces plate of uniform thickness. Hardness and abrasion resistance can be increased further by heat treatment. Method results in seamless coating over many materials, has low thermal conductivity, and is relatively inexpensive compared to conventional methods.

  10. Calculation of transonic flows using an extended integral equation method

    NASA Technical Reports Server (NTRS)

    Nixon, D.

    1976-01-01

    An extended integral equation method for transonic flows is developed. In the extended integral equation method velocities in the flow field are calculated in addition to values on the aerofoil surface, in contrast with the less accurate 'standard' integral equation method in which only surface velocities are calculated. The results obtained for aerofoils in subcritical flow and in supercritical flow when shock waves are present compare satisfactorily with the results of recent finite difference methods.

  11. Acoustic-Liner Admittance in a Duct

    NASA Technical Reports Server (NTRS)

    Watson, W. R.

    1986-01-01

    Method calculates admittance from easily obtainable values. New method for calculating acoustic-liner admittance in rectangular duct with grazing flow based on finite-element discretization of acoustic field and reposing of the unknown admittance as a linear eigenvalue problem. Problem solved by Gaussian elimination. Unlike existing methods, present method extendable to mean flows with two-dimensional boundary layers as well. In presence of shear, results of method compared well with results of Runge-Kutta integration technique.

  12. The application of quadratic optimal cooperative control synthesis to a CH-47 helicopter

    NASA Technical Reports Server (NTRS)

    Townsend, Barbara K.

    1987-01-01

    A control-system design method, quadratic optimal cooperative control synthesis (CCS), is applied to the design of a stability and control augmentation system (SCAS). The CCS design method is different from other design methods in that it does not require detailed a priori design criteria, but instead relies on an explicit optimal pilot-model to create desired performance. The design method, which was developed previously for fixed-wing aircraft, is simplified and modified for application to a Boeing CH-47 helicopter. Two SCAS designs are developed using the CCS design methodology. The resulting CCS designs are then compared with designs obtained using classical/frequency-domain methods and linear quadratic regulator (LQR) theory in a piloted fixed-base simulation. Results indicate that the CCS method, with slight modifications, can be used to produce controller designs which compare favorably with the frequency-domain approach.

  13. Determination and discrimination of biodiesel fuels by gas chromatographic and chemometric methods

    NASA Astrophysics Data System (ADS)

    Milina, R.; Mustafa, Z.; Bojilov, D.; Dagnon, S.; Moskovkina, M.

    2016-03-01

    A pattern recognition method (PRM) was applied to gas chromatographic (GC) data on the fatty acid methyl ester (FAME) composition of commercial and laboratory-synthesized biodiesel fuels from vegetable oils including sunflower, rapeseed, corn and palm oils. Two GC quantitative methods for calculating individual FAMEs were compared: area % and internal standard. Both methods were applied to the analysis of two certified reference materials. The statistical processing of the obtained results demonstrates the accuracy and precision of the two methods and allows them to be compared. For further chemometric investigations of biodiesel fuels by their FAME profiles, either of these methods can be used. PRM results on the FAME profiles of samples from different vegetable oils show successful recognition of biodiesels according to the feedstock. The information obtained can be used for selecting feedstock to produce biodiesels with certain properties, for assessing their interchangeability, and for addressing fuel spillage and remedial actions in the environment.

  14. Guided SAR image despeckling with probabilistic non local weights

    NASA Astrophysics Data System (ADS)

    Gokul, Jithin; Nair, Madhu S.; Rajan, Jeny

    2017-12-01

    SAR images are generally corrupted by granular disturbances called speckle, which makes visual analysis and detail extraction a difficult task. Non-local despeckling techniques with probabilistic similarity have been a recent trend in SAR despeckling. To achieve effective speckle suppression without compromising detail preservation, we propose an improvement to the existing Generalized Guided Filter with Bayesian Non-Local Means (GGF-BNLM) method. The proposed method (Guided SAR Image Despeckling with Probabilistic Non Local Weights) replaces parametric constants based on heuristics in the GGF-BNLM method with dynamically derived values based on the image statistics for weight computation. The proposed changes make the GGF-BNLM method adaptive and, as a result, significant improvement is achieved in terms of performance. Experimental analysis on SAR images shows excellent speckle reduction without compromising feature preservation when compared to the GGF-BNLM method. Results are also compared with other state-of-the-art and classic SAR despeckling techniques to demonstrate the effectiveness of the proposed method.

  15. Elastic Differential Cross Sections

    NASA Technical Reports Server (NTRS)

    Werneth, Charles M.; Maung, Khin M.; Ford, William P.; Norbury, John W.; Vera, Michael D.

    2014-01-01

    The eikonal, partial wave (PW) Lippmann-Schwinger, and three-dimensional Lippmann-Schwinger (LS3D) methods are compared for nuclear reactions that are relevant for space radiation applications. Numerical convergence of the eikonal method is readily achieved when exact formulas of the optical potential are used for light nuclei (A ≤ 16) and the momentum-space optical potential is used for heavier nuclei. The PW solution method is known to be numerically unstable for systems that require a large number of partial waves, and, as a result, the LS3D method is employed. The effect of relativistic kinematics is studied with the PW and LS3D methods and is compared to eikonal results. It is recommended that the LS3D method be used for high energy nucleon-nucleus reactions and nucleus-nucleus reactions at all energies because of its rapid numerical convergence and stability.

  16. Nuclear Cross Sections for Space Radiation Applications

    NASA Technical Reports Server (NTRS)

    Werneth, C. M.; Maung, K. M.; Ford, W. P.; Norbury, J. W.; Vera, M. D.

    2015-01-01

    The eikonal, partial wave (PW) Lippmann-Schwinger, and three-dimensional Lippmann-Schwinger (LS3D) methods are compared for nuclear reactions that are relevant for space radiation applications. Numerical convergence of the eikonal method is readily achieved when exact formulas of the optical potential are used for light nuclei (A ≤ 16) and the momentum-space optical potential is used for heavier nuclei. The PW solution method is known to be numerically unstable for systems that require a large number of partial waves, and, as a result, the LS3D method is employed. The effect of relativistic kinematics is studied with the PW and LS3D methods and is compared to eikonal results. It is recommended that the LS3D method be used for high energy nucleon-nucleus reactions and nucleus-nucleus reactions at all energies because of its rapid numerical convergence and stability for both non-relativistic and relativistic kinematics.

  17. Comparative homology agreement search: An effective combination of homology-search methods

    PubMed Central

    Alam, Intikhab; Dress, Andreas; Rehmsmeier, Marc; Fuellen, Georg

    2004-01-01

    Many methods have been developed to search for homologous members of a protein family in databases, and the reliability of results and conclusions may be compromised if only one method is used, neglecting the others. Here we introduce a general scheme for combining such methods. Based on this scheme, we implemented a tool called comparative homology agreement search (chase) that integrates different search strategies to obtain a combined “E value.” Our results show that a consensus method integrating distinct strategies easily outperforms any of its component algorithms. More specifically, an evaluation based on the Structural Classification of Proteins database reveals that, on average, a coverage of 47% can be obtained in searches for distantly related homologues (i.e., members of the same superfamily but not the same family, which is a very difficult task), accepting only 10 false positives, whereas the individual methods obtain a coverage of 28–38%. PMID:15367730

  18. A Comparative Study of Microscopic Images Captured by a Box Type Digital Camera Versus a Standard Microscopic Photography Camera Unit

    PubMed Central

    Desai, Nandini J.; Gupta, B. D.; Patel, Pratik Narendrabhai

    2014-01-01

    Introduction: Obtaining images of slides viewed by a microscope can be invaluable for both diagnosis and teaching. They can be transferred among technologically-advanced hospitals for further consultation and evaluation. But a standard microscopic photography camera unit (MPCU) (MIPS, Microscopic Image Projection System) is costly and not available in resource-poor settings. The aim of our endeavour was to find a comparable and cheaper alternative method for photomicrography. Materials and Methods: We used a NIKON Coolpix S6150 camera (box type digital camera) with an Olympus CH20i microscope and a fluorescent microscope for the purpose of this study. Results: We got comparable results for capturing images of light microscopy, but the results were not as satisfactory for fluorescent microscopy. Conclusion: A box type digital camera is a comparable, less expensive and convenient alternative to a microscopic photography camera unit. PMID:25478350

  19. Metrological characterization of X-ray diffraction methods at different acquisition geometries for determination of crystallite size in nano-scale materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uvarov, Vladimir, E-mail: vladimiru@savion.huji.ac.il; Popov, Inna

    2013-11-15

    Crystallite size values were determined by X-ray diffraction methods for 183 powder samples. The tested size range was from a few to several hundred nanometers. Crystallite size was calculated with direct use of the Scherrer equation, the Williamson–Hall method and the Rietveld procedure via the application of a series of commercial and free software. The results were statistically treated to estimate the significance of the difference in size resulting from these methods. We also estimated the effect of acquisition conditions (Bragg–Brentano, parallel-beam geometry, step size, counting time) and data processing on the calculated crystallite size values. On the basis of the obtained results it is possible to conclude that direct use of the Scherrer equation, the Williamson–Hall method and the Rietveld refinement employed by a series of software (EVA, PCW and TOPAS, respectively) yield very close results for crystallite sizes less than 60 nm for parallel-beam geometry and less than 100 nm for Bragg–Brentano geometry. However, we found that despite the fact that the differences between the crystallite sizes calculated by the various methods are small in absolute value, they are statistically significant in some cases. The values of crystallite size determined from XRD were compared with those obtained by imaging in transmission (TEM) and scanning electron microscopes (SEM). It was found that there was a good correlation in size only for crystallites smaller than 50 - 60 nm. Highlights: • The crystallite sizes for 183 nanopowders were calculated using different XRD methods • Obtained results were subject to statistical treatment • Results obtained with Bragg-Brentano and parallel beam geometries were compared • Influence of conditions of XRD pattern acquisition on results was estimated • Crystallite sizes calculated by XRD were compared with those obtained by TEM and SEM
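
    For orientation, direct use of the Scherrer equation is a one-line calculation, D = K·λ/(β·cosθ). The sketch below uses the usual K = 0.9 and the Cu K-alpha wavelength as defaults; it is a textbook illustration, not the software compared in the study:

      import numpy as np

      def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
          """Crystallite size (nm) from the Scherrer equation.

          fwhm_deg is the (instrument-corrected) peak FWHM in degrees,
          converted to radians as beta; theta is half the diffraction angle.
          """
          theta = np.radians(two_theta_deg / 2.0)
          beta = np.radians(fwhm_deg)
          return K * wavelength_nm / (beta * np.cos(theta))

      print(scherrer_size(38.2, 0.25))   # ~34 nm for a 0.25-degree-wide peak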

  20. Comparison of Manual Refraction Versus Autorefraction in 60 Diabetic Retinopathy Patients

    PubMed Central

    Shirzadi, Keyvan; Shahraki, Kourosh; Yahaghi, Emad; Makateb, Ali; Khosravifard, Keivan

    2016-01-01

    Aim: The purpose of the study was to compare manual refraction with autorefraction in diabetic retinopathy patients. Material and Methods: The study was conducted at the Be'sat Army Hospital from 2013-2015. In the present study, differences between two common refractometry methods (manual refractometry and autorefractometry) in the diagnosis and follow-up of retinopathy in patients with diabetes are investigated. Results: Our results showed that there is a significant difference in the visual acuity scores of patients between manual and autorefractometry. Despite this fact, the spherical equivalent scores of the two methods of refractometry did not show a statistically significant difference in the patients. Conclusion: Although manual refraction is comparable with autorefraction in evaluating spherical equivalent scores in diabetic patients with retinopathy, in the case of visual acuity the results from these two methods are not comparable. PMID:27703289

  1. How to Quantify Penile Corpus Cavernosum Structures with Histomorphometry: Comparison of Two Methods

    PubMed Central

    Felix-Patrício, Bruno; De Souza, Diogo Benchimol; Gregório, Bianca Martins; Costa, Waldemar Silva; Sampaio, Francisco José

    2015-01-01

    The use of morphometrical tools in biomedical research permits the accurate comparison of specimens subjected to different conditions, and the surface density of structures is commonly used for this purpose. The traditional point-counting method is reliable but time-consuming, with computer-aided methods being proposed as an alternative. The aim of this study was to compare the surface density data of penile corpus cavernosum trabecular smooth muscle in different groups of rats, measured by two observers using the point-counting or color-based segmentation method. Ten normotensive and 10 hypertensive male rats were used in this study. Rat penises were processed to obtain smooth muscle immunostained histological slices and photomicrographs captured for analysis. The smooth muscle surface density was measured in both groups by two different observers by the point-counting method and by the color-based segmentation method. Hypertensive rats showed an increase in smooth muscle surface density by the two methods, and no difference was found between the results of the two observers. However, surface density values were higher by the point-counting method. The use of either method did not influence the final interpretation of the results, and both proved to have adequate reproducibility. However, as differences were found between the two methods, results obtained by either method should not be compared. PMID:26413547

  3. Evaluation of dysphagia in early stroke patients by bedside, endoscopic, and electrophysiological methods.

    PubMed

    Umay, Ebru Karaca; Unlu, Ece; Saylam, Guleser Kılıc; Cakci, Aytul; Korkmaz, Hakan

    2013-09-01

    In this study we aimed to evaluate dysphagia in early stroke patients using a bedside screening test, flexible fiberoptic endoscopic evaluation of swallowing (FFEES), and electrophysiological evaluation (EE), and to compare the effectiveness of these methods. Twenty-four patients who were hospitalized in our clinic within the first 3 months after stroke were included in this study. Patients were evaluated using a bedside screening test [including bedside dysphagia score (BDS), neurological examination dysphagia score (NEDS), and total dysphagia score (TDS)] and FFEES and EE methods. Patients were divided into normal-swallowing and dysphagia groups according to the results of the evaluation methods. Patients with dysphagia as determined by any of these methods were compared to the patients with normal swallowing based on the results of the other two methods. Based on the results of our study, a high BDS was positively correlated with dysphagia identified by the FFEES and EE methods. Moreover, the FFEES and EE methods were positively correlated. There was no significant correlation between NEDS and TDS levels and either the EE or FFEES method. Bedside screening should be used mainly as an initial screening test; FFEES and EE methods should then be combined in patients found to be at risk. This diagnostic algorithm may provide a practical and fast solution for selected stroke patients.

  4. Relation between financial market structure and the real economy: comparison between clustering methods.

    PubMed

    Musmeci, Nicoló; Aste, Tomaso; Di Matteo, T

    2015-01-01

    We quantify the amount of information filtered by different hierarchical clustering methods on correlations between stock returns, comparing the clustering structure with the underlying industrial activity classification. We apply, for the first time to financial data, a novel hierarchical clustering approach, the Directed Bubble Hierarchical Tree, and we compare it with other methods including the Linkage and k-medoids. By taking the industrial sector classification of stocks as a benchmark partition, we evaluate how the different methods retrieve this classification. The results show that the Directed Bubble Hierarchical Tree can outperform other methods, being able to retrieve more information with fewer clusters. Moreover, we show that the economic information is hidden at different levels of the hierarchical structures depending on the clustering method. The dynamical analysis on a rolling window also reveals that the different methods show different degrees of sensitivity to events affecting financial markets, like crises. These results can be of interest for all the applications of clustering methods to portfolio optimization and risk hedging.
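
    The benchmarking step, scoring a clustering of stock returns against the industrial-sector partition, can be sketched with standard tools. The Directed Bubble Hierarchical Tree is not available in common libraries, so average-linkage hierarchical clustering stands in for it here, and all data are synthetic:

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import squareform
        from sklearn.metrics import adjusted_rand_score

        rng = np.random.default_rng(1)
        returns = rng.standard_normal((250, 20))   # hypothetical daily returns, 20 stocks
        sectors = np.repeat([0, 1, 2, 3], 5)       # hypothetical industry labels

        # Correlation-based distance, a common choice for stock return clustering.
        corr = np.corrcoef(returns.T)
        dist = np.sqrt(np.maximum(2.0 * (1.0 - corr), 0.0))
        np.fill_diagonal(dist, 0.0)

        Z = linkage(squareform(dist, checks=False), method="average")
        labels = fcluster(Z, t=4, criterion="maxclust")
        print("agreement with sector partition:", adjusted_rand_score(sectors, labels))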

  5. Hybrid surrogate-model-based multi-fidelity efficient global optimization applied to helicopter blade design

    NASA Astrophysics Data System (ADS)

    Ariyarit, Atthaphon; Sugiura, Masahiko; Tanabe, Yasutada; Kanazaki, Masahiro

    2018-06-01

    A multi-fidelity optimization technique by an efficient global optimization process using a hybrid surrogate model is investigated for solving real-world design problems. The model constructs the local deviation using the kriging method and the global model using a radial basis function. The expected improvement is computed to decide additional samples that can improve the model. The approach was first investigated by solving mathematical test problems. The results were compared with optimization results from an ordinary kriging method and a co-kriging method, and the proposed method produced the best solution. The proposed method was also applied to aerodynamic design optimization of helicopter blades to obtain the maximum blade efficiency. The optimal shape obtained by the proposed method achieved performance almost equivalent to that obtained by single-fidelity optimization based on high-fidelity evaluations. Comparing all three methods, the proposed method required the lowest total number of high-fidelity evaluation runs to obtain a converged solution.
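
    The expected-improvement criterion used to pick additional samples has a standard closed form for Gaussian surrogates. A minimal sketch for minimization, assuming only that the surrogate returns a predictive mean and standard deviation at each candidate point (illustrative, not the paper's hybrid model):

        import numpy as np
        from scipy.stats import norm

        def expected_improvement(mu, sigma, f_best):
            """Standard EI for minimization: E[max(f_best - Y, 0)], Y ~ N(mu, sigma^2)."""
            sigma = np.maximum(sigma, 1e-12)     # guard against zero predictive variance
            z = (f_best - mu) / sigma
            return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

        # Current best objective 1.0; surrogate mean/std at two candidate points.
        print(expected_improvement(np.array([0.8, 1.2]), np.array([0.3, 0.3]), f_best=1.0))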

  6. The solution of linear systems of equations with a structural analysis code on the NAS CRAY-2

    NASA Technical Reports Server (NTRS)

    Poole, Eugene L.; Overman, Andrea L.

    1988-01-01

    Two methods for solving linear systems of equations on the NAS Cray-2 are described. One is a direct method; the other is an iterative method. Both methods exploit the architecture of the Cray-2, particularly the vectorization, and are aimed at structural analysis applications. To demonstrate and evaluate the methods, they were installed in a finite element structural analysis code denoted the Computational Structural Mechanics (CSM) Testbed. A description of the techniques used to integrate the two solvers into the Testbed is given. Storage schemes, memory requirements, operation counts, and reformatting procedures are discussed. Finally, results from the new methods are compared with results from the initial Testbed sparse Choleski equation solver for three structural analysis problems. The new direct solvers described achieve the highest computational rates of the methods compared. The new iterative methods are not able to achieve as high computational rates as the vectorized direct solvers, but are best for well-conditioned problems, which require fewer iterations to converge to the solution.
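
    The direct-versus-iterative trade-off the abstract describes can be illustrated with modern library routines. The sketch below contrasts a Cholesky factorization solve with conjugate gradient iteration on a hypothetical symmetric positive definite system; this is a sketch of the general technique, not of the Testbed solvers:

        import numpy as np
        from scipy.linalg import cho_factor, cho_solve
        from scipy.sparse.linalg import cg

        # Hypothetical symmetric positive definite stiffness matrix K and load
        # vector f, standing in for a static finite element system K u = f.
        rng = np.random.default_rng(2)
        A = rng.standard_normal((200, 200))
        K = A @ A.T + 200.0 * np.eye(200)        # well conditioned by construction
        f = rng.standard_normal(200)

        u_direct = cho_solve(cho_factor(K), f)   # direct: factor once, then triangular solves
        u_iter, info = cg(K, f)                  # iterative: conjugate gradient
        print("solutions agree:", np.allclose(u_direct, u_iter, atol=1e-4),
              "| cg converged:", info == 0)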

  7. Comparative study of mobility extraction methods in p-type polycrystalline silicon thin film transistors

    NASA Astrophysics Data System (ADS)

    Liu, Kai; Liu, Yuan; Liu, Yu-Rong; En, Yun-Fei; Li, Bin

    2017-07-01

    Channel mobility in p-type polycrystalline silicon thin film transistors (poly-Si TFTs) is extracted using the Hoffman method, the linear-region transconductance method, and the multi-frequency C-V method. Due to the non-negligible errors introduced when the dependence of the effective mobility on the gate-source voltage is neglected, the mobility extracted using the linear-region transconductance method and the Hoffman method is overestimated, especially in the lower gate-source voltage region. By considering the distribution of localized states in the band gap, the frequency-independent capacitances due to localized charges in the sub-gap states and due to channel free electron charges in the conduction band were extracted using the multi-frequency C-V method. The channel mobility was therefore extracted accurately based on charge transport theory. In addition, the effect of electric-field-dependent mobility degradation was also considered in the higher gate-source voltage region. Finally, the mobility results extracted in the poly-Si TFTs using these three methods are compared and analyzed.
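
    For reference, linear-region transconductance extraction conventionally starts from the textbook linear-region drain current, so the effective mobility follows from the measured transconductance (a standard relation assumed here for orientation, not a formula quoted from the paper):

        g_m \equiv \left. \frac{\partial I_D}{\partial V_{GS}} \right|_{V_{DS}}
            = \mu_{\mathrm{eff}}\, C_{ox}\, \frac{W}{L}\, V_{DS}
        \qquad \Longrightarrow \qquad
        \mu_{\mathrm{eff}} = \frac{L\, g_m}{W\, C_{ox}\, V_{DS}}

    The overestimation the authors describe arises because this rearrangement treats \mu_{\mathrm{eff}} as independent of V_{GS} when differentiating.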

  8. Simplified methods for calculating photodissociation rates

    NASA Technical Reports Server (NTRS)

    Shimazaki, T.; Ogawa, T.; Farrell, B. C.

    1977-01-01

    Simplified methods for calculating the transmission of solar UV radiation and the dissociation coefficients of various molecules are compared. A significant difference sometimes appears in calculations for individual bands, but the total transmission and the total dissociation coefficients integrated over the entire SR (solar radiation) band region agree well between the methods. The ambiguities in the solar flux data affect the calculated dissociation coefficients more strongly than does the method. A simpler method is developed for the purpose of reducing the computation time and the computer memory size necessary for storing coefficients of the equations. The new method can reduce the computation time by a factor of more than 3 and the memory size by a factor of more than 50 compared with the Hudson-Mahle method, and yet the result agrees within 10 percent (in most cases much less) with the original Hudson-Mahle results, except for H2O and CO2. A revised method is necessary for these two molecules, whose absorption cross sections change very rapidly over the SR band spectral range.
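
    The quantity being approximated is the photodissociation coefficient, conventionally written as a wavelength integral over the attenuated solar flux (a standard textbook form, assumed here for orientation rather than taken from the paper):

        J(z) = \int_{\lambda} \sigma(\lambda)\, \phi(\lambda)\, F_{\infty}(\lambda)\, T(\lambda, z)\, \mathrm{d}\lambda

    where \sigma is the absorption cross section, \phi the quantum yield, F_{\infty} the unattenuated solar flux, and T(\lambda, z) the atmospheric transmission at altitude z; the simplified methods discussed above replace parts of this integral with band-averaged quantities.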

  9. Relation between Financial Market Structure and the Real Economy: Comparison between Clustering Methods

    PubMed Central

    Musmeci, Nicoló; Aste, Tomaso; Di Matteo, T.

    2015-01-01

    We quantify the amount of information filtered by different hierarchical clustering methods on correlations between stock returns comparing the clustering structure with the underlying industrial activity classification. We apply, for the first time to financial data, a novel hierarchical clustering approach, the Directed Bubble Hierarchical Tree and we compare it with other methods including the Linkage and k-medoids. By taking the industrial sector classification of stocks as a benchmark partition, we evaluate how the different methods retrieve this classification. The results show that the Directed Bubble Hierarchical Tree can outperform other methods, being able to retrieve more information with fewer clusters. Moreover, we show that the economic information is hidden at different levels of the hierarchical structures depending on the clustering method. The dynamical analysis on a rolling window also reveals that the different methods show different degrees of sensitivity to events affecting financial markets, like crises. These results can be of interest for all the applications of clustering methods to portfolio optimization and risk hedging. PMID:25786703

  10. Leak Rate Quantification Method for Gas Pressure Seals with Controlled Pressure Differential

    NASA Technical Reports Server (NTRS)

    Daniels, Christopher C.; Braun, Minel J.; Oravec, Heather A.; Mather, Janice L.; Taylor, Shawn C.

    2015-01-01

    An enhancement to the pressure decay leak rate method with mass point analysis solved deficiencies in the standard method. By adding a control system, a constant gas pressure differential across the test article was maintained. As a result, the desired pressure condition was met at the onset of the test, and the mass leak rate and measurement uncertainty were computed in real-time. The data acquisition and control system were programmed to automatically stop when specified criteria were met. Typically, the test was stopped when a specified level of measurement uncertainty was attained. Using silicone O-ring test articles, the new method was compared with the standard method that permitted the downstream pressure to be non-constant atmospheric pressure. The two methods recorded comparable leak rates, but the new method recorded leak rates with significantly lower measurement uncertainty, statistical variance, and test duration. Utilizing this new method in leak rate quantification, projects will reduce cost and schedule, improve test results, and ease interpretation between data sets.
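
    The mass-point analysis underlying pressure decay testing treats the gas in the known test volume with the ideal gas law, m = PV/(RT), so the leak rate is the slope of the mass history. A minimal sketch with hypothetical numbers, assuming constant volume and temperature; the rig's control loop and real-time uncertainty propagation are beyond this sketch:

        import numpy as np

        V = 0.010        # test volume, m^3 (assumed)
        R = 287.05       # specific gas constant of air, J/(kg*K)
        T = 293.15       # gas temperature, K (assumed constant)

        t = np.linspace(0.0, 600.0, 61)          # time, s
        P = 200_000.0 - 0.8 * t                  # hypothetical pressure decay, Pa

        m = P * V / (R * T)                      # mass-point analysis: m = PV/(RT)
        slope, _ = np.polyfit(t, m, 1)           # kg/s; negative slope means leakage
        print(f"leak rate ≈ {-slope:.3e} kg/s")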

  11. Transient excitation and mechanical admittance test techniques for prediction of payload vibration environments

    NASA Technical Reports Server (NTRS)

    Kana, D. D.; Vargas, L. M.

    1977-01-01

    Transient excitation forces were applied separately to simple beam-and-mass launch vehicle and payload models to develop complex admittance functions for the interface and other appropriate points on the structures. These measured admittances were then analytically combined by a matrix representation to obtain a description of the coupled system dynamic characteristics. Response of the payload model to excitation of the launch vehicle model was predicted and compared with results measured on the combined models. These results are also compared with results of earlier work in which a similar procedure was employed except that steady-state sinusoidal excitation techniques were included. It is found that the method employing transient tests produces results that are better overall than the steady state methods. Furthermore, the transient method requires far less time to implement, and provides far better resolution in the data. However, the data acquisition and handling problem is more complex for this method. It is concluded that the transient test and admittance matrix prediction method can be a valuable tool for development of payload vibration tests.

  12. Assessment on the methods of measuring the tyre-road contact patch stresses

    NASA Astrophysics Data System (ADS)

    Anghelache, G.; Moisescu, A.-R.; Buretea, D.

    2017-08-01

    The paper reviews established and modern methods for investigating tri-axial stress distributions in the tyre-road contact patch. The authors used three methods of measuring stress distributions: the strain gauge method, the force sensing technique, and acceleration measurements. Four prototypes of instrumented-pin transducers based on these measuring methods were developed. Contact patch stress distributions were acquired with each instrumented-pin transducer. The results are analysed and compared, underlining the advantages and drawbacks of each method. The experimental results indicate that all three methods are valuable.

  13. Equation of motion coupled cluster methods for electron attachment and ionization potential in polyacenes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhaskaran-Nair, Kiran; Kowalski, Karol; Jarrell, Mark

    2015-11-05

    Polyacenes have attracted considerable attention due to their use in organic based optoelectronic materials. Polyacenes are polycyclic aromatic hydrocarbons composed of fused benzene rings. Key to the understanding and design of new functional materials is an understanding of their excited state properties, starting with their electron affinity (EA) and ionization potential (IP). We have developed a highly accurate and computationally efficient EA/IP equation of motion coupled cluster singles and doubles (EA/IP-EOMCCSD) method that is capable of treating large systems and large basis sets. In this study we employ the EA/IP-EOMCCSD method to calculate the electron affinity and ionization potential of naphthalene, anthracene, tetracene, pentacene, hexacene and heptacene. We have compared our results with previous theoretical studies and experimental data. Our EA/IP results are in very good agreement with experiment and, compared with the other theoretical investigations, represent the most accurate calculations relative to experiment.

  14. Comparing rapid and culture indicator bacteria methods at inland lake beaches

    USGS Publications Warehouse

    Francy, Donna S.; Bushon, Rebecca N.; Brady, Amie M.G.; Kephart, Christopher M.

    2013-01-01

    A rapid method, quantitative polymerase chain reaction (qPCR), for quantifying indicator bacteria in recreational waters is desirable for public health protection. We report that replacing current Escherichia coli standards with new US Environmental Protection Agency beach action values (BAVs) for enterococci by culture or qPCR may result in more advisories being posted at inland recreational lakes. In this study, concentrations of E. coli and enterococci by culture methods were compared to concentrations of Enterococcus spp. by qPCR at 3 inland lake beaches in Ohio. The E. coli and enterococci culture results were significantly related at all beaches; however, the relations between culture results and Enterococcus spp. qPCR results were not always significant and differed among beaches. All the qPCR results exceeded the new BAV for Enterococcus spp. by qPCR, whereas only 23.7% of culture results for E. coli and 79% of culture results for enterococci exceeded the current standard for E. coli or BAV for enterococci.

  15. Antioxidant Capability of Ultra-high Temperature Milk and Ultra-high Temperature Soy Milk and their Fermented Products Determined by Four Distinct Spectrophotometric Methods

    PubMed Central

    Baghbadorani, Sahar Torki; Ehsani, Mohammad Reza; Mirlohi, Maryam; Ezzatpanah, Hamid; Azadbakht, Leila; Babashahi, Mina

    2017-01-01

    Background: Due to the recent emerging information on the antioxidant properties of soy products, substitution of soy milk for milk in the diet has been proposed by some nutritionists. We aimed to compare four distinct antioxidant measuring methods in the evaluation of the antioxidant properties of industrial ultra-high temperature (UHT) milk, UHT soy milk, and their fermented products by Lactobacillus plantarum A7. Materials and Methods: The ascorbate auto-oxidation inhibition assay, the 2,2-diphenyl-1-picryl-hydrazyl-hydrate (DPPH) free radical scavenging method, the hydrogen peroxide neutralization assay, and the reducing activity test were compared for the homogeneity and accuracy of their results. Results: The results obtained by the four tested methods did not completely match each other. The results of the DPPH assay and the reducing activity test were more consistent than those of the other methods. By these methods, the antioxidant capability of UHT soy milk was measured to be higher than that of UHT milk (33.51 ± 6.00% and 945 ± 56 μM cysteine compared to 8.70 ± 3.20% and 795 ± 82 μM cysteine). A negative effect of fermentation on the antioxidant potential of UHT soy milk was revealed, as the ascorbate auto-oxidation inhibition assay, the DPPH method, and the reducing activity test showed reductions of approximately 52%, 58%, and 80%, respectively, in the antioxidant potential of UHT soy milk. Conclusions: The antioxidative properties of UHT soy milk could not be due solely to its phenolic components. Peptides and amino acids derived from thermal processing in soy milk probably have a main role in its antioxidant activity, which should be studied in the future. PMID:28603703

  16. Determination of low methylmercury concentrations in peat soil samples by isotope dilution GC-ICP-MS using distillation and solvent extraction methods.

    PubMed

    Pietilä, Heidi; Perämäki, Paavo; Piispanen, Juha; Starr, Mike; Nieminen, Tiina; Kantola, Marjatta; Ukonmaanaho, Liisa

    2015-04-01

    Most often, only total mercury concentrations in soil samples are determined in environmental studies. However, the determination of the extremely toxic methylmercury (MeHg) in addition to total mercury is critical to understanding the biogeochemistry of mercury in the environment. In this study, N2-assisted distillation and acidic KBr/CuSO4 solvent extraction methods were applied to isolate MeHg from wet peat soil samples collected from boreal forest catchments. Determination of MeHg was performed using a purge and trap GC-ICP-MS technique with species-specific isotope dilution quantification. Distillation is known to be more prone to artificial MeHg formation than solvent extraction, which may result in erroneous MeHg results, especially with samples containing high amounts of inorganic mercury. However, methylation of inorganic mercury during the distillation step had no effect on the reliability of the final MeHg results when natural peat soil samples were distilled. MeHg concentrations determined in peat soil samples after distillation were compared to those determined after solvent extraction. MeHg concentrations in peat soil samples varied from 0.8 to 18 μg kg(-1) (dry weight), and the results obtained with the two different methods did not differ significantly (p=0.05). The distillation method with isotope dilution GC-ICP-MS was shown to be a reliable method for the determination of low MeHg concentrations in unpolluted soil samples. Furthermore, the distillation method is solvent-free and less time-consuming and labor-intensive than the solvent extraction method.

  17. Clinical outcomes of arthroscopic single and double row repair in full thickness rotator cuff tears.

    PubMed

    Ji, Jong-Hun; Shafi, Mohamed; Kim, Weon-Yoo; Kim, Young-Yul

    2010-07-01

    There has been recent interest in the double row repair method for arthroscopic rotator cuff repair, following favourable biomechanical results reported by some studies. The purpose of this study was to compare the clinical results of arthroscopic single row and double row repair methods in full-thickness rotator cuff tears. Twenty-two patients who underwent arthroscopic single row repair (group I) and 25 patients who underwent double row repair (group II) from March 2003 to March 2005 were retrospectively evaluated and compared for clinical outcomes. The mean age was 58 years and 56 years for groups I and II, respectively. The average follow-up in the two groups was 24 months. The evaluation was done using the University of California Los Angeles (UCLA) rating scale and the shoulder index of the American Shoulder and Elbow Surgeons (ASES). The mean ASES score increased from 30.48 to 87.40 in group I and from 32.00 to 91.45 in group II. The mean UCLA score increased from a preoperative 12.23 to 30.82 in group I and from 12.20 to 32.40 in group II. No statistically significant clinical difference was found between the two methods, but based on the subscores of the UCLA score, the double row repair method yielded better results in recovering strength and gave more satisfaction to the patients than the single row repair method.

  18. Occurrence of Conotruncal Heart Birth Defects in Texas: A Comparison of Urban/Rural Classifications

    ERIC Educational Resources Information Center

    Langlois, Peter H.; Jandle, Leigh; Scheuerle, Angela; Horel, Scott A.; Carozza, Susan E.

    2010-01-01

    Purpose: (1) Determine if there is an association between 3 conotruncal heart birth defects and urban/rural residence of mother. (2) Compare results using different methods of measuring urban/rural status. Methods: Data were taken from the Texas Birth Defects Registry, 1999-2003. Poisson regression was used to compare crude and adjusted birth…

  19. Hypertext Glosses for Foreign Language Reading Comprehension and Vocabulary Acquisition: Effects of Assessment Methods

    ERIC Educational Resources Information Center

    Chen, I-Jung

    2016-01-01

    This study compared how three different gloss modes affected college students' L2 reading comprehension and vocabulary acquisition. The study also compared how results on comprehension and vocabulary acquisition may differ depending on the four assessment methods used. A between-subjects design was employed with three groups of Mandarin-speaking…

  20. Two methods for proteomic analysis of formalin-fixed, paraffin embedded tissue result in differential protein identification, data quality, and cost.

    PubMed

    Luebker, Stephen A; Wojtkiewicz, Melinda; Koepsell, Scott A

    2015-11-01

    Formalin-fixed paraffin-embedded (FFPE) tissue is a rich source of clinically relevant material that can yield important translational biomarker discovery using proteomic analysis. Protocols for analyzing FFPE tissue by LC-MS/MS exist, but standardization of procedures and critical analysis of data quality is limited. This study compared and characterized data obtained from FFPE tissue using two methods: a urea in-solution digestion method (UISD) versus a commercially available Qproteome FFPE Tissue Kit method (Qkit). Each method was performed independently three times on serial sections of homogenous FFPE tissue to minimize pre-analytical variations and analyzed with three technical replicates by LC-MS/MS. Data were evaluated for reproducibility and physicochemical distribution, which highlighted differences in the ability of each method to identify proteins of different molecular weights and isoelectric points. Each method replicate resulted in a significant number of new protein identifications, and both methods identified significantly more proteins using three technical replicates as compared to only two. UISD was cheaper and required less time, but introduced significant protein modifications as compared to the Qkit method, which provided more precise and higher protein yields. These data highlight significant variability among method replicates and between the methods used, despite minimized pre-analytical variability. Utilization of only one method or too few replicates (both method and technical) may limit the subset of proteomic information obtained.

  1. Evaluation of MODFLOW-LGR in connection with a synthetic regional-scale model

    USGS Publications Warehouse

    Vilhelmsen, T.N.; Christensen, S.; Mehl, S.W.

    2012-01-01

    This work studies costs and benefits of utilizing local-grid refinement (LGR) as implemented in MODFLOW-LGR to simulate groundwater flow in a buried tunnel valley interacting with a regional aquifer. Two alternative LGR methods were used: the shared-node (SN) method and the ghost-node (GN) method. To conserve flows the SN method requires correction of sources and sinks in cells at the refined/coarse-grid interface. We found that the optimal correction method is case dependent and difficult to identify in practice. However, the results showed little difference and suggest that identifying the optimal method was of minor importance in our case. The GN method does not require corrections at the models' interface, and it uses a simpler head interpolation scheme than the SN method. The simpler scheme is faster but less accurate so that more iterations may be necessary. However, the GN method solved our flow problem more efficiently than the SN method. The MODFLOW-LGR results were compared with the results obtained using a globally coarse (GC) grid. The LGR simulations required one to two orders of magnitude longer run times than the GC model. However, the improvements of the numerical resolution around the buried valley substantially increased the accuracy of simulated heads and flows compared with the GC simulation. Accuracy further increased locally around the valley flanks when improving the geological resolution using the refined grid. Finally, comparing MODFLOW-LGR simulation with a globally refined (GR) grid showed that the refinement proportion of the model should not exceed 10% to 15% in order to secure method efficiency.

  2. Comparison of Video Head Impulse Test (vHIT) Gains Between Two Commercially Available Devices and by Different Gain Analytical Methods.

    PubMed

    Lee, Sang Hun; Yoo, Myung Hoon; Park, Jun Woo; Kang, Byung Chul; Yang, Chan Joo; Kang, Woo Suk; Ahn, Joong Ho; Chung, Jong Woo; Park, Hong Ju

    2018-06-01

    To evaluate whether video head impulse test (vHIT) gains are dependent on the measuring device and method of analysis. Prospective study. vHIT was performed in 25 healthy subjects using two devices simultaneously. vHIT gains were compared between these instruments and using five different methods of comparing position and velocity gains during head movement intervals. The two devices produced different vHIT gain results with the same method of analysis. There were also significant differences in the vHIT gains measured using different analytical methods. The gain analytic method that compares the areas under the velocity curve (AUC) of the head and eye movements during head movements showed lower vHIT gains than a method that compared the peak velocities of the head and eye movements. The former method produced the vHIT gain with the smallest standard deviation among the five procedures tested in this study. vHIT gains differ in normal subjects depending on the device and method of analysis used, suggesting that it is advisable for each device to have its own normal values. Gain calculations that compare the AUC of the head and eye movements during the head movements show the smallest variance.
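
    The two gain definitions at issue can be made concrete: one divides peak eye velocity by peak head velocity, the other divides the areas under the eye and head velocity curves over the head-movement interval. A small sketch with hypothetical velocity traces (illustrative numbers only); consistent with the record, a response waveform slightly narrower than the head's yields a lower AUC gain than peak gain:

        import numpy as np

        t = np.linspace(0.0, 0.15, 151)                        # s, one head impulse
        head = 250.0 * np.sin(np.pi * t / 0.15)                # hypothetical head velocity, deg/s
        eye = 0.90 * 250.0 * np.sin(np.pi * t / 0.15) ** 1.2   # slightly narrower eye response

        def auc(v):
            return np.sum((v[1:] + v[:-1]) * np.diff(t)) / 2.0  # trapezoidal area

        gain_peak = eye.max() / head.max()                      # peak-velocity gain
        gain_area = auc(eye) / auc(head)                        # area-under-curve gain
        print(f"peak gain = {gain_peak:.3f}, AUC gain = {gain_area:.3f}")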

  3. Numerical Asymptotic Solutions Of Differential Equations

    NASA Technical Reports Server (NTRS)

    Thurston, Gaylen A.

    1992-01-01

    Numerical algorithms are derived and compared with classical analytical methods. In the method, asymptotic expansions are replaced with integrals that are evaluated numerically. The resulting numerical solutions retain linear independence, the main advantage of asymptotic solutions.

  4. A Comparative Analysis of Three Monocular Passive Ranging Methods on Real Infrared Sequences

    NASA Astrophysics Data System (ADS)

    Bondžulić, Boban P.; Mitrović, Srđan T.; Barbarić, Žarko P.; Andrić, Milenko S.

    2013-09-01

    Three monocular passive ranging methods are analyzed and tested on real infrared sequences. The first method exploits scale changes of an object in successive frames, while the other two use the Beer-Lambert law. The ranging methods are evaluated by comparison with reference data obtained simultaneously at the test site. The research addresses scenarios where multiple sensor views or active measurements are not possible. The results show that these methods for range estimation can provide the fidelity required for object tracking. Maximum values of relative distance estimation errors in near-ideal conditions are less than 8%.

  5. Direct imaging of small scatterers using reduced time dependent data

    NASA Astrophysics Data System (ADS)

    Cakoni, Fioralba; Rezac, Jacob D.

    2017-06-01

    We introduce qualitative methods for locating small objects using time dependent acoustic near field waves. These methods have reduced data collection requirements compared to typical qualitative imaging techniques. In particular, we only collect scattered field data in a small region surrounding the location from which an incident field was transmitted. The new methods are partially theoretically justified and numerical simulations demonstrate their efficacy. We show that these reduced data techniques give comparable results to methods which require full multistatic data and that these time dependent methods require less scattered field data than their time harmonic analogs.

  6. Interactive Visual Least Absolutes Method: Comparison with the Least Squares and the Median Methods

    ERIC Educational Resources Information Center

    Kim, Myung-Hoon; Kim, Michelle S.

    2016-01-01

    A visual regression analysis using the least absolutes method (LAB) was developed, utilizing an interactive approach of visually minimizing the sum of the absolute deviations (SAB) using a bar graph in Excel; the results agree very well with those obtained from nonvisual LAB using a numerical Solver in Excel. These LAB results were compared with…

  7. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review.

    PubMed

    Mathes, Tim; Klaßen, Pauline; Pieper, Dawid

    2017-11-28

    Our objective was to assess the frequency of data extraction errors and its potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results. We performed a systematic review of methodological literature in PubMed, Cochrane methodological registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted in standardized tables by one reviewer and verified by a second. The analysis included six studies; four studies on extraction error frequency, one study comparing different reviewer extraction methods and two studies comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%). Errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had moderate effect on extraction error rates and effect estimates. The evidence base for established standards of data extraction seems weak despite the high prevalence of extraction errors. More comparative studies are needed to get deeper insights into the influence of different extraction methods.

  8. Comparison of power curve monitoring methods

    NASA Astrophysics Data System (ADS)

    Cambron, Philippe; Masson, Christian; Tahan, Antoine; Torres, David; Pelletier, Francis

    2017-11-01

    Performance monitoring is an important aspect of operating wind farms. This can be done through power curve monitoring (PCM) of wind turbines (WT). In past years, important work has been conducted on PCM. Various methodologies have been proposed, each one with interesting results. However, it is difficult to compare these methods because they have been developed using their respective data sets. The objective of the present work is to compare some of the proposed PCM methods using common data sets. The metric used to compare the PCM methods is the time needed to detect a change in the power curve. Two power curve models are covered to establish the effect the model type has on the monitoring outcomes. Each model was tested with two control charts. Other methodologies and metrics proposed in the literature for power curve monitoring, such as areas under the power curve and the use of statistical copulas, are also covered. Results demonstrate that model-based PCM methods are more reliable at detecting a performance change than other methodologies and that the effectiveness of the control chart depends on the type of shift observed.

  9. Comparison of modal analysis results of laser vibrometry and nearfield acoustical holography measurements of an aluminum plate

    NASA Astrophysics Data System (ADS)

    Potter, Jennifer L.

    2011-12-01

    Reducing noise and vibration has long been a goal in major industries: automotive, aerospace, and marine, to name a few. Products must be tested and pass certain levels of federally regulated standards before entering the market. Vibration measurements are commonly acquired using accelerometers; however, limitations of this method create a need for alternative solutions. Two methods for non-contact vibration measurement are compared: Laser Vibrometry, which directly measures the surface velocity of an aluminum plate, and Nearfield Acoustic Holography (NAH), which measures sound pressure in the nearfield and, using Green's functions, reconstructs the surface velocity at the plate. The surface velocity from each method is then used in modal analysis to determine the comparability of frequency, damping, and mode shapes. Frequency and mode shapes are also compared to an FEA model. Laser Vibrometry is a proven, direct method for determining surface velocity and subsequently calculating modal analysis results. NAH is an effective method for locating noise sources, especially those that are not well separated spatially. Little work has been done on incorporating NAH into modal analysis.

  10. Calculating semantic relatedness for biomedical use in a knowledge-poor environment.

    PubMed

    Rybinski, Maciej; Aldana-Montes, José

    2014-01-01

    Computing semantic relatedness between textual labels representing biological and medical concepts is a crucial task in many automated knowledge extraction and processing applications relevant to the biomedical domain, specifically due to the huge amount of new findings being published each year. Most methods benefit from making use of highly specific resources, thus reducing their usability in many real-world scenarios that differ from the original assumptions. In this paper we present a simple resource-efficient method for calculating semantic relatedness in a knowledge-poor environment. The method obtains results comparable to state-of-the-art methods, while being more generic and flexible. The solution being presented here was designed to use only a relatively generic and small document corpus and its statistics, without referring to a previously defined knowledge base; thus it does not assume a 'closed' problem. We propose a method in which computation for two input texts is based on the idea of comparing the vocabulary associated with the best-fit documents related to those texts. As keyterm extraction is a costly process, it is done in a preprocessing step on a 'per-document' basis in order to limit the on-line processing. The actual computations are executed in a compact vector space, limited by the most informative extraction results. The method has been evaluated on five direct benchmarks by calculating correlation coefficients w.r.t. average human answers. It has also been used on Gene-Disease and Disease-Disease data pairs to highlight its potential use as a data analysis tool. Apart from comparisons with reported results, some interesting features of the method have been studied, i.e. the relationship between result quality, efficiency and the applicable trimming threshold for size reduction. Experimental evaluation shows that the presented method obtains results that are comparable with current state-of-the-art methods, even surpassing them on a majority of the benchmarks. Additionally, a possible usage scenario for the method is showcased with a real-world data experiment. Our method improves the flexibility of existing methods without a notable loss of quality. It is a legitimate alternative to the costly construction of specialized knowledge-rich resources.
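
    The core idea, representing each input text by the vocabulary of its best-fit corpus documents and comparing in a compact vector space, can be sketched with off-the-shelf tools. A toy corpus stands in for the document collection; everything here is illustrative and is not the authors' implementation:

        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        # Toy stand-in for the generic document corpus and its statistics.
        corpus = [
            "insulin regulates glucose metabolism in diabetes",
            "tumor suppressor genes and cancer progression",
            "glucose tolerance and metabolic syndrome risk",
            "chemotherapy response in breast cancer patients",
        ]
        vec = TfidfVectorizer()
        doc_vectors = vec.fit_transform(corpus)

        def profile(label, k=2):
            """Represent a label by the mean vector of its k best-fit corpus documents."""
            sims = cosine_similarity(vec.transform([label]), doc_vectors)[0]
            best = np.argsort(-sims)[:k]
            return np.asarray(doc_vectors[best].mean(axis=0))

        a, b = profile("diabetes"), profile("metabolic syndrome")
        print("relatedness:", float(cosine_similarity(a, b)[0, 0]))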

  11. Comparative analysis of methods and sources of financing of the transport organizations activity

    NASA Astrophysics Data System (ADS)

    Gorshkov, Roman

    2017-10-01

    The article analyses methods of financing transport organizations under conditions of limited investment resources. A comparative analysis of these methods is carried out, and a classification of investments, methods, and sources of financial support for projects implemented to date is presented. In order to select the optimal sources of financing for projects, various methods of financial management and financial support for the activities of a transport organization were analysed from the perspective of their advantages and limitations. The result of the study is a set of recommendations on the selection of optimal sources and methods of financing for transport organizations.

  12. A Study of Impact Point Detecting Method Based on Seismic Signal

    NASA Astrophysics Data System (ADS)

    Huo, Pengju; Zhang, Yu; Xu, Lina; Huang, Yong

    The landing position of a projectile has to be determined for its recovery and for range measurement in targeting tests. In this paper, a global search method based on the velocity variance is proposed. In order to verify the applicability of this method, a simulation analysis covering four million square meters was conducted with the same array structure as the commonly used linear positioning method, and MATLAB was used to compare and analyze the two methods. The simulation results show that the global search method based on the velocity variance has high positioning accuracy and stability, and can meet the needs of impact point location.

  13. U.S. Geological Survey experience with the residual absolutes method

    USGS Publications Warehouse

    Worthington, E. William; Matzka, Jurgen

    2017-01-01

    The U.S. Geological Survey (USGS) Geomagnetism Program has developed and tested the residual method of absolutes, with the assistance of the Danish Technical University's (DTU) Geomagnetism Program. Three years of testing were performed at College Magnetic Observatory (CMO), Fairbanks, Alaska, to compare the residual method with the null method. Results show that the two methods compare very well with each other and both sets of baseline data were used to process the 2015 definitive data. The residual method will be implemented at the other USGS high-latitude geomagnetic observatories in the summer of 2017 and 2018.

  14. COMPARISON OF METHODS FOR MEASURING CONCENTRATIONS OF SEMIVOLATILE PARTICULATE MATTER

    EPA Science Inventory

    The paper gives results of a comparison of methods for measuring concentrations of semivolatile particulate matter (PM) from indoor-environment, small, combustion sources. Particle concentration measurements were compared for methods using filters and a small electrostatic precip...

  15. Kentucky highway rating system

    DOT National Transportation Integrated Search

    2003-03-01

    This study had two goals: 1. Formulate a new method for generating roadway adequacy ratings; 2. Construct an appropriate data set and then test the method by comparing it to the results of the HPMS-AP method. The recommended methodology builds on the...

  16. A new clocking method for a charge coupled device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Umezu, Rika; Kitamoto, Shunji, E-mail: kitamoto@rikkyo.ac.jp; Murakami, Hiroshi

    2014-07-15

    We propose and demonstrate a new clocking method for a charge-coupled device (CCD). When a CCD is used as a photon counting detector for X-rays, its weak point is a limitation on its counting rate, because a high counting rate produces non-negligible pile-up of photons. In astronomical usage, this pile-up is especially severe for observations of a bright point-like object. One typical idea to reduce the pile-up is a parallel sum (P-sum) mode. This mode completely loses one-dimensional information. Our new clocking method, the panning mode, provides complementary properties between the normal mode and the P-sum mode. We performed a simple simulation in order to investigate the pile-up probability and compared the simulated result with actually obtained event rates. Using this simulation and the experimental results, we compared the pile-up tolerance of various clocking modes, including our new method, and also compared their other characteristics.

  17. Pressure measurements in a low-density nozzle plume for code verification

    NASA Technical Reports Server (NTRS)

    Penko, Paul F.; Boyd, Iain D.; Meissner, Dana L.; Dewitt, Kenneth J.

    1991-01-01

    Measurements of Pitot pressure were made in the exit plane and plume of a low-density, nitrogen nozzle flow. Two numerical computer codes were used to analyze the flow, including one based on continuum theory using the explicit MacCormack method, and the other on kinetic theory using the method of direct-simulation Monte Carlo (DSMC). The continuum analysis was carried to the nozzle exit plane and the results were compared to the measurements. The DSMC analysis was extended into the plume of the nozzle flow and the results were compared with measurements at the exit plane and axial stations 12, 24 and 36 mm into the near-field plume. Two experimental apparatus were used that differed in design and gave slightly different profiles of pressure measurements. The DSMC method compared well with the measurements from each apparatus at all axial stations and provided a more accurate prediction of the flow than the continuum method, verifying the validity of DSMC for such calculations.

  18. Comparison of risk assessment procedures used in OCRA and ULRA methods

    PubMed Central

    Roman-Liu, Danuta; Groborz, Anna; Tokarski, Tomasz

    2013-01-01

    The aim of this study was to analyse the convergence of two methods by comparing exposure and the assessed risk of developing musculoskeletal disorders at 18 repetitive task workstations. The already established occupational repetitive actions (OCRA) method and the recently developed upper limb risk assessment (ULRA) produce correlated results (R = 0.84, p = 0.0001). A discussion of the factors that influence the values of the OCRA index and ULRA's repetitive task indicator shows that both similarities and differences in the results produced by the two methods can arise from the concepts that underlie them. The assessment procedure and the mathematical calculations that the basic parameters are subjected to are crucial to the results of risk assessment. The way the basic parameters are defined influences the assessment of exposure and risk to a lesser degree. The analysis also showed that large differences in load indicator values do not always result in differences in risk zones. Practitioner Summary: We focused on comparing methods that, even though based on different concepts, serve the same purpose. The results proved that different methods with different assumptions can produce similar assessments of upper limb load; sharp criteria in risk assessment are not the best solution. PMID:24041375

  19. A study of transonic aerodynamic analysis methods for use with a hypersonic aircraft synthesis code

    NASA Technical Reports Server (NTRS)

    Sandlin, Doral R.; Davis, Paul Christopher

    1992-01-01

    A means of performing routine transonic lift, drag, and moment analyses on hypersonic all-body and wing-body configurations was studied. The analysis method is to be used in conjunction with the Hypersonic Vehicle Optimization Code (HAVOC). A review of existing techniques is presented, after which three methods, chosen to represent a spectrum of capabilities, are tested and the results compared with experimental data. The three methods consist of a wave drag code, a full potential code, and a Navier-Stokes code. The wave drag code, representing the empirical approach, has very fast CPU times but very limited and sporadic results. The full potential code provides results that compare favorably to the wind tunnel data, but with a dramatic increase in computational time. Even more extreme is the Navier-Stokes code, which provides the most favorable and complete results, but with a very large turnaround time. The full potential code, TRANAIR, is used for additional analyses because of the superior results it can provide over empirical and semi-empirical methods, and because of its automated grid generation. TRANAIR analyses include an all-body hypersonic cruise configuration and an oblique flying wing supersonic transport.

  20. Loop shaping design for tracking performance in machine axes.

    PubMed

    Schinstock, Dale E; Wei, Zhouhong; Yang, Tao

    2006-01-01

    A modern interpretation of classical loop shaping control design methods is presented in the context of tracking control for linear motor stages. Target applications include noncontacting machines such as laser cutters and markers, water jet cutters, and adhesive applicators. The methods are directly applicable to the common PID controller and are pertinent to many electromechanical servo actuators other than linear motors. In addition to explicit design techniques a PID tuning algorithm stressing the importance of tracking is described. While the theory behind these techniques is not new, the analysis of their application to modern systems is unique in the research literature. The techniques and results should be important to control practitioners optimizing PID controller designs for tracking and in comparing results from classical designs to modern techniques. The methods stress high-gain controller design and interpret what this means for PID. Nothing in the methods presented precludes the addition of feedforward control methods for added improvements in tracking. Laboratory results from a linear motor stage demonstrate that with large open-loop gain very good tracking performance can be achieved. The resultant tracking errors compare very favorably to results from similar motions on similar systems that utilize much more complicated controllers.

  1. Comparative analysis of different survey methods for monitoring fish assemblages in coastal habitats.

    PubMed

    Baker, Duncan G L; Eddy, Tyler D; McIver, Reba; Schmidt, Allison L; Thériault, Marie-Hélène; Boudreau, Monica; Courtenay, Simon C; Lotze, Heike K

    2016-01-01

    Coastal ecosystems are among the most productive yet increasingly threatened marine ecosystems worldwide. Vegetated habitats in particular, such as eelgrass (Zostera marina) beds, play important roles in providing key spawning, nursery and foraging habitats for a wide range of fauna. To properly assess changes in coastal ecosystems and manage these critical habitats, it is essential to develop sound monitoring programs for foundation species and associated assemblages. Several survey methods exist; thus, understanding how different methods perform is important for survey selection. We compared two common methods for surveying macrofaunal assemblages: beach seine netting and underwater visual census (UVC). We also tested whether assemblages in shallow nearshore habitats commonly sampled by beach seines are similar to those of nearby eelgrass beds often sampled by UVC. Among five estuaries along the Southern Gulf of St. Lawrence, Canada, our results suggest that the two survey methods yield comparable results for species richness, diversity and evenness, yet beach seines yield significantly higher abundance and different species composition. However, sampling nearshore assemblages does not represent those in eelgrass beds despite considerable overlap and close proximity. These results have important implications for how and where macrofaunal assemblages are monitored in coastal ecosystems. Ideally, multiple survey methods and locations should be combined to complement each other in assessing the entire assemblage and full range of changes in coastal ecosystems, thereby better informing coastal zone management.

  2. Statistical Validation of Automatic Methods for Hippocampus Segmentation in MR Images of Epileptic Patients

    PubMed Central

    Hosseini, Mohammad-Parsa; Nazem-Zadeh, Mohammad R.; Pompili, Dario; Soltanian-Zadeh, Hamid

    2015-01-01

    Hippocampus segmentation is a key step in the evaluation of mesial Temporal Lobe Epilepsy (mTLE) by MR images. Several automated segmentation methods have been introduced for medical image segmentation. Because of multiple edges, missing boundaries, and shape changes along its longitudinal axis, manual outlining still remains the benchmark for hippocampus segmentation, which, however, is impractical for large datasets due to time constraints. In this study, four automatic methods, namely FreeSurfer, Hammer, Automatic Brain Structure Segmentation (ABSS), and LocalInfo segmentation, are evaluated to find the most accurate and applicable method that best resembles the manual benchmark. Results from these four methods are compared against those obtained using manual segmentation for T1-weighted images of 157 symptomatic mTLE patients. For performance evaluation of automatic segmentation, the Dice coefficient, Hausdorff distance, precision, and root mean square (RMS) distance are extracted and compared. Among these four automated methods, ABSS generates the most accurate results, and its reproducibility is closest to that of expert manual outlining by statistical validation. Considering p-value < 0.05, the performance measurements for ABSS reveal that Dice is 4%, 13%, and 17% higher; Hausdorff is 23%, 87%, and 70% lower; precision is 5%, -5%, and 12% higher; and RMS is 19%, 62%, and 65% lower compared to LocalInfo, FreeSurfer, and Hammer, respectively. PMID:25571043
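
    Of the reported measures, the Dice coefficient is the simplest to state: twice the overlap of the two segmentations divided by the sum of their sizes. A minimal sketch on hypothetical binary masks, not the study's data:

        import numpy as np

        def dice(a, b):
            """Dice overlap between two boolean segmentation masks."""
            a, b = a.astype(bool), b.astype(bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        rng = np.random.default_rng(3)
        manual = rng.random((64, 64, 64)) < 0.1            # stand-in for an expert outline
        auto = manual ^ (rng.random(manual.shape) < 0.02)  # perturbed automatic result
        print(f"Dice = {dice(manual, auto):.3f}")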

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wight, L.; Zaslawsky, M.

    Two approaches for calculating soil structure interaction (SSI) are compared: finite element and lumped mass. Results indicate that the calculations with the lumped mass method are generally conservative compared to those obtained by the finite element method. They also suggest that a closer agreement between the two sets of calculations is possible, depending on the use of frequency-dependent soil springs and dashpots in the lumped mass calculations. There is a total lack of suitable guidelines for implementing the lumped mass method of calculating SSI, which leads to the conclusion that the finite element method is generally superior for calculative purposes.

  4. Surgical treatment of chronic pancreatitis and its complications. Comparative analysis of results in 91 patients.

    PubMed

    Marinov, V; Draganov, K; Gaydarski, R; Katev, N N

    2013-01-01

    There is a large variety of proposed conservative, invasive, endoscopic and surgical methods for the treatment of chronic pancreatitis and its complications. This study presents a comparative analysis of the results in a total of 91 patients with chronic pancreatitis and its complications, grouped by drainage, resection, denervation and other operative techniques. Drainage and resection operative techniques yield comparable results in terms of postoperative pain control (93.1% and 100%), perioperative mortality (3.17% and 5.8%), and perioperative morbidity (7.9% and 11.7%), respectively. There is a significant increase in the incidence of diabetes in the resection group. Right-side semilunar ganglionectomy is a good method for pain control as an accompanying procedure in the course of another main operative technique.

  5. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10^-11 compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different than theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10^-11 and invasion time error to <5%.
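
    The cell-integration versus cell-center distinction can be sketched directly for a 1-D Gaussian kernel: one integrates the density over each cell, the other samples the density at the cell center and renormalizes. A minimal illustration, assuming a unit cell size:

        import numpy as np
        from scipy.stats import norm

        sigma = 0.5                     # kernel width in cell units
        cells = np.arange(-5, 6)        # 1-D cell indices, unit cell size

        # Cell-integration: probability mass inside each cell [i - 0.5, i + 0.5].
        k_int = norm.cdf(cells + 0.5, scale=sigma) - norm.cdf(cells - 0.5, scale=sigma)

        # Cell-center: density sampled at each cell center, renormalized to sum to 1.
        k_ctr = norm.pdf(cells, scale=sigma)
        k_ctr /= k_ctr.sum()

        print("max |difference| =", np.abs(k_int - k_ctr).max())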

  6. Graphical method for comparative statistical study of vaccine potency tests.

    PubMed

    Pay, T W; Hingley, P J

    1984-03-01

    Producers and consumers are interested in some of the intrinsic characteristics of vaccine potency assays for the comparative evaluation of suitable experimental design. A graphical method is developed which represents the precision of test results, the sensitivity of such results to changes in dosage, and the relevance of the results in the way they reflect the protection afforded in the host species. The graphs can be constructed from Producer's scores and Consumer's scores on each of the scales of test score, antigen dose and probability of protection against disease. A method for calculating these scores is suggested and illustrated for single and multiple component vaccines, for tests which do or do not employ a standard reference preparation, and for tests which employ quantitative or quantal systems of scoring.

  7. An information hiding method based on LSB and tent chaotic map

    NASA Astrophysics Data System (ADS)

    Song, Jianhua; Ding, Qun

    2011-06-01

    In order to protect information security more effectively, a novel information hiding method based on LSB steganography and the tent chaotic map is proposed: the secret message is first encrypted using the tent chaotic map, and the encrypted message is then embedded in the cover image by LSB steganography. Compared with traditional image information hiding methods, the simulation results indicate that the proposed method greatly improves imperceptibility and security, and achieves good results.
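
    A minimal sketch of the two stages on a grayscale image array: a tent-map iteration generates a chaotic keystream that is XORed with the message bytes, and the resulting bits are written into the least significant bits of the first pixels. Parameter values and the embedding layout are illustrative, not the paper's:

        import numpy as np

        def tent_keystream(x0, n, mu=1.9999):
            """Byte keystream from the tent map x <- mu * min(x, 1 - x)."""
            x, out = x0, np.empty(n, dtype=np.uint8)
            for i in range(n):
                x = mu * min(x, 1.0 - x)
                out[i] = int(x * 256) & 0xFF
            return out

        def embed(cover, message, x0):
            enc = np.frombuffer(message, np.uint8) ^ tent_keystream(x0, len(message))
            bits = np.unpackbits(enc)
            stego = cover.flatten().copy()
            stego[: bits.size] = (stego[: bits.size] & 0xFE) | bits  # overwrite LSBs
            return stego.reshape(cover.shape)

        cover = np.random.default_rng(4).integers(0, 256, (64, 64), dtype=np.uint8)
        stego = embed(cover, b"secret", x0=0.3141592)
        print("max pixel change:", int(np.abs(stego.astype(int) - cover.astype(int)).max()))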

  8. Laser ultrasonics for measurements of high-temperature elastic properties and internal temperature distribution

    NASA Astrophysics Data System (ADS)

    Matsumoto, Takahiro; Nagata, Yasuaki; Nose, Tetsuro; Kawashima, Katsuhiro

    2001-06-01

    We show two kinds of demonstrations using a laser ultrasonic method. First, we present the results of Young's modulus of ceramics at temperatures above 1600 °C. Second, we introduce the method to determine the internal temperature distribution of a hot steel plate with errors of less than 3%. We compare the results obtained by this laser ultrasonic method with conventional contact techniques to show the validity of this method.

  9. Cost-effectiveness of peer role play and standardized patients in undergraduate communication training.

    PubMed

    Bosse, Hans Martin; Nickel, Martin; Huwendiek, Sören; Schultz, Jobst Hendrik; Nikendei, Christoph

    2015-10-24

    The few studies directly comparing the methodological approaches of peer role play (RP) and standardized patients (SP) for the delivery of communication skills all suggest that both methods are effective. In this study we calculated the costs of both methods (given comparable outcomes) and are the first to generate a differential cost-effectiveness analysis of the two. Medical students in their prefinal year were randomly assigned to one of two groups receiving communication training in Pediatrics either with RP (N = 34) or with 19 individually trained SPs (N = 35). In an OSCE with standardized patients using the Calgary-Cambridge Referenced Observation Guide, both groups achieved comparably high scores (results published). In this study, the corresponding costs were assessed as the man-hours worked by SPs and tutors. A cost-effectiveness analysis was performed. It revealed a major advantage for RP as compared to SP (112 vs. 172 man-hours; cost-effectiveness ratio 0.74 vs. 0.45) at comparable performance levels after training with both methods. While both peer role play and training with standardized patients have their value in medical curricula, RP has a major advantage in terms of cost-effectiveness. This could be taken into account in future decisions.

  10. Genomic prediction in animals and plants: simulation of data, validation, reporting, and benchmarking.

    PubMed

    Daetwyler, Hans D; Calus, Mario P L; Pong-Wong, Ricardo; de Los Campos, Gustavo; Hickey, John M

    2013-02-01

    The genomic prediction of phenotypes and breeding values in animals and plants has developed rapidly into its own research field. Results of genomic prediction studies are often difficult to compare because data simulation varies, real or simulated data are not fully described, and not all relevant results are reported. In addition, some new methods have been compared only in limited genetic architectures, leading to potentially misleading conclusions. In this article we review simulation procedures, discuss validation and reporting of results, and apply benchmark procedures for a variety of genomic prediction methods in simulated and real example data. Plant and animal breeding programs are being transformed by the use of genomic data, which are becoming widely available and cost-effective to predict genetic merit. A large number of genomic prediction studies have been published using both simulated and real data. The relative novelty of this area of research has made the development of scientific conventions difficult with regard to description of the real data, simulation of genomes, validation and reporting of results, and forward in time methods. In this review article we discuss the generation of simulated genotype and phenotype data, using approaches such as the coalescent and forward in time simulation. We outline ways to validate simulated data and genomic prediction results, including cross-validation. The accuracy and bias of genomic prediction are highlighted as performance indicators that should be reported. We suggest that a measure of relatedness between the reference and validation individuals be reported, as its impact on the accuracy of genomic prediction is substantial. A large number of methods were compared in example simulated and real (pine and wheat) data sets, all of which are publicly available. In our limited simulations, most methods performed similarly in traits with a large number of quantitative trait loci (QTL), whereas in traits with fewer QTL variable selection did have some advantages. In the real data sets examined here all methods had very similar accuracies. We conclude that no single method can serve as a benchmark for genomic prediction. We recommend comparing accuracy and bias of new methods to results from genomic best linear prediction and a variable selection approach (e.g., BayesB), because, together, these methods are appropriate for a range of genetic architectures. An accompanying article in this issue provides a comprehensive review of genomic prediction methods and discusses a selection of topics related to application of genomic prediction in plants and animals.

  12. Validation sampling can reduce bias in healthcare database studies: an illustration using influenza vaccination effectiveness

    PubMed Central

    Nelson, Jennifer C.; Marsh, Tracey; Lumley, Thomas; Larson, Eric B.; Jackson, Lisa A.; Jackson, Michael

    2014-01-01

    Objective Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased due to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. Study Design and Setting We applied two such methods, imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method’s ability to reduce bias using the control time period prior to influenza circulation. Results Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not utilize the validation sample confounders. Conclusion Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from healthcare database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which data can be imputed or reweighted using the additional validation sample information. PMID:23849144

  13. Comparison of Vitek Matrix-assisted Laser Desorption/Ionization Time-of-Flight Mass Spectrometry Versus Conventional Methods in Candida Identification.

    PubMed

    Keçeli, Sema Aşkın; Dündar, Devrim; Tamer, Gülden Sönmez

    2016-02-01

    Candida species are generally identified by conventional methods such as the germ tube test or morphological appearance on corn meal agar, biochemical methods using API kits, and molecular biological methods. As an alternative to these methods, a rapid and accurate identification method called matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) has recently been described. In this study, Candida identification results from the API Candida kit, the API 20C AUX kit and identification on corn meal agar (CMA) were compared with the results obtained on Vitek-MS. All results were confirmed by sequencing the internal transcribed spacer (ITS) regions of rDNA. In total, 97 Candida strains were identified by the germ tube test, CMA, API and Vitek-MS. Vitek-MS results were compatible with 74.2 % of API 20C AUX and 81.4 % of CMA results. Differences between the results of API Candida and API 20C AUX were also detected. The rate of discrepancy between Vitek-MS and API 20C AUX was 25.8 %. Candida species mostly identified as C. famata or C. tropicalis by the API kits, with results not compatible with the other methods, were identified as C. albicans by Vitek-MS. Sixteen Candida species with discrepant results between Vitek-MS, API or CMA were randomly chosen, and ITS sequence analysis was performed. The sequencing results were compatible with 56.2 % of API 20C AUX, 50 % of CMA and 93.7 % of Vitek-MS results. Compared with conventional identification methods, MS results are more reliable and rapid for Candida identification. The MS system may be used as a routine identification method in clinical microbiology laboratories.

  14. Background estimation and player detection in badminton video clips using histogram of pixel values along temporal dimension

    NASA Astrophysics Data System (ADS)

    Peng, Yahui; Ma, Xiao; Gao, Xinyu; Zhou, Fangxu

    2015-12-01

    Computer vision is an important tool for sports video processing. However, its application in badminton match analysis is very limited. In this study, we propose straightforward but robust histogram-based background estimation and player detection methods for badminton video clips, and compare the results with the naive averaging method and the mixture of Gaussians method, respectively. The proposed method yielded better background estimation results than the naive averaging method and more accurate player detection results than the mixture of Gaussians player detection method. The preliminary results indicate that the proposed histogram-based method can estimate the background and extract the players accurately. We conclude that the proposed method can be used for badminton player tracking, and further studies are warranted for automated match analysis.
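
    A minimal numpy sketch of the idea, assuming the per-pixel background is taken as the mode of the temporal intensity histogram; the threshold and names are illustrative:

      import numpy as np

      def background_from_histogram(frames):
          # frames: (T, H, W) uint8 stack; the background of each pixel is the
          # most frequent intensity over time (mode of the temporal histogram).
          t, h, w = frames.shape
          flat = frames.reshape(t, -1)
          bg = np.empty(h * w, dtype=np.uint8)
          for i in range(h * w):
              bg[i] = np.bincount(flat[:, i], minlength=256).argmax()
          return bg.reshape(h, w)

      def detect_players(frame, background, thresh=30):
          # Pixels far from the background estimate are flagged as foreground.
          return np.abs(frame.astype(int) - background.astype(int)) > thresh

      frames = np.random.randint(0, 256, (50, 48, 64), dtype=np.uint8)
      bg = background_from_histogram(frames)
      mask = detect_players(frames[0], bg)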

  15. Evaluation of contents-based image retrieval methods for a database of logos on drug tablets

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Hardy, Huub; Poortman, Anneke; Bijhold, Jurrien

    2001-02-01

    In this research, an evaluation has been made of different methods of content-based image retrieval for logos of drug tablets. On a database of 432 illicitly produced tablets (mostly containing MDMA), we compared different retrieval methods. Two of these methods were available from the commercial packages QBIC and Imatch, for which the exact implementation of the content-based image retrieval methods is not known. We compared the results for this database with the MPEG-7 shape comparison methods, which are the contour-shape, bounding-box and region-based shape methods. In addition, we tested the log-polar method that is available from our own research.

  16. A comparison of treatment effectiveness between the CAD/CAM method and the manual method for managing adolescent idiopathic scoliosis.

    PubMed

    Wong, M S; Cheng, J C Y; Lo, K H

    2005-04-01

    The treatment effectiveness of the CAD/CAM method and the manual method in managing adolescent idiopathic scoliosis (AIS) was compared. Forty subjects were recruited, twenty for each method. The clinical parameters, namely Cobb's angle and apical vertebral rotation, were evaluated at the pre-brace and immediate in-brace visits. The results demonstrated that orthotic treatments rendered by the CAD/CAM method and the conventional manual method were effective in providing initial control of Cobb's angle. Significant decreases (p < 0.05) were found between the pre-brace and immediate in-brace visits for both methods. The mean reductions of Cobb's angle were 12.8 degrees (41.9%) for the CAD/CAM method and 9.8 degrees (32.1%) for the manual method. Initial control of apical vertebral rotation was not shown in this study. In the comparison between the CAD/CAM method and the manual method, no significant difference was found in the control of Cobb's angle or apical vertebral rotation. The current study demonstrated that the CAD/CAM method can provide similar results in the initial stage of treatment as compared with the manual method.

  17. Evaluation of methods for detection of fluorescence labeled subcellular objects in microscope images.

    PubMed

    Ruusuvuori, Pekka; Aijö, Tarmo; Chowdhury, Sharif; Garmendia-Torres, Cecilia; Selinummi, Jyrki; Birbaumer, Mirko; Dudley, Aimée M; Pelkmans, Lucas; Yli-Harja, Olli

    2010-05-13

    Several algorithms have been proposed for detecting fluorescently labeled subcellular objects in microscope images. Many of these algorithms have been designed for specific tasks and validated with limited image data. But despite the potential of extensive comparisons between algorithms to provide useful information to guide method selection and thus produce more accurate results, relatively few such studies have been performed. To better understand algorithm performance under different conditions, we have carried out a comparative study including eleven spot detection or segmentation algorithms from various application fields. We used microscope images from well plate experiments with a human osteosarcoma cell line and frames from image stacks of yeast cells in different focal planes. These experimentally derived images permit a comparison of method performance in realistic situations where the number of objects varies within the image set. We also used simulated microscope images in order to compare the methods and validate them against a ground truth reference result. Our study finds major differences in the performance of different algorithms, in terms of both object counts and segmentation accuracies. These results suggest that the selection of detection algorithms for image-based screens should be done carefully and take into account different conditions, such as the possibility of acquiring empty images or images with very few spots. Our inclusion of methods that have not been used before in this context broadens the set of available detection methods and compares them against the current state-of-the-art methods for subcellular particle detection.

  18. On the upscaling of process-based models in deltaic applications

    NASA Astrophysics Data System (ADS)

    Li, L.; Storms, J. E. A.; Walstra, D. J. R.

    2018-03-01

    Process-based numerical models are increasingly used to study the evolution of marine and terrestrial depositional environments. Whilst a detailed description of small-scale processes provides an accurate representation of reality, application on geological timescales is restrained by the associated increase in computational time. In order to reduce the computational time, a number of acceleration methods are combined and evaluated for a schematic supply-driven delta (static base level) and an accommodation-driven delta (variable base level). The performance of the combined acceleration methods is evaluated by comparing morphological indicators such as distributary channel networking and delta volumes derived from the model predictions for various levels of acceleration. The results of the accelerated models are compared to the outcomes from a series of simulations to capture autogenic variability. Autogenic variability is quantified by re-running identical models on an initial bathymetry with 1 cm added noise. The overall results show that the variability of the accelerated models falls within the autogenic variability range, suggesting that the application of acceleration methods does not significantly affect the simulated delta evolution. The Time-scale compression method (the acceleration method introduced in this paper) results in an increased computational efficiency of 75% without adversely affecting the simulated delta evolution compared to a base case. The combination of the Time-scale compression method with the existing acceleration methods has the potential to extend the application range of process-based models towards geologic timescales.

  19. New model for prediction binary mixture of antihistamine decongestant using artificial neural networks and least squares support vector machine by spectrophotometry method

    NASA Astrophysics Data System (ADS)

    Mofavvaz, Shirin; Sohrabi, Mahmoud Reza; Nezamzadeh-Ejhieh, Alireza

    2017-07-01

    In the present study, artificial neural networks (ANNs) and least squares support vector machines (LS-SVM), as intelligent methods based on absorption spectra in the range of 230-300 nm, were used for the determination of antihistamine decongestant contents. In the first step, one type of network (feed-forward back-propagation) from the artificial neural networks, with two different training algorithms, Levenberg-Marquardt (LM) and gradient descent with momentum and adaptive learning rate back-propagation (GDX), was employed and their performance was evaluated. The performance of the LM algorithm was better than that of the GDX algorithm. In the second step, the radial basis network was utilized and the results were compared with those of the previous network. In the last step, the other intelligent method, the least squares support vector machine, was proposed to construct the antihistamine decongestant prediction model, and the results were compared with the two aforementioned networks. The values of the statistical parameters mean square error (MSE), regression coefficient (R²), correlation coefficient (r), mean recovery (%) and relative standard deviation (RSD) were used for selecting the best model among these methods. Moreover, the proposed methods were compared to high-performance liquid chromatography (HPLC) as a reference method. A one-way analysis of variance (ANOVA) test at the 95% confidence level, applied to the comparison of the suggested and reference methods, showed that there were no significant differences between them.
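
    For reference, the LS-SVM regressor in its standard dual form reduces to a single linear solve; a minimal numpy sketch with an RBF kernel follows (hyperparameters and the toy spectra are illustrative, not the paper's data):

      import numpy as np

      def rbf(a, b, sigma=1.0):
          # Gaussian (RBF) kernel matrix between rows of a and rows of b.
          d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2 * sigma ** 2))

      def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
          # Solve the LS-SVM dual system [0, 1^T; 1, K + I/gamma][b; alpha] = [0; y].
          n = len(y)
          A = np.zeros((n + 1, n + 1))
          A[0, 1:] = 1.0
          A[1:, 0] = 1.0
          A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
          sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
          return sol[0], sol[1:]          # bias b, support values alpha

      def lssvm_predict(Xtrain, Xnew, b, alpha, sigma=1.0):
          return rbf(Xnew, Xtrain, sigma) @ alpha + b

      # Toy spectra: 40 samples x 71 wavelengths mapping to one analyte level.
      X = np.random.rand(40, 71)
      y = X[:, 10] * 2.0 + 0.1 * np.random.randn(40)
      b, alpha = lssvm_fit(X, y)
      pred = lssvm_predict(X, X[:5], b, alpha)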

  20. Comparability among four invertebrate sampling methods, Fountain Creek Basin, Colorado, 2010-2012

    USGS Publications Warehouse

    Zuellig, Robert E.; Bruce, James F.; Stogner, Sr., Robert W.; Brown, Krystal D.

    2014-01-01

    The U.S. Geological Survey, in cooperation with Colorado Springs City Engineering and Colorado Springs Utilities, designed a study to determine if sampling method and sample timing resulted in comparable samples and assessments of biological condition. To accomplish this task, annual invertebrate samples were collected concurrently using four sampling methods at 15 U.S. Geological Survey streamflow gages in the Fountain Creek basin from 2010 to 2012. Collectively, the four methods are used by local (U.S. Geological Survey cooperative monitoring program) and State monitoring programs (Colorado Department of Public Health and Environment) in the Fountain Creek basin to produce two distinct sample types for each program that target single and multiple habitats. This study found distinguishable differences between single- and multi-habitat sample types using both community similarities and multi-metric index values, while methods from each program within a sample type were comparable. This indicates that the Colorado Department of Public Health and Environment methods were compatible with the cooperative monitoring program methods within multi- and single-habitat sample types. Comparisons between September and October samples found distinguishable differences based on community similarities for both sample types, whereas differences were found only for single-habitat samples when multi-metric index values were considered. At one site, differences between September and October index values from single-habitat samples resulted in opposing assessments of biological condition. Direct application of the results to inform the revision of the existing Fountain Creek basin U.S. Geological Survey cooperative monitoring program is discussed.

  1. The impact of change in albumin assay on reference intervals, prevalence of 'hypoalbuminaemia' and albumin prescriptions.

    PubMed

    Coley-Grant, Deon; Herbert, Mike; Cornes, Michael P; Barlow, Ian M; Ford, Clare; Gama, Rousseau

    2016-01-01

    We studied the impact on reference intervals, classification of patients with hypoalbuminaemia and albumin infusion prescriptions on changing from a bromocresol green (BCG) to a bromocresol purple (BCP) serum albumin assay. Passing-Bablok regression analysis and a Bland-Altman plot were used to compare the Abbott BCP and Roche BCG methods. Linear regression analysis was used to compare in-house and external laboratory Abbott BCP serum albumin results. Reference intervals for Abbott BCP serum albumin were derived in two different laboratories using pathology data from adult patients in primary care. Prescriptions for 20% albumin infusions were compared one year before and one year after changing the albumin method. The Abbott BCP assay had a negative bias of approximately 6 g/L compared with the Roche BCG method. There was good agreement (y = 1.04x - 1.03; R² = 0.9933) between in-house and external laboratory Abbott BCP results. Reference intervals for the serum albumin Abbott BCP assay were 31-45 g/L, different from those recommended by Pathology Harmony and the manufacturers (35-50 g/L). Following the change in method there was a large increase in the number of patients classified as hypoalbuminaemic using Pathology Harmony reference intervals (32%) but not when retrospectively compared to locally derived reference intervals (16%) compared with the previous year (12%). The method change was associated with a 44.6% increase in albumin prescriptions. This equated to an annual increase in expenditure of £35,234. We suggest that serum albumin reference intervals be method specific to prevent misclassification of albumin status in patients. Change in albumin methodology may have significant impact on hospital resources. © The Author(s) 2015.

  2. Application effectiveness of the microtremor survey method in the exploration of geothermal resources

    NASA Astrophysics Data System (ADS)

    Tian, Baoqing; Xu, Peifen; Ling, Suqun; Du, Jianguo; Xu, Xueqiu; Pang, Zhonghe

    2017-10-01

    Geophysical techniques are critical tools of geothermal resource surveys. In recent years, the microtremor survey method, which has two branch techniques (the microtremor sounding technique and the two-dimensional (2D) microtremor profiling technique), has become a common method for geothermal resource exploration. The results of microtremor surveys provide important information at depth for probing the structure of geothermal basins and investigating heat-controlling structures, as well as providing a basis for positioning geothermal wells. In this paper, the southern Jiangsu geothermal resources area is taken as a study example. By comparing the results of microtremor surveys with drilling conclusions, analyzing microtremor survey effectiveness, and considering geological and technical factors such as observation radius and sampling frequency, we study the applicability of the microtremor survey method and the optimal way of working with this method to achieve better detection results. A comparative study of survey results and geothermal drilling results shows that the microtremor sounding technique effectively distinguishes sub-layers and determines the depth of geothermal reservoirs in areas with favorable layer conditions. The depth error is generally no more than 8% compared with drilling results, and deeper targets can be detected by adjusting the size of the observation radius. The 2D microtremor profiling technique accurately probes buried structures, which appear as low-velocity anomalies in the apparent S-wave velocity profile. Such anomalies are the critical signature by which the 2D microtremor profiling technique distinguishes and explains buried geothermal structures. The 2D microtremor profiling results provide an important basis for precisely locating geothermal wells and reducing the risk of drilling dry wells.

  3. Dynamic Time Warping compared to established methods for validation of musculoskeletal models.

    PubMed

    Gaspar, Martin; Welke, Bastian; Seehaus, Frank; Hurschler, Christof; Schwarze, Michael

    2017-04-11

    By means of multi-body musculoskeletal simulation, important variables such as internal joint forces and moments can be estimated that cannot be measured directly. Validation can proceed by qualitative or quantitative methods. Especially when comparing time-dependent signals, many methods do not perform well and validation is often limited to qualitative approaches. The aim of the present study was to investigate the capabilities of the Dynamic Time Warping (DTW) algorithm for comparing time series, which can quantify phase as well as amplitude errors. We contrast the sensitivity of DTW with other established metrics: the Pearson correlation coefficient, cross-correlation, the metric according to Geers, RMSE and normalized RMSE. This study is based on two data sets, where one data set represents direct validation and the other represents indirect validation. Direct validation was performed in the context of clinical gait analysis on trans-femoral amputees fitted with a six-component force-moment sensor. Measured forces and moments from the amputees' socket prostheses were compared to simulated forces and moments. Indirect validation was performed in the context of surface EMG measurements on a cohort of healthy subjects, with measurements taken of seven muscles of the leg, which were compared to simulated muscle activations. Regarding direct validation, a positive linear relation of the RMSE and nRMSE results to DTW can be seen. For indirect validation, a negative linear relation exists between Pearson correlation and cross-correlation. We propose the DTW algorithm for use in both direct and indirect quantitative validation, as it correlates well with methods that are most suitable for one of the tasks. However, in direct validation it should be used together with methods that yield a dimensional error value, in order to make the results easier to interpret. Copyright © 2017 Elsevier Ltd. All rights reserved.
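
    For reference, a minimal dynamic-programming implementation of the DTW distance between two 1D signals (illustrative, not the authors' code):

      import numpy as np

      def dtw_distance(s, t):
          # Classic DTW: each cell holds the cheapest cumulative alignment
          # cost of the prefixes s[:i] and t[:j].
          n, m = len(s), len(t)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = abs(s[i - 1] - t[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m]

      a = np.sin(np.linspace(0, 2 * np.pi, 100))
      b = np.sin(np.linspace(0, 2 * np.pi, 120) - 0.3)   # phase-shifted signal
      print(dtw_distance(a, b))   # stays small despite the phase error, unlike RMSE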

  4. Numerical solution of sixth-order boundary-value problems using Legendre wavelet collocation method

    NASA Astrophysics Data System (ADS)

    Sohaib, Muhammad; Haq, Sirajul; Mukhtar, Safyan; Khan, Imad

    2018-03-01

    An efficient method is proposed to approximate sixth-order boundary-value problems. The proposed method is based on the Legendre wavelet, in which the Legendre polynomial is used. The mechanism of the method is to use collocation points that convert the differential equation into a system of algebraic equations. For validation, two test problems are discussed. The results obtained from the proposed method are quite accurate and close to the exact solution as well as to results of other methods. The proposed method is computationally more effective and leads to more accurate results as compared to other methods from the literature.

  5. Macro elemental analysis of food samples by nuclear analytical technique

    NASA Astrophysics Data System (ADS)

    Syahfitri, W. Y. N.; Kurniawati, S.; Adventini, N.; Damastuti, E.; Lestiani, D. D.

    2017-06-01

    Energy-dispersive X-ray fluorescence (EDXRF) spectrometry is a non-destructive, rapid, multi-elemental, accurate and environmentally friendly analysis method compared with other detection methods. Thus, EDXRF spectrometry is applicable to food inspection. The macro elements calcium and potassium constitute important nutrients required by the human body for optimal physiological functions. Therefore, the determination of Ca and K content in various foods needs to be done. The aim of this work is to demonstrate the applicability of EDXRF for food analysis. The analytical performance of non-destructive EDXRF was compared with other analytical techniques: neutron activation analysis (NAA) and atomic absorption spectrometry (AAS). The comparison of methods served as a cross-check of the analysis results and as a way to overcome the limitations of the three methods. Analysis results showed that Ca values found in food using EDXRF and AAS were not significantly different (p-value 0.9687), whereas the p-value for K between EDXRF and NAA was 0.6575. The correlation between those results was also examined: the Pearson correlations for Ca and K were 0.9871 and 0.9558, respectively. Method validation using SRM NIST 1548a Typical Diet was also applied. The results showed good agreement between methods; therefore, the EDXRF method can be used as an alternative method for the determination of Ca and K in food samples.

  6. Heats of Segregation of BCC Binaries from ab Initio and Quantum Approximate Calculations

    NASA Technical Reports Server (NTRS)

    Good, Brian S.

    2004-01-01

    We compare dilute-limit heats of segregation for selected BCC transition metal binaries computed using ab initio and quantum approximate energy methods. Ab initio calculations are carried out using the CASTEP plane-wave pseudopotential computer code, while quantum approximate results are computed using the Bozzolo-Ferrante-Smith (BFS) method with the most recent LMTO-based parameters. Quantum approximate segregation energies are computed with and without atomistic relaxation, while the ab initio calculations are performed without relaxation. Results are discussed within the context of a segregation model driven by strain and bond-breaking effects. We compare our results with full-potential quantum calculations and with available experimental results.

  7. Validation for Vegetation Green-up Date Extracted from GIMMS NDVI and NDVI3g Using Variety of Methods

    NASA Astrophysics Data System (ADS)

    Chang, Q.; Jiao, W.

    2017-12-01

    Phenology is a sensitive and critical feature of vegetation change that has been regarded as a good indicator in climate change studies. So far, a variety of remote sensing data sources and phenology extraction methods have been developed to study the spatial-temporal dynamics of vegetation phenology. However, the differences between vegetation phenology results caused by the various satellite datasets and phenology extraction methods are not clear, and the reliability of the different phenology results extracted from remote sensing datasets has not been verified and compared using ground observation data. Based on the three most popular remote sensing phenology extraction methods, this research calculated the start of the growing season (SOS) for each pixel in the Northern Hemisphere for two long time series satellite datasets: GIMMS NDVIg (SOSg) and GIMMS NDVI3g (SOS3g). The three methods used in this research are the maximum increase method, the dynamic threshold method and the midpoint method. This study then used SOS calculated from NEE datasets (SOS_NEE), monitored at 48 eddy flux tower sites from the global flux website, to validate the reliability of the six phenology results calculated from remote sensing datasets. Results showed that both SOSg and SOS3g extracted by the maximum increase method are not correlated with ground-observed phenology metrics. SOSg and SOS3g extracted by the dynamic threshold method and the midpoint method are both correlated significantly with SOS_NEE. Compared with SOSg extracted by the dynamic threshold method, SOSg extracted by the midpoint method has a stronger correlation with SOS_NEE, and the same holds for SOS3g. Additionally, SOSg showed a stronger correlation with SOS_NEE than SOS3g extracted by the same method. SOS extracted by the midpoint method from the GIMMS NDVIg dataset seemed to be the most reliable result when validated with SOS_NEE. These results can be used as a reference for data and method selection in future phenology studies.
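
    A minimal sketch of the dynamic threshold idea used above: rescale the annual NDVI trajectory by its amplitude and take the first crossing of a chosen ratio (the 0.2 ratio, names and synthetic data are illustrative):

      import numpy as np

      def sos_dynamic_threshold(ndvi, doy, ratio=0.2):
          # Rescale the annual NDVI curve to [0, 1] and return the first day
          # of year where it rises above the chosen ratio of its amplitude.
          scaled = (ndvi - ndvi.min()) / (ndvi.max() - ndvi.min())
          above = np.where(scaled >= ratio)[0]
          return doy[above[0]] if above.size else None

      doy = np.arange(1, 366, 15)                        # one composite per 15 days
      ndvi = 0.2 + 0.5 * np.exp(-((doy - 200) / 60.0) ** 2)
      print(sos_dynamic_threshold(ndvi, doy))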

  8. The comparative analysis of the current-meter method and the pressure-time method used for discharge measurements in the Kaplan turbine penstocks

    NASA Astrophysics Data System (ADS)

    Adamkowski, A.; Krzemianowski, Z.

    2012-11-01

    The paper presents experience gathered during many years of utilizing the current-meter and pressure-time methods for flow rate measurements in many hydropower plants. The integration techniques used in both of these methods differ from the recommendations contained in the relevant international standards, mainly from the graphical and arithmetical ones. The results of the comparative analysis of both methods, applied at the same time during the hydraulic performance tests of two Kaplan turbines in one of the Polish hydropower plants, are presented in the final part of the paper. In the case of the pressure-time method, the concrete penstocks of the tested turbines required installing special measuring instrumentation inside the penstock. The comparison has shown satisfactory agreement between the results of discharge measurements performed using the two methods: maximum differences between the discharge values did not exceed 1.0 % and the average differences were not greater than 0.5 %.
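
    For orientation, the pressure-time method recovers the discharge from the integrated pressure-difference rise during gate closure, Q = (A/ρL)∫Δp dt in its simplest form; a sketch that omits the friction-loss correction and any leakage flow (all values synthetic):

      import numpy as np

      def pressure_time_discharge(dp, t, area, length, rho=1000.0):
          # Simplified pressure-time relation: Q = (A / (rho * L)) * integral(dp dt),
          # with dp the measured pressure-difference rise during closure [Pa].
          integral = np.sum((dp[1:] + dp[:-1]) / 2.0 * np.diff(t))  # trapezoid rule
          return area / (rho * length) * integral

      t = np.linspace(0.0, 8.0, 801)                     # closure lasts ~8 s
      dp = 4.0e4 * np.exp(-((t - 4.0) / 1.5) ** 2)       # synthetic pressure pulse
      print(pressure_time_discharge(dp, t, area=7.0, length=25.0), "m^3/s")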

  9. Shining a light on LAMP assays--a comparison of LAMP visualization methods including the novel use of berberine.

    PubMed

    Fischbach, Jens; Xander, Nina Carolin; Frohme, Marcus; Glökler, Jörn Felix

    2015-04-01

    The need for simple and effective assays for detecting nucleic acids by isothermal amplification reactions has led to a great variety of end point and real-time monitoring methods. Here we tested direct and indirect methods to visualize the amplification of potato spindle tuber viroid (PSTVd) by loop-mediated isothermal amplification (LAMP) and compared features important for one-pot in-field applications. We compared the performance of magnesium pyrophosphate, hydroxynaphthol blue (HNB), calcein, SYBR Green I, EvaGreen, and berberine. All assays could be used to distinguish between positive and negative samples in visible or UV light. Precipitation of magnesium-pyrophosphate resulted in a turbid reaction solution. The use of HNB resulted in a color change from violet to blue, whereas calcein induced a change from orange to yellow-green. We also investigated berberine as a nucleic acid-specific dye that emits a fluorescence signal under UV light after a positive LAMP reaction. It has a comparable sensitivity to SYBR Green I and EvaGreen. Based on our results, an optimal detection method can be chosen easily for isothermal real-time or end point screening applications.

  10. Lateral Load Capacity of Piles: A Comparative Study Between Indian Standards and Theoretical Approach

    NASA Astrophysics Data System (ADS)

    Jayasree, P. K.; Arun, K. V.; Oormila, R.; Sreelakshmi, H.

    2018-05-01

    As per Indian Standards, laterally loaded piles are usually analysed using the method adopted by IS 2911-2010 (Part 1/Section 2). But practising engineers are of the opinion that the IS method is very conservative in design. This work aims at determining the extent to which the conventional IS design approach is conservative. This is done through a comparative study between the IS approach and a theoretical model based on Vesic's equation. Bore log details for six different bridges were collected from the Kerala Public Works Department. Cast-in-situ fixed-head piles embedded in three soil conditions, covering both end-bearing and friction piles, were considered and analyzed separately. Piles were also modelled in STAAD.Pro software based on the IS approach, and the results were validated using the Matlock and Reese (Proceedings of the Fifth International Conference on Soil Mechanics and Foundation Engineering, 1961) equation. The results were presented as the percentage variation in the values of bending moment and deflection obtained by the different methods. The results obtained from the mathematical model based on Vesic's equation and those obtained as per the IS approach were compared, and the IS method was found to be uneconomical and conservative.

  11. Percent body fat estimations in college men using field and laboratory methods: A three-compartment model approach

    PubMed Central

    Moon, Jordan R; Tobkin, Sarah E; Smith, Abbie E; Roberts, Michael D; Ryan, Eric D; Dalbo, Vincent J; Lockwood, Chris M; Walter, Ashley A; Cramer, Joel T; Beck, Travis W; Stout, Jeffrey R

    2008-01-01

    Background Methods used to estimate percent body fat can be classified as a laboratory or field technique. However, the validity of these methods compared to multiple-compartment models has not been fully established. The purpose of this study was to determine the validity of field and laboratory methods for estimating percent fat (%fat) in healthy college-age men compared to the Siri three-compartment model (3C). Methods Thirty-one Caucasian men (22.5 ± 2.7 yrs; 175.6 ± 6.3 cm; 76.4 ± 10.3 kg) had their %fat estimated by bioelectrical impedance analysis (BIA) using the BodyGram™ computer program (BIA-AK) and population-specific equation (BIA-Lohman), near-infrared interactance (NIR) (Futrex® 6100/XL), four circumference-based military equations [Marine Corps (MC), Navy and Air Force (NAF), Army (A), and Friedl], air-displacement plethysmography (BP), and hydrostatic weighing (HW). Results All circumference-based military equations (MC = 4.7% fat, NAF = 5.2% fat, A = 4.7% fat, Friedl = 4.7% fat) along with NIR (NIR = 5.1% fat) produced an unacceptable total error (TE). Both laboratory methods produced acceptable TE values (HW = 2.5% fat; BP = 2.7% fat). The BIA-AK, and BIA-Lohman field methods produced acceptable TE values (2.1% fat). A significant difference was observed for the MC and NAF equations compared to both the 3C model and HW (p < 0.006). Conclusion Results indicate that the BP and HW are valid laboratory methods when compared to the 3C model to estimate %fat in college-age Caucasian men. When the use of a laboratory method is not feasible, BIA-AK, and BIA-Lohman are acceptable field methods to estimate %fat in this population. PMID:18426582
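
    For orientation, the Siri three-compartment equation and the total error (TE) statistic used above can be sketched as follows; the constants are as commonly cited in the body-composition literature and the data are illustrative:

      import numpy as np

      def siri_3c_percent_fat(density, tbw_l, mass_kg):
          # Siri three-compartment model (constants as commonly cited):
          # %fat = (2.118/Db - 0.78*(TBW/mass) - 1.354) * 100
          return (2.118 / density - 0.78 * (tbw_l / mass_kg) - 1.354) * 100.0

      def total_error(field, criterion):
          # TE = sqrt(mean squared difference) of a field method vs. the criterion.
          field, criterion = np.asarray(field), np.asarray(criterion)
          return float(np.sqrt(np.mean((field - criterion) ** 2)))

      print(siri_3c_percent_fat(density=1.065, tbw_l=45.0, mass_kg=76.4))
      criterion = np.array([14.2, 18.9, 11.5, 22.3])      # 3C %fat values
      field = np.array([16.0, 17.5, 13.9, 25.1])          # field-method estimates
      print(total_error(field, criterion))   # TE near 2-3 %fat was acceptable above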

  12. A comparative study of different aspects of manipulating ratio spectra applied for ternary mixtures: Derivative spectrophotometry versus wavelet transform

    NASA Astrophysics Data System (ADS)

    Salem, Hesham; Lotfy, Hayam M.; Hassan, Nagiba Y.; El-Zeiny, Mohamed B.; Saleh, Sarah S.

    2015-01-01

    This work represents a comparative study of different aspects of manipulating ratio spectra, which are: double divisor ratio spectra derivative (DR-DD), area under curve of derivative ratio (DR-AUC) and its novel approach, namely area under the curve correction method (AUCCM) applied for overlapped spectra; successive derivative of ratio spectra (SDR) and continuous wavelet transform (CWT) methods. The proposed methods represent different aspects of manipulating ratio spectra of the ternary mixture of Ofloxacin (OFX), Prednisolone acetate (PA) and Tetryzoline HCl (TZH) combined in eye drops in the presence of benzalkonium chloride as a preservative. The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines. A comparative study was conducted between those methods regarding simplicity, limitation and sensitivity. The obtained results were statistically compared with those obtained from the reported HPLC method, showing no significant difference with respect to accuracy and precision.

  14. Ink dating using thermal desorption and gas chromatography/mass spectrometry: comparison of results obtained in two laboratories.

    PubMed

    Koenig, Agnès; Bügler, Jürgen; Kirsch, Dieter; Köhler, Fritz; Weyermann, Céline

    2015-01-01

    An ink dating method based on solvent analysis was recently developed using thermal desorption followed by gas chromatography/mass spectrometry (GC/MS) and is currently implemented in several forensic laboratories. The main aim of this work was to implement this method in a new laboratory to evaluate whether results were comparable at three levels: (i) validation criteria, (ii) aging curves, and (iii) results interpretation. While the results were indeed comparable in terms of validation, the method proved to be very sensitive to instrument maintenance. Moreover, the aging curves were influenced by ink composition, as well as by storage conditions (particularly when the samples were not stored under normal room conditions). Finally, as current interpretation models showed limitations, an alternative model based on slope calculation was proposed. However, in the future, a probabilistic approach may represent a better solution to deal with ink sample inhomogeneity. © 2014 American Academy of Forensic Science.

  15. Characterization of fatigue properties of binders and mastics at intermediate temperatures using dynamic shear rheometer.

    DOT National Transportation Integrated Search

    2013-10-01

    The paper compares the fatigue life of neat and modified PAV-aged binders and mastics and determines the influence of dust on fatigue life using the Linear Amplitude Sweep (LAS) method. It will also compare these results with results from the DER...

  16. Determining Reduced Order Models for Optimal Stochastic Reduced Order Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonney, Matthew S.; Brake, Matthew R.W.

    2015-08-01

    The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against each other and against the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses Hyper-Dual numbers to determine the sensitivities, and a Meta-Model method that uses the Hyper-Dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction being the least accurate. The models are also compared based on the time required to evaluate each model, where the Meta-Model requires the least computation time by a significant amount. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use depends on the availability of the high-fidelity model and how many evaluations can be performed. The output distribution is examined using a large Monte-Carlo simulation along with a reduced simulation using Latin hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to exhaustive sampling for the majority of methods.
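
    A minimal sketch of the Hyper-Dual idea mentioned above: a single evaluation of f at x + ε₁ + ε₂ (with ε₁² = ε₂² = 0) returns the value plus exact first and second derivatives, with no step-size error (illustrative, not the report's implementation):

      class HyperDual:
          # a + b*e1 + c*e2 + d*e1*e2; f(x + e1 + e2) puts f'(x) in b (and c)
          # and f''(x) in d.
          def __init__(self, a, b=0.0, c=0.0, d=0.0):
              self.a, self.b, self.c, self.d = a, b, c, d

          def __add__(self, o):
              o = o if isinstance(o, HyperDual) else HyperDual(o)
              return HyperDual(self.a + o.a, self.b + o.b, self.c + o.c, self.d + o.d)
          __radd__ = __add__

          def __mul__(self, o):
              o = o if isinstance(o, HyperDual) else HyperDual(o)
              return HyperDual(self.a * o.a,
                               self.a * o.b + self.b * o.a,
                               self.a * o.c + self.c * o.a,
                               self.a * o.d + self.b * o.c + self.c * o.b + self.d * o.a)
          __rmul__ = __mul__

      def f(x):                         # example response: f(x) = x**3 + 2*x
          return x * x * x + 2.0 * x

      y = f(HyperDual(1.5, 1.0, 1.0, 0.0))   # seed e1 and e2 with 1
      print(y.a, y.b, y.d)              # f(1.5)=6.375, f'(1.5)=8.75, f''(1.5)=9.0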

  17. Comparing capacity value estimation techniques for photovoltaic solar power

    DOE PAGES

    Madaeni, Seyed Hossein; Sioshansi, Ramteen; Denholm, Paul

    2012-09-28

    In this paper, we estimate the capacity value of photovoltaic (PV) solar plants in the western U.S. Our results show that PV plants have capacity values that range between 52% and 93%, depending on location and sun-tracking capability. We further compare more robust but data- and computationally-intense reliability-based estimation techniques with simpler approximation methods. We show that if implemented properly, these techniques provide accurate approximations of reliability-based methods. Overall, methods that are based on the weighted capacity factor of the plant provide the most accurate estimate. As a result, we also examine the sensitivity of PV capacity value to the inclusion of sun-tracking systems.
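
    A minimal sketch of one capacity-factor-style approximation, assuming capacity value is approximated by the mean PV output (as a fraction of nameplate capacity) over the highest-load hours; the hour count and synthetic data are illustrative:

      import numpy as np

      def approx_capacity_value(load, pv_gen, capacity, top_hours=100):
          # Average PV output over the hours of highest system load,
          # normalized by nameplate capacity.
          idx = np.argsort(load)[-top_hours:]
          return pv_gen[idx].mean() / capacity

      hours = 8760
      load = 800 + 300 * np.random.rand(hours)                       # MW, synthetic
      pv_gen = np.clip(100 * np.sin(np.linspace(0, hours / 24 * np.pi, hours)), 0, None)
      print(approx_capacity_value(load, pv_gen, capacity=100.0))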

  18. Motion magnification using the Hermite transform

    NASA Astrophysics Data System (ADS)

    Brieva, Jorge; Moya-Albor, Ernesto; Gomez-Coronel, Sandra L.; Escalante-Ramírez, Boris; Ponce, Hiram; Mora Esquivel, Juan I.

    2015-12-01

    We present an Eulerian motion magnification technique with a spatial decomposition based on the Hermite Transform (HT). We compare our results to the approach presented in [1]. We test our method on a sequence of the breathing of a newborn baby and on an MRI left-ventricle sequence. Methods are compared using quantitative and qualitative metrics after the application of the motion magnification algorithm.
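
    A minimal sketch of the Eulerian magnification core, assuming a temporal band-pass of each pixel's intensity that is scaled and added back; the spatial decomposition (the Hermite Transform in the paper) is omitted, and all parameters are illustrative:

      import numpy as np
      from scipy.signal import butter, filtfilt

      def magnify_motion(frames, fs, f_lo, f_hi, alpha):
          # Band-pass each pixel's time series, amplify, and add back.
          b, a = butter(2, [f_lo / (fs / 2), f_hi / (fs / 2)], btype="band")
          bandpassed = filtfilt(b, a, frames.astype(float), axis=0)
          return frames + alpha * bandpassed

      frames = np.random.rand(120, 32, 32)   # 120 frames of a 32x32 sequence
      out = magnify_motion(frames, fs=30.0, f_lo=0.3, f_hi=1.0, alpha=10.0)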

  19. Automatic Keyword Identification by Artificial Neural Networks Compared to Manual Identification by Users of Filtering Systems.

    ERIC Educational Resources Information Center

    Boger, Zvi; Kuflik, Tsvi; Shoval, Peretz; Shapira, Bracha

    2001-01-01

    Discussion of information filtering (IF) and information retrieval focuses on the use of an artificial neural network (ANN) as an alternative method for both IF and term selection and compares its effectiveness to that of traditional methods. Results show that the ANN relevance prediction outperforms the prediction of an IF system. (Author/LRW)

  20. Near-surface shear-wave velocity measurements in unlithified sediment

    USGS Publications Warehouse

    Richards, B.T.; Steeples, D.; Miller, R.; Ivanov, J.; Peterie, S.; Sloan, S.D.; McKenna, J.R.

    2011-01-01

    S-wave velocity can be directly correlated to material stiffness and lithology, making it a valuable physical property that has found uses in construction, engineering, and environmental projects. This study compares different methods for measuring S-wave velocities, investigating and identifying the differences among the methods' results, and prioritizing the different methods for optimal S-wave use at the U.S. Army's Yuma Proving Ground (YPG). Multichannel Analysis of Surface Waves (MASW) and S-wave tomography were used to generate S-wave velocity profiles. Each method has advantages and disadvantages. A strong signal-to-noise ratio at the study site gives the MASW method promising resolution. S-wave first arrivals are picked on impulsive sledgehammer data which were then used for the tomography process. Three-component downhole seismic data were collected in-line with a locking geophone, providing ground truth to compare the data and to draw conclusions about the validity of each data set. Results from these S-wave measurement techniques are compared with borehole seismic data and with lithology data from continuous samples to help ascertain the accuracy, and therefore applicability, of each method. This study helps to select the best methods for obtaining S-wave velocities for media much like those found in unconsolidated sediments at YPG. © 2011 Society of Exploration Geophysicists.

  1. An analysis code for the Rapid Engineering Estimation of Momentum and Energy Losses (REMEL)

    NASA Technical Reports Server (NTRS)

    Dechant, Lawrence J.

    1994-01-01

    Nonideal behavior has traditionally been modeled by defining an efficiency (a comparison between actual and isentropic processes) and specifying it by empirical or heuristic methods. With the increasing complexity of aeropropulsion system designs, the reliability of these more traditional methods is uncertain. Computational fluid dynamics (CFD) and experimental methods can provide this information but are expensive in terms of human resources, cost, and time. This report discusses an alternative to empirical and CFD methods by applying classical analytical techniques and a simplified flow model to provide rapid engineering estimates of these losses based on steady, quasi-one-dimensional governing equations including viscous and heat transfer terms (estimated by the Reynolds analogy). A preliminary verification of REMEL has been performed by comparison with full Navier-Stokes (FNS) and CFD boundary-layer computations for several high-speed inlet and forebody designs. The current methods compare quite well with the more complex method results, and solutions compare very well with simple degenerate and asymptotic results such as Fanno flow, isentropic variable-area flow, and a newly developed solution for combined variable-area duct flow with friction. These comparisons may offer an alternative to transitional and CFD-intense methods for the rapid estimation of viscous and heat transfer losses in aeropropulsion systems.

  2. Comparison of Methods for Estimating Low Flow Characteristics of Streams

    USGS Publications Warehouse

    Tasker, Gary D.

    1987-01-01

    Four methods for estimating the 7-day, 10-year and 7-day, 20-year low flows of streams are compared by the bootstrap method. The bootstrap method is a Monte Carlo technique in which random samples are drawn from an unspecified sampling distribution defined from observed data. The nonparametric nature of the bootstrap makes it suitable for comparing methods based on a flow series for which the true distribution is unknown. Results show that the two methods based on hypothetical distributions (Log-Pearson III and Weibull) had lower mean square errors than did the Box-Cox transformation method or the log-Boughton method, which is based on a fit of plotting positions.
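
    A minimal sketch of the bootstrap comparison, assuming for brevity a lognormal population and the 0.1 quantile as a stand-in for the 7-day, 10-year low flow; the estimators and sample sizes are illustrative:

      import numpy as np

      rng = np.random.default_rng(0)

      def bootstrap_mse(sample, estimator, true_value, n_boot=1000):
          # Resample with replacement and accumulate the estimator's squared
          # error, mimicking the nonparametric method comparison.
          errs = [(estimator(rng.choice(sample, sample.size)) - true_value) ** 2
                  for _ in range(n_boot)]
          return float(np.mean(errs))

      # Synthetic record of annual 7-day minimum flows and a reference "truth".
      population = rng.lognormal(mean=1.0, sigma=0.5, size=1_000_000)
      true_q10 = np.quantile(population, 0.10)
      record = rng.choice(population, 40)

      empirical = lambda x: np.quantile(x, 0.10)           # plotting-position style
      lognormal = lambda x: np.exp(np.mean(np.log(x))      # fitted-distribution style
                                   + np.std(np.log(x)) * -1.2816)
      print(bootstrap_mse(record, empirical, true_q10),
            bootstrap_mse(record, lognormal, true_q10))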

  3. Comparison of Dam Breach Parameter Estimators

    DTIC Science & Technology

    2008-01-01

    of the methods, when used in the HEC-RAS simulation model, produced comparable results. The methods tested suggest use of ...characteristics of a dam breach, use of those parameters within the unsteady flow routing model HEC-RAS, and the computation and display of the resulting...implementation of these breach parameters in

  4. Comparative evaluation of the activity of commercial biocides in relation to micromycetes

    NASA Astrophysics Data System (ADS)

    Strokova, Valeria; Nelyubova, Victoria; Vasilenko, Marina; Goncharova, Elena; Rykunova, Marina; Kalatozi, Elina

    2017-11-01

    The paper presents the results of comparative studies of commercial biocides of Russian production, used in the technology of biostable materials for construction purposes, against micromycetes of various species and types. The influence of the type of biocide additive on fungicidal activity was established. The fungicidal effect of the bioactive agents was evaluated using a disc-diffusion method. The analysis of the results was carried out both with the traditional approach and with a modified method that scores the degree of impact.

  5. Evaluation of Lysis Methods for the Extraction of Bacterial DNA for Analysis of the Vaginal Microbiota

    PubMed Central

    Gill, Christina; Blow, Frances; Darby, Alistair C.

    2016-01-01

    Background Recent studies on the vaginal microbiota have employed molecular techniques such as 16S rRNA gene sequencing to describe the bacterial community as a whole. These techniques require the lysis of bacterial cells to release DNA before purification and PCR amplification of the 16S rRNA gene. Currently, methods for the lysis of bacterial cells are not standardised and there is potential for introducing bias into the results if some bacterial species are lysed less efficiently than others. This study aimed to compare the results of vaginal microbiota profiling using four different pretreatment methods for the lysis of bacterial samples (30 min of lysis with lysozyme, 16 hours of lysis with lysozyme, 60 min of lysis with a mixture of lysozyme, mutanolysin and lysostaphin, and 30 min of lysis with lysozyme followed by bead beating) prior to chemical and enzyme-based DNA extraction with a commercial kit. Results After extraction, DNA yield did not differ significantly between methods, with the exception of lysis with lysozyme combined with bead beating, which produced significantly lower yields when compared to lysis with the enzyme cocktail or 30 min of lysis with lysozyme only. However, this did not result in a statistically significant difference in the observed alpha diversity of samples. The beta diversity (Bray-Curtis dissimilarity) between different lysis methods was statistically significantly different, but this difference was small compared to differences between samples, and did not affect the grouping of samples with similar vaginal bacterial community structure by hierarchical clustering. Conclusions An understanding of how laboratory methods affect the results of microbiota studies is vital in order to accurately interpret the results and make valid comparisons between studies. Our results indicate that the choice of lysis method does not prevent the detection of effects relating to the type of vaginal bacterial community, one of the main outcome measures of epidemiological studies. However, we recommend that the same method be used on all samples within a particular study. PMID:27643503
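
    For reference, the Bray-Curtis dissimilarity used above is available directly in scipy; the taxon count profiles below are illustrative:

      import numpy as np
      from scipy.spatial.distance import braycurtis

      # Taxon count profiles for the same sample under two lysis methods.
      counts_a = np.array([520, 130, 40, 8, 2], dtype=float)
      counts_b = np.array([500, 150, 35, 10, 5], dtype=float)

      # Bray-Curtis: sum|u_i - v_i| / sum(u_i + v_i); 0 = identical communities.
      print(braycurtis(counts_a, counts_b))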

  6. Semiquantitative visual approach to scoring lung cancer treatment response using computed tomography: a pilot study.

    PubMed

    Gottlieb, Ronald H; Kumar, Prasanna; Loud, Peter; Klippenstein, Donald; Raczyk, Cheryl; Tan, Wei; Lu, Jenny; Ramnath, Nithya

    2009-01-01

    Our objective was to compare a newly developed semiquantitative visual scoring (SVS) method with the current standard, the Response Evaluation Criteria in Solid Tumors (RECIST) method, in the categorization of treatment response and reader agreement for patients with metastatic lung cancer followed by computed tomography. The 18 subjects (5 women and 13 men; mean age, 62.8 years) were from an institutional review board-approved phase 2 study that evaluated a second-line chemotherapy regimen for metastatic (stages III and IV) non-small cell lung cancer. Four radiologists, blinded to the patient outcome and each other's reads, evaluated the change in the patients' tumor burden from the baseline to the first restaging computed tomographic scan using either the RECIST or the SVS method. We compared the numbers of patients placed into the partial response, the stable disease (SD), and the progressive disease (PD) categories (Fisher exact test) and observer agreement (kappa statistic). Requiring the concordance of 3 of the 4 readers resulted in the RECIST placing 17 (100%) of 17 patients in the SD category compared with the SVS placing 9 (60%) of 15 patients in the partial response, 5 (33%) of the 15 patients in the SD, and 1 (6.7%) of the 15 patients in the PD categories (P < 0.0001). Interobserver agreement was higher among the readers using the SVS method (kappa, 0.54; P < 0.0001) compared with that of the readers using the RECIST method (kappa, -0.01; P = 0.5378). Using the SVS method, the readers more finely discriminated between the patient response categories with superior agreement compared with the RECIST method, which could potentially result in large differences in early treatment decisions for advanced lung cancer.
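
    Inter-observer agreement of the kind reported above is commonly quantified with the kappa statistic. A minimal sketch using scikit-learn's pairwise Cohen's kappa on invented category assignments (the study used four readers, so a multi-rater statistic such as Fleiss' kappa would be needed in practice):

    ```python
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical response categories assigned by two readers to the same
    # patients (PR = partial response, SD = stable disease, PD = progressive
    # disease); illustrative values, not the study's reads.
    reader1 = ["PR", "SD", "SD", "PD", "PR", "SD", "PR", "SD"]
    reader2 = ["PR", "SD", "PR", "PD", "PR", "SD", "SD", "SD"]

    # Cohen's kappa corrects observed agreement for chance agreement;
    # 0 is chance-level, 1 is perfect agreement.
    print(f"kappa = {cohen_kappa_score(reader1, reader2):.2f}")
    ```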

  7. Rapid method for direct identification of bacteria in urine and blood culture samples by matrix-assisted laser desorption ionization time-of-flight mass spectrometry: intact cell vs. extraction method.

    PubMed

    Ferreira, L; Sánchez-Juanes, F; Muñoz-Bellido, J L; González-Buitrago, J M

    2011-07-01

    Matrix-assisted laser desorption ionization time-of-flight (MALDI-TOF) mass spectrometry (MS) is a fast and reliable technology for the identification of microorganisms with proteomics approaches. Here, we compare an intact cell method and a protein extraction method before application on the MALDI plate for the direct identification of microorganisms in both urine and blood culture samples from clinical microbiology laboratories. The results show that the intact cell method provides excellent results for urine and is a good initial method for blood cultures. The extraction method complements the intact cell method, improving microorganism identification from blood culture. Thus, we consider that MALDI-TOF MS performed directly on urine and blood culture samples, with the protocols that we propose, is a suitable technique for microorganism identification, as compared with the routine methods used in the clinical microbiology laboratory. © 2010 The Authors. Clinical Microbiology and Infection © 2010 European Society of Clinical Microbiology and Infectious Diseases.

  8. Statistical procedures for evaluating daily and monthly hydrologic model predictions

    USGS Publications Warehouse

    Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.

    2004-01-01

    The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, the t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically addressed the non-normal distribution of, and dependence between, data points in the daily predicted and observed data. Of the tested methods, median objective functions, the sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression R² of 0.70, a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the equal-data-means hypothesis. The Nash-Sutcliffe coefficient and the R² coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a set ideal value of one.
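
    The Nash-Sutcliffe efficiency preferred above measures model error against the variance of the observations. A minimal numpy sketch with illustrative runoff depths, not the SWAT data:

    ```python
    import numpy as np

    def nash_sutcliffe(observed, predicted):
        """NSE = 1 - SSE/SST; 1 is a perfect fit, 0 means the model is
        no better than always predicting the observed mean."""
        observed = np.asarray(observed, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        sse = np.sum((observed - predicted) ** 2)
        sst = np.sum((observed - observed.mean()) ** 2)
        return 1.0 - sse / sst

    # Illustrative monthly runoff depths (mm); not the study's data.
    obs = np.array([12.0, 30.5, 55.1, 40.2, 22.3, 8.9])
    sim = np.array([10.2, 35.0, 48.7, 44.0, 20.1, 11.5])
    print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")
    ```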

  9. Optimization of digital image processing to determine quantum dots' height and density from atomic force microscopy.

    PubMed

    Ruiz, J E; Paciornik, S; Pinto, L D; Ptak, F; Pires, M P; Souza, P L

    2018-01-01

    An optimized method of digital image processing to interpret quantum dots' height measurements obtained by atomic force microscopy is presented. The method was developed by combining well-known digital image processing techniques and particle recognition algorithms. The properties of quantum dot structures strongly depend on dot height, among other features. Determination of their height is sensitive to small variations in the digital image processing parameters, which can generate misleading results. Comparing the results obtained with two image processing techniques - a conventional method and the new method proposed herein - with the data obtained by determining the height of quantum dots one by one within a fixed area showed that the optimized method leads to more accurate results. Moreover, the log-normal distribution, which is often used to represent natural processes, shows a better fit to the quantum dots' height histogram obtained with the proposed method. Finally, the quantum dot heights obtained were used to calculate predicted photoluminescence peak energies, which were compared with the experimental data. Again, a better match was observed when using the proposed method to evaluate the quantum dots' height. Copyright © 2017 Elsevier B.V. All rights reserved.
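
    Fitting a log-normal distribution to a height histogram, as described above, is straightforward with scipy. The sketch below uses synthetic heights, since the study's AFM data are not reproduced here:

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic quantum-dot heights (nm) standing in for values
    # extracted from an AFM image; illustrative only.
    rng = np.random.default_rng(1)
    heights = rng.lognormal(mean=1.6, sigma=0.25, size=500)

    # Fit a log-normal distribution with the location fixed at zero,
    # as is usual for strictly positive sizes.
    shape, loc, scale = stats.lognorm.fit(heights, floc=0)
    print(f"median height ~ {scale:.2f} nm, log-sigma ~ {shape:.2f}")
    ```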

  10. The practical approach to the evaluation of methods used to determine the disintegration time of orally disintegrating tablets (ODTs).

    PubMed

    Brniak, Witold; Jachowicz, Renata; Pelka, Przemyslaw

    2015-09-01

    Even though orodispersible tablets (ODTs) have been successfully used in therapy for more than 20 years, there is still no compendial method for evaluating their disintegration time other than the pharmacopoeial disintegration test conducted in 800-900 mL of distilled water. Therefore, several alternative tests more relevant to in vivo conditions have been described by different researchers. The aim of this study was to compare these methods and correlate them with in vivo results. Six series of ODTs were prepared by direct compression. Their mechanical properties and disintegration times were measured with pharmacopoeial and alternative methods and compared with the in vivo results. The highest correlation with oral disintegration time was found for the authors' own-construction apparatus with an additional weight and for the method proposed by Narazaki et al. The correlation coefficients were 0.9994 (p < 0.001) and 0.9907 (p < 0.001), respectively. The pharmacopoeial method correlated with the in vivo data much more poorly (r = 0.8925, p < 0.05). These results show that the development of novel biorelevant methods for determining ODT disintegration time is warranted and scientifically justified.

  11. Optimisation and validation of a rapid and efficient microemulsion liquid chromatographic (MELC) method for the determination of paracetamol (acetaminophen) content in a suppository formulation.

    PubMed

    McEvoy, Eamon; Donegan, Sheila; Power, Joe; Altria, Kevin

    2007-05-09

    A rapid and efficient oil-in-water microemulsion liquid chromatographic method has been optimised and validated for the analysis of paracetamol in a suppository formulation. Excellent linearity, accuracy, precision and assay results were obtained. Lengthy sample pre-treatment/extraction procedures were eliminated due to the solubilising power of the microemulsion, and rapid analysis times were achieved. The method was optimised to achieve rapid analysis times and relatively high peak efficiencies. A standard microemulsion composition of 33 g SDS, 66 g butan-1-ol and 8 g n-octane in 1 L of 0.05% TFA, modified with acetonitrile, has been shown to be suitable for the rapid analysis of paracetamol in highly hydrophobic preparations under isocratic conditions. The validated assay results and overall analysis time of the optimised method were compared to British Pharmacopoeia reference methods. Sample preparation and analysis times for the MELC analysis of paracetamol in a suppository were extremely rapid compared to the reference method, and similar assay results were achieved. A gradient MELC method using the same microemulsion has been optimised for the resolution of paracetamol and five of its related substances in approximately 7 min.

  12. Improving the performances of autofocus based on adaptive retina-like sampling model

    NASA Astrophysics Data System (ADS)

    Hao, Qun; Xiao, Yuqing; Cao, Jie; Cheng, Yang; Sun, Ce

    2018-03-01

    An adaptive retina-like sampling model (ARSM) is proposed to balance autofocusing accuracy and efficiency. Based on the model, we carry out comparative experiments between the proposed method and the traditional method in terms of accuracy, the full width at half maximum (FWHM) and time consumption. Results show that the performance of our method is better than that of the traditional method. Meanwhile, typical autofocus functions, including the sum-modified-Laplacian (SML), Laplacian (LAP), mid-frequency DCT (MDCT) and absolute Tenengrad (ATEN), are compared through experiments. The smallest FWHM is obtained with LAP, which is therefore more suitable for evaluating accuracy than the other autofocus functions. The MDCT autofocus function is the most suitable for evaluating real-time capability.
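
    As an illustration of the autofocus functions compared above, a simplified, unthresholded sum-modified-Laplacian (SML) focus measure takes only a few lines of numpy. This is a sketch of the general technique, not the authors' implementation:

    ```python
    import numpy as np

    def sum_modified_laplacian(img):
        """Unthresholded SML focus measure: larger values indicate a
        sharper (better focused) image."""
        img = img.astype(float)
        # Absolute second differences along each axis, cropped to a
        # common (H-2, W-2) interior region.
        dxx = np.abs(2 * img[1:-1, :] - img[:-2, :] - img[2:, :])[:, 1:-1]
        dyy = np.abs(2 * img[:, 1:-1] - img[:, :-2] - img[:, 2:])[1:-1, :]
        return float((dxx + dyy).sum())

    # Illustrative use on a synthetic image: blurring lowers the score.
    rng = np.random.default_rng(0)
    sharp = rng.random((64, 64))
    blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)) / 3
    print(sum_modified_laplacian(sharp) > sum_modified_laplacian(blurred))
    ```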

  13. The application of quadratic optimal cooperative control synthesis to a CH-47 helicopter

    NASA Technical Reports Server (NTRS)

    Townsend, Barbara K.

    1986-01-01

    A control-system design method, Quadratic Optimal Cooperative Control Synthesis (CCS), is applied to the design of a Stability and Control Augmentation System (SCAS). The CCS design method differs from other design methods in that it does not require detailed a priori design criteria, but instead relies on an explicit optimal pilot model to create the desired performance. The design model, which was developed previously for fixed-wing aircraft, is simplified and modified for application to a Boeing Vertol CH-47 helicopter. Two SCAS designs are developed using the CCS design methodology. The resulting CCS designs are then compared with designs obtained using classical/frequency-domain methods and Linear Quadratic Regulator (LQR) theory in a piloted fixed-base simulation. Results indicate that the CCS method, with slight modifications, can be used to produce controller designs that compare favorably with the frequency-domain approach.

  14. Reconstruction method for running shape of rotor blade considering nonlinear stiffness and loads

    NASA Astrophysics Data System (ADS)

    Wang, Yongliang; Kang, Da; Zhong, Jingjun

    2017-10-01

    The aerodynamic and centrifugal loads acting on a rotating blade deform its configuration relative to the shape at rest. Accurate prediction of the running blade configuration plays a significant role in examining and analyzing turbomachinery performance. Considering nonlinear stiffness and loads, a reconstruction method is presented to address the transformation of a rotating blade from the cold to the hot state. When calculating blade deformations, the blade stiffness and load conditions are updated simultaneously as the blade shape varies. The reconstruction procedure is iterated until a converged hot blade shape is obtained. This method has been employed to determine the operating blade shapes of a test rotor blade and the Stage 37 rotor blade. The calculated results are compared with experiments. The results show that the proposed method for predicting the blade operating shape is effective. The studies also show that this method can improve the precision of finite element analysis and aerodynamic performance analysis.

  15. Time-Accurate Solutions of Incompressible Navier-Stokes Equations for Potential Turbopump Applications

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan

    2001-01-01

    Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow that requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.

  16. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  17. Coupling HYDRUS-1D Code with PA-DDS Algorithms for Inverse Calibration

    NASA Astrophysics Data System (ADS)

    Wang, Xiang; Asadzadeh, Masoud; Holländer, Hartmut

    2017-04-01

    Numerical modelling requires calibration to predict future stages. A standard method for calibration is inverse calibration, where multi-objective optimization algorithms are generally used to find a solution, e.g. an optimal set of van Genuchten-Mualem (VGM) parameters to predict water fluxes in the vadose zone. We coupled HYDRUS-1D with PA-DDS to add a new, robust function for inverse calibration to the model. The PA-DDS method is a recently developed multi-objective optimization algorithm that combines Dynamically Dimensioned Search (DDS) and the Pareto Archived Evolution Strategy (PAES). The results were compared to a standard method (the Marquardt-Levenberg method) implemented in HYDRUS-1D. Calibration performance was evaluated using observed and simulated soil moisture at two soil layers in Southern Abbotsford, British Columbia, Canada, in terms of the root mean squared error (RMSE) and the Nash-Sutcliffe efficiency (NSE). Results showed low RMSE values of 0.014 and 0.017 and strong NSE values of 0.961 and 0.939. Compared to the results of the Marquardt-Levenberg method, we obtained better calibration results for the deeper soil sensors; the VGM parameters, however, were similar to those of previous studies. Both methods are equally computationally efficient. We expect that a direct implementation of PA-DDS into HYDRUS-1D would reduce the computational effort further. Thus, the PA-DDS method is efficient for calibrating recharge in complex vadose zone modelling with multiple soil layers and can be a potential tool for the calibration of heat and solute transport. Future work should focus on the effectiveness of PA-DDS for calibrating more complex versions of the model, with more soil layers, and against measured heat and solute transport. Keywords: Recharge, Calibration, HYDRUS-1D, Multi-objective Optimization
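
    The multi-objective selection at the heart of PA-DDS rests on Pareto dominance between candidate parameter sets. A generic dominance check, not the PA-DDS implementation, with invented objective values:

    ```python
    def dominates(f_a, f_b):
        """True if objective vector f_a Pareto-dominates f_b, i.e. it is
        no worse in every objective and strictly better in at least one
        (minimisation)."""
        return (all(a <= b for a, b in zip(f_a, f_b))
                and any(a < b for a, b in zip(f_a, f_b)))

    # Illustrative (RMSE layer 1, RMSE layer 2) pairs for three candidate
    # VGM parameter sets; values are made up.
    candidates = {"p1": (0.014, 0.017), "p2": (0.020, 0.015), "p3": (0.022, 0.019)}
    archive = [name for name, f in candidates.items()
               if not any(dominates(g, f) for g in candidates.values() if g != f)]
    print("non-dominated:", archive)  # p1 and p2; p3 is dominated by p1
    ```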

  18. Method for revealing biases in precision mass measurements

    NASA Astrophysics Data System (ADS)

    Vabson, V.; Vendt, R.; Kübarsepp, T.; Noorma, M.

    2013-02-01

    A practical method for the quantification of systematic errors of large-scale automatic comparators is presented. This method is based on a comparison of the performance of two different comparators. First, the differences of 16 equal partial loads of 1 kg are measured with a high-resolution mass comparator featuring insignificant bias and 1 kg maximum load. At the second stage, a large-scale comparator is tested by using combined loads with known mass differences. Comparing the different results, the biases of any comparator can be easily revealed. These large-scale comparator biases are determined over a 16-month period, and for the 1 kg loads, a typical pattern of biases in the range of ±0.4 mg is observed. The temperature differences recorded inside the comparator concurrently with mass measurements are found to remain within a range of ±30 mK, which obviously has a minor effect on the detected biases. Seasonal variations imply that the biases likely arise mainly due to the functioning of the environmental control at the measurement location.

  19. Dosimetry and microdosimetry using COTS ICs: A comparative study

    NASA Technical Reports Server (NTRS)

    Scheick, L.; Swift, G.; Guertin, S.; Roth, D.; McNulty, P.; Nguyen, D.

    2002-01-01

    A new method using an array of MOS transistors for measuring dose absorbed from ionizing radiation is compared to previous dosimetric methods. The accuracy and precision of dosimetry based on COTS SRAMs, DRAMs, and WPROMs are compared and contrasted. Applications of these devices in various space missions will be discussed. TID results are presented in this summary, and microdosimetric results will be added to the full paper. Finally, an analysis of the optimal conditions for a digital dosimeter will be presented.

  20. Application of Reverse Transcriptase -PCR (RT-PCR) for rapid detection of viable Escherichia coli in drinking water samples.

    PubMed

    Molaee, Neda; Abtahi, Hamid; Ghannadzadeh, Mohammad Javad; Karimi, Masoude; Ghaznavi-Rad, Ehsanollah

    2015-01-01

    Polymerase chain reaction (PCR) is preferred to other methods for detecting Escherichia coli (E. coli) in water in terms of speed, accuracy and efficiency. False-positive results are considered the major disadvantage of PCR; reverse transcriptase-polymerase chain reaction (RT-PCR) can be used to solve this problem. The aim of the present study was to determine the efficiency of RT-PCR for rapid detection of viable E. coli in drinking water samples and to enhance its sensitivity through the application of different filter membranes. Specific primers were designed for the 16S rRNA and elongation factor Tu (EF-Tu) genes. Different concentrations of bacteria were passed through FHLP and HAWP filters. Then, RT-PCR was performed using the 16S rRNA and EF-Tu primers. Contamination of 10 wells in Arak city was determined by RT-PCR. To evaluate RT-PCR efficiency, the results were compared with the most probable number (MPN) method. RT-PCR was able to detect bacteria at the different concentrations tested. Application of the EF-Tu primers reduced false-positive results compared to the 16S rRNA primers. The FHLP hydrophobic filters have a higher ability to retain bacteria than the HAWP hydrophilic filters, so the use of hydrophobic filters increases the sensitivity of RT-PCR. RT-PCR shows a higher sensitivity compared to the conventional water contamination detection method and, unlike PCR, does not lead to false-positive results. The use of EF-Tu primers can reduce the incidence of false-positive results, and hydrophobic filters retain bacteria better than hydrophilic filters.

  1. Comparing the Similarity of Responses Received from Studies in Amazon’s Mechanical Turk to Studies Conducted Online and with Direct Recruitment

    PubMed Central

    Bartneck, Christoph; Duenser, Andreas; Moltchanova, Elena; Zawieska, Karolina

    2015-01-01

    Computer- and internet-based questionnaires have become a standard tool in Human-Computer Interaction research and other related fields, such as psychology and sociology. Amazon’s Mechanical Turk (AMT) service is a new method of recruiting participants and conducting certain types of experiments. This study compares whether participants recruited through AMT give different responses than participants recruited through an online forum or recruited directly on a university campus. Moreover, we compare whether a study conducted within AMT results in different responses compared to a study for which participants are recruited through AMT but which is conducted using an external online questionnaire service. The results of this study show that there is a statistical difference between results obtained from participants recruited through AMT and the results from participants recruited on campus or through online forums. We do, however, argue that this difference is so small that it has no practical consequence. There was no significant difference between running the study within AMT and running it with an online questionnaire service. There was no significant difference between results obtained directly from within AMT and results obtained in the campus and online forum conditions. This may suggest that AMT is a viable and economical option for recruiting participants and for conducting studies, as setting up and running a study with AMT generally requires less effort and time compared to other frequently used methods. We discuss our findings as well as limitations of using AMT for empirical studies. PMID:25876027

  2. Comparing the similarity of responses received from studies in Amazon's Mechanical Turk to studies conducted online and with direct recruitment.

    PubMed

    Bartneck, Christoph; Duenser, Andreas; Moltchanova, Elena; Zawieska, Karolina

    2015-01-01

    Computer- and internet-based questionnaires have become a standard tool in Human-Computer Interaction research and other related fields, such as psychology and sociology. Amazon's Mechanical Turk (AMT) service is a new method of recruiting participants and conducting certain types of experiments. This study compares whether participants recruited through AMT give different responses than participants recruited through an online forum or recruited directly on a university campus. Moreover, we compare whether a study conducted within AMT results in different responses compared to a study for which participants are recruited through AMT but which is conducted using an external online questionnaire service. The results of this study show that there is a statistical difference between results obtained from participants recruited through AMT and the results from participants recruited on campus or through online forums. We do, however, argue that this difference is so small that it has no practical consequence. There was no significant difference between running the study within AMT and running it with an online questionnaire service. There was no significant difference between results obtained directly from within AMT and results obtained in the campus and online forum conditions. This may suggest that AMT is a viable and economical option for recruiting participants and for conducting studies, as setting up and running a study with AMT generally requires less effort and time compared to other frequently used methods. We discuss our findings as well as limitations of using AMT for empirical studies.

  3. Estimating the mediating effect of different biomarkers on the relation of alcohol consumption with the risk of type 2 diabetes.

    PubMed

    Beulens, Joline W J; van der Schouw, Yvonne T; Moons, Karel G M; Boshuizen, Hendriek C; van der A, Daphne L; Groenwold, Rolf H H

    2013-04-01

    Moderate alcohol consumption is associated with a reduced type 2 diabetes risk, but the biomarkers that explain this relation are unknown. The most commonly used method to estimate the proportion of an effect explained by a biomarker is the difference method; however, the influence of alcohol-biomarker interaction on its results is unclear. The G-estimation method has been proposed to assess the proportion explained accurately, but how this method compares with the difference method is unknown. In a case-cohort study of 2498 controls and 919 incident diabetes cases, we estimated the proportion of the relation between alcohol consumption and diabetes explained by different biomarkers using the difference method and the sequential G-estimation method. Using the difference method, high-density lipoprotein cholesterol explained 78% (95% confidence interval [CI], 41-243) of the relation between alcohol and diabetes, whereas high-sensitivity C-reactive protein (-7.5%; -36.4 to 1.8) and blood pressure (-6.9; -26.3 to -0.6) did not explain the relation. Interaction between alcohol and liver enzymes led to bias in the proportion explained, with different outcomes for different levels of liver enzymes. The G-estimation method showed comparable results, but the proportions explained were lower. The relation between alcohol consumption and diabetes may be largely explained by increased high-density lipoprotein cholesterol but not by the other biomarkers. Ignoring exposure-mediator interactions may result in bias. The difference and G-estimation methods provide similar results. Copyright © 2013 Elsevier Inc. All rights reserved.
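
    The difference method referenced above contrasts the exposure coefficient before and after adjustment for the candidate mediator. A schematic sketch with invented log hazard ratios, not the study's estimates:

    ```python
    import numpy as np

    # Illustrative log hazard ratios for moderate alcohol use vs none,
    # from models without and with the candidate mediator (made-up values).
    beta_total = np.log(0.70)   # unadjusted association with diabetes
    beta_direct = np.log(0.92)  # association after adjusting for HDL cholesterol

    # Difference method: share of the total effect attributed to the mediator.
    proportion_explained = (beta_total - beta_direct) / beta_total
    print(f"proportion explained ~ {proportion_explained:.0%}")
    ```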

  4. Elastic models: a comparative study applied to retinal images.

    PubMed

    Karali, E; Lambropoulou, S; Koutsouris, D

    2011-01-01

    In this work, various parametric elastic-model methods are compared, namely the classical snake, the gradient vector field snake (GVF snake) and the topology-adaptive snake (t-snake), as well as the method of the self-affine mapping system as an alternative to elastic models. We also give a brief overview of the methods used. The self-affine mapping system is implemented using an adaptive scheme and minimum distance as the optimization criterion, which is more suitable for the detection of weak edges. All methods are applied to glaucomatous retinal images with the purpose of segmenting the optic disc. The methods are compared in terms of segmentation accuracy and speed, as derived from cross-correlation coefficients between real and algorithm-extracted contours and from segmentation time, respectively. As a result, the method of the self-affine mapping system presents adequate segmentation time and segmentation accuracy, and significant independence from initialization.

  5. Comparison of the Modified-Hodge test, Carba NP test, and carbapenem inactivation method as screening methods for carbapenemase-producing Enterobacteriaceae.

    PubMed

    Yamada, Kageto; Kashiwa, Machiko; Arai, Katsumi; Nagano, Noriyuki; Saito, Ryoichi

    2016-09-01

    We compared three screening methods for carbapenemase-producing Enterobacteriaceae. While the Modified-Hodge test and Carba NP test produced false-negative results for OXA-48-like and mucoid NDM producers, the carbapenem inactivation method (CIM) showed positive results for these isolates. Although the CIM required cultivation time, it is well suited for general clinical laboratories. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Comparison of Arterial Spin-labeling Perfusion Images at Different Spatial Normalization Methods Based on Voxel-based Statistical Analysis.

    PubMed

    Tani, Kazuki; Mio, Motohira; Toyofuku, Tatsuo; Kato, Shinichi; Masumoto, Tomoya; Ijichi, Tetsuya; Matsushima, Masatoshi; Morimoto, Shoichi; Hirata, Takumi

    2017-01-01

    Spatial normalization is a significant image pre-processing operation in statistical parametric mapping (SPM) analysis. The purpose of this study was to clarify the optimal method of spatial normalization for improving diagnostic accuracy in SPM analysis of arterial spin-labeling (ASL) perfusion images. We evaluated the SPM results of five spatial normalization methods obtained by comparing patients with Alzheimer's disease or normal pressure hydrocephalus complicated with dementia against cognitively healthy subjects. We used the following methods: 3DT1-conventional, based on spatial normalization using anatomical images; 3DT1-DARTEL, based on spatial normalization with DARTEL using anatomical images; 3DT1-conventional template and 3DT1-DARTEL template, created by averaging cognitively healthy subjects spatially normalized using the above methods; and ASL-DARTEL template, created by averaging cognitively healthy subjects spatially normalized with DARTEL using ASL images only. Our results showed that the ASL-DARTEL template was smaller than the other two templates, and the SPM results obtained with the ASL-DARTEL template method were inaccurate. There were no significant differences between the 3DT1-conventional and 3DT1-DARTEL template methods. In contrast, the 3DT1-DARTEL method showed higher detection sensitivity and more precise anatomical localization. Our SPM results suggest that spatial normalization should be performed with DARTEL using anatomical images.

  7. Comparative Evaluation of Two Methods to Estimate Natural Gas Production in Texas

    EIA Publications

    2003-01-01

    This report describes an evaluation conducted by the Energy Information Administration (EIA) in August 2003 of two methods that estimate natural gas production in Texas. The first method (parametric method) was used by EIA from February through August 2003 and the second method (multinomial method) replaced it starting in September 2003, based on the results of this evaluation.

  8. On the numerical calculation of hydrodynamic shock waves in atmospheres by an FCT method

    NASA Astrophysics Data System (ADS)

    Schmitz, F.; Fleck, B.

    1993-11-01

    The numerical calculation of vertically propagating hydrodynamic shock waves in a plane atmosphere by the ETBFCT version of the flux-corrected transport (FCT) method of Boris and Book is discussed. The results are compared with results obtained by a characteristics method with shock fitting. We show that the use of the internal energy density as a dependent variable instead of the total energy density can give very inaccurate results. Accordingly, discretization rules for the gravitational source terms are derived. The improvement of the results by an additional iteration step is discussed. It appears that the FCT method is an excellent method for the accurate calculation of shock waves in an atmosphere.

  9. Models of convection-driven tectonic plates - A comparison of methods and results

    NASA Technical Reports Server (NTRS)

    King, Scott D.; Gable, Carl W.; Weinstein, Stuart A.

    1992-01-01

    Recent numerical studies of convection in the earth's mantle have included various features of plate tectonics. This paper describes three methods of modeling plates: through material properties, through force balance, and through a thin power-law sheet approximation. The results obtained are compared using each method on a series of simple calculations. From these results, scaling relations between the different parameterizations are developed. While each method produces different degrees of deformation within the surface plate, the surface heat flux and average plate velocity agree to within a few percent. The main results are not dependent upon the plate modeling method and therefore are representative of the physical system modeled.

  10. Investigation of Super Learner Methodology on HIV-1 Small Sample: Application on Jaguar Trial Data.

    PubMed

    Houssaïni, Allal; Assoumou, Lambert; Marcelin, Anne Geneviève; Molina, Jean Michel; Calvez, Vincent; Flandre, Philippe

    2012-01-01

    Background. Many statistical models have been tested to predict phenotypic or virological response from genotypic data. A statistical framework called Super Learner has been introduced either to compare different methods/learners (discrete Super Learner) or to combine them into a Super Learner prediction method. Methods. The Jaguar trial is used to apply the Super Learner framework. The Jaguar study is an "add-on" trial comparing the efficacy of adding didanosine to an ongoing failing regimen. Our aim was also to investigate the impact of using different cross-validation strategies and different loss functions. Four different partitions between training and validation sets were tested with two loss functions. Six statistical methods were compared. We assessed performance by evaluating R² values and accuracy by calculating the rates of patients correctly classified. Results. Our results indicated that the more recent Super Learner methodology of building a new predictor based on a weighted combination of different methods/learners provided good performance. A simple linear model provided results similar to those of this new predictor. Slight discrepancies arose between the two loss functions investigated and between results based on cross-validated risks and results from the full dataset. The Super Learner methodology and the linear model correctly classified around 80% of patients; the difference between the lowest and highest rates was around 10 percentage points. The number of mutations retained in the different learners also varied from 1 to 41. Conclusions. The more recent Super Learner methodology of combining the predictions of many learners provided good performance on our small dataset.
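
    The weighted combination at the core of the Super Learner can be sketched as constrained least squares on cross-validated predictions. The toy example below (invented values, not the Jaguar data) uses non-negative least squares and renormalizes the weights to a convex combination:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Columns = cross-validated predictions of three base learners for the
    # same patients; y = observed outcomes. All values are illustrative.
    Z = np.array([[0.8, 0.6, 0.9],
                  [0.2, 0.3, 0.1],
                  [0.5, 0.6, 0.4],
                  [0.9, 0.7, 0.8]])
    y = np.array([1.0, 0.0, 0.5, 0.9])

    # Non-negative weights minimising squared cross-validated risk,
    # normalised to sum to one as in the Super Learner convex combination.
    w, _ = nnls(Z, y)
    w /= w.sum()
    print("learner weights:", np.round(w, 2))
    ```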

  11. Comparing supervised and unsupervised multiresolution segmentation approaches for extracting buildings from very high resolution imagery.

    PubMed

    Belgiu, Mariana; Drăguţ, Lucian

    2014-10-01

    Although multiresolution segmentation (MRS) is a powerful technique for dealing with very high resolution imagery, some of the image objects that it generates do not match the geometries of the target objects, which reduces the classification accuracy. MRS can, however, be guided to produce results that approach the desired object geometry using either supervised or unsupervised approaches. Although some studies have suggested that a supervised approach is preferable, there has been no comparative evaluation of these two approaches. Therefore, in this study, we have compared supervised and unsupervised approaches to MRS. One supervised and two unsupervised segmentation methods were tested on three areas using QuickBird and WorldView-2 satellite imagery. The results were assessed using both segmentation evaluation methods and an accuracy assessment of the resulting building classifications. Thus, differences in the geometries of the image objects and in the potential to achieve satisfactory thematic accuracies were evaluated. The two approaches yielded remarkably similar classification results, with overall accuracies ranging from 82% to 86%. The performance of one of the unsupervised methods was unexpectedly similar to that of the supervised method; they identified almost identical scale parameters as being optimal for segmenting buildings, resulting in very similar geometries for the resulting image objects. The second unsupervised method produced very different image objects from the supervised method, but their classification accuracies were still very similar. The latter result was unexpected because, contrary to previously published findings, it suggests a high degree of independence between the segmentation results and classification accuracy. The results of this study have two important implications. The first is that object-based image analysis can be automated without sacrificing classification accuracy, and the second is that the previously accepted idea that classification is dependent on segmentation is challenged by our unexpected results, casting doubt on the value of pursuing 'optimal segmentation'. Our results rather suggest that as long as under-segmentation remains at acceptable levels, imperfections in segmentation can be ruled out, so that a high level of classification accuracy can still be achieved.

  12. Evaluation of bearing capacity of piles from cone penetration test data.

    DOT National Transportation Integrated Search

    2007-12-01

    A statistical analysis and ranking criteria were used to compare the CPT methods and the conventional alpha design method. Based on the results, the de Ruiter/Beringen and LCPC methods showed the best capability in predicting the measured load carryi...

  13. METHOD FOR EVALUATING MOLD GROWTH ON CEILING TILE

    EPA Science Inventory

    A method to extract mold spores from porous ceiling tiles was developed using a masticator blender. Ceiling tiles were inoculated and analyzed using four species of mold. Statistical analysis comparing results obtained by masticator extraction and the swab method was performed. T...

  14. SELECTING SITES FOR COMPARISON WITH CREATED WETLANDS

    EPA Science Inventory

    The paper describes the method used for selecting natural wetlands to compare with created wetlands. The results of the selection process and the advantages and disadvantages of the method are discussed. The random site selection method required extensive field work and may have ...

  15. Estimations of global warming potentials from computational chemistry calculations for CH(2)F(2) and other fluorinated methyl species verified by comparison to experiment.

    PubMed

    Blowers, Paul; Hollingshead, Kyle

    2009-05-21

    In this work, the global warming potential (GWP) of methylene fluoride (CH(2)F(2)), or HFC-32, is estimated through computational chemistry methods. We find that our computational chemistry approach reproduces well all phenomena important for predicting global warming potentials. Geometries predicted using the B3LYP/6-311g** method were in good agreement with experiment, although some other computational methods performed slightly better. Frequencies needed for partition function calculations in transition-state theory and infrared intensities needed for radiative forcing estimates agreed well with experiment compared to other computational methods. A modified CBS-RAD method used to obtain energies led to results superior to all previous heat-of-reaction estimates and most barrier-height calculations when the B3LYP/6-311g** optimized geometry was used as the base structure. Use of the small-curvature tunneling correction and a hindered-rotor treatment, where appropriate, led to accurate reaction rate constants and radiative forcing estimates without requiring any experimental data. Atmospheric lifetimes from theory at 277 K were indistinguishable from experimental results, as were the final global warming potentials. This is the first time entirely computational methods have been applied to estimate a global warming potential for a chemical, and we have found the approach to be robust, inexpensive, and accurate compared to prior experimental results. The methodology was subsequently used to estimate GWPs for three additional species [methane (CH(4)); fluoromethane (CH(3)F), or HFC-41; and fluoroform (CHF(3)), or HFC-23], for which the estimates also compare favorably to experimental values.

  16. Influence of the antagonist material on the wear of different composites using two different wear simulation methods.

    PubMed

    Heintze, S D; Zellweger, G; Cavalleri, A; Ferracane, J

    2006-02-01

    The aim of the study was to evaluate two ceramic materials as possible substitutes for enamel using two wear simulation methods, and to compare both methods with regard to the wear results for different materials. Flat specimens (OHSU n=6, Ivoclar n=8) of one compomer and three composite materials (Dyract AP, Tetric Ceram, Z250, experimental composite) were fabricated and subjected to wear using two different wear testing methods and two pressable ceramic materials as stylus (Empress, experimental ceramic). For the OHSU method, enamel styli of the same dimensions as the ceramic stylus were fabricated additionally. Both wear testing methods differ with regard to loading force, lateral movement of stylus, stylus dimension, number of cycles, thermocycling and abrasive medium. In the OHSU method, the wear facets (mean vertical loss) were measured using a contact profilometer, while in the Ivoclar method (maximal vertical loss) a laser scanner was used for this purpose. Additionally, the vertical loss of the ceramic stylus was quantified for the Ivoclar method. The results obtained from each method were compared by ANOVA and Tukey's test (p<0.05). To compare both wear methods, the log-transformed data were used to establish relative ranks between material/stylus combinations and assessed by applying the Pearson correlation coefficient. The experimental ceramic material generated significantly less wear in Tetric Ceram and Z250 specimens compared to the Empress stylus in the Ivoclar method, whereas with the OHSU method, no difference between the two ceramic antagonists was found with regard to abrasion or attrition. The wear generated by the enamel stylus was not statistically different from that generated by the other two ceramic materials in the OHSU method. With the Ivoclar method, wear of the ceramic stylus was only statistically different when in contact with Tetric Ceram. There was a close correlation between the attrition wear of the OHSU and the wear of the Ivoclar method (Pearson coefficient 0.83, p=0.01). Pressable ceramic materials can be used as a substitute for enamel in wear testing machines. However, material ranking may be affected by the type of ceramic material chosen. The attrition wear of the OHSU method was comparable with the wear generated with the Ivoclar method.

  17. Prediction of welding shrinkage deformation of bridge steel box girder based on wavelet neural network

    NASA Astrophysics Data System (ADS)

    Tao, Yulong; Miao, Yunshui; Han, Jiaqi; Yan, Feiyun

    2018-05-01

    Aiming at the low accuracy of traditional forecasting methods such as linear regression, this paper presents a wavelet neural network method for predicting the welding shrinkage deformation of a bridge steel box girder. Compared with traditional forecasting methods, this scheme has better local characterization and learning ability, which greatly improves the prediction of deformation. A case study shows that the wavelet neural network predicts the steel box girder deformation more accurately than the traditional methods and outperforms BP neural network predictions, meeting the practical demands of engineering design.

  18. A comparative study on effect of e-learning and instructor-led methods on nurses’ documentation competency

    PubMed Central

    Abbaszadeh, Abbas; Sabeghi, Hakimeh; Borhani, Fariba; Heydari, Abbas

    2011-01-01

    BACKGROUND: Accurate recording of nursing care reflects care performance and its quality, such that any failure in documentation can be a cause of inadequate patient care. Therefore, improving nurses' skills in this field using effective educational methods is of high importance. Since traditional teaching methods are not suitable for communities with rapid knowledge expansion and constant change, e-learning methods can be a viable alternative. To show the importance of e-learning methods for nurses' care reporting skills, this study was performed to compare e-learning with the traditional instructor-led method. METHODS: This was a quasi-experimental study aimed at comparing the effect of two teaching methods (e-learning and lecture) on nursing documentation and examining the differences in acquired documentation competency between nurses who participated in e-learning (n = 30) and nurses in a lecture group (n = 31). RESULTS: The results of the present study indicated that there was no statistically significant difference between the two groups. The findings also revealed no statistically significant correlation between the two groups with regard to demographic variables. However, we believe that, given the benefits of e-learning over the traditional instructor-led method and their equal effect on nurses' documentation competency, e-learning can be a qualified substitute for the traditional instructor-led method. CONCLUSIONS: E-learning as a student-centered method, like the lecture method, promotes nurses' competency in documentation. Therefore, e-learning can be used to facilitate the implementation of nursing educational programs. PMID:22224113

  19. An iterative method for near-field Fresnel region polychromatic phase contrast imaging

    NASA Astrophysics Data System (ADS)

    Carroll, Aidan J.; van Riessen, Grant A.; Balaur, Eugeniu; Dolbnya, Igor P.; Tran, Giang N.; Peele, Andrew G.

    2017-07-01

    We present an iterative method for polychromatic phase contrast imaging that is suitable for broadband illumination and which allows for the quantitative determination of the thickness of an object given the refractive index of the sample material. Experimental and simulation results suggest the iterative method provides comparable image quality and quantitative object thickness determination when compared to the analytical polychromatic transport of intensity and contrast transfer function methods. The ability of the iterative method to work over a wider range of experimental conditions means the iterative method is a suitable candidate for use with polychromatic illumination and may deliver more utility for laboratory-based x-ray sources, which typically have a broad spectrum.

  20. DEVELOPMENT AND VALIDATION OF BROMATOMETRIC, DIAZOTIZATION AND VIS-SPECTROPHOTOMETRIC METHODS FOR THE DETERMINATION OF MESALAZINE IN PHARMACEUTICAL FORMULATION.

    PubMed

    Zawada, Elzabieta; Pirianowicz-Chaber, Elzabieta; Somogi, Aleksander; Pawinski, Tomasz

    2017-03-01

    Three new methods were developed for the quantitative determination of mesalazine in the form of the pure substance or in the form of suppositories and tablets: a bromatometric method, a diazotization method and a visible-light spectrophotometric method. By optimizing the time and temperature of the bromination reaction (50°C, 50 min), 4-amino-2,3,5,6-tetrabromophenol was obtained. The results obtained were reproducible, accurate and precise. The developed methods were compared to the pharmacopoeial approach, alkalimetry in an aqueous medium. The validation parameters of all methods were comparable. The developed methods for the quantification of mesalazine are a viable alternative to other, more expensive approaches.

  1. The calculation of viscosity of liquid n-decane and n-hexadecane by the Green-Kubo method

    NASA Astrophysics Data System (ADS)

    Cui, S. T.; Cummings, P. T.; Cochran, H. D.

    This short commentary presents the results of long molecular dynamics simulations of the shear viscosity of liquid n-decane and n-hexadecane using the Green-Kubo integration method. The relaxation time of the stress-stress correlation function is compared with those of rotation and diffusion. The rotational and diffusional relaxation times, which are easy to calculate, provide useful guides for the required simulation time in viscosity calculations. The computational time required for viscosity calculations of these systems by the Green-Kubo method is also compared with the time required for previous non-equilibrium molecular dynamics calculations of the same systems. The method of choice for a particular calculation is determined largely by the properties of interest, since the efficiencies of the two methods are comparable for calculation of the zero-strain-rate viscosity.
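
    The Green-Kubo route evaluates the shear viscosity as eta = V/(k_B*T) times the time integral of the stress autocorrelation function <P_xy(0)P_xy(t)>. A schematic numpy sketch, assuming a pre-computed off-diagonal pressure-tensor time series in SI units:

    ```python
    import numpy as np

    def green_kubo_viscosity(pxy, dt, volume, temperature, k_b=1.380649e-23):
        """Shear viscosity (Pa s) from the Green-Kubo relation, given a
        time series of one off-diagonal pressure-tensor component (Pa)
        sampled every dt seconds from an equilibrium MD run."""
        n = len(pxy)
        # Stress autocorrelation function <P_xy(0) P_xy(t)> by direct
        # average, truncated at half the trajectory for usable statistics.
        acf = np.array([np.mean(pxy[: n - lag] * pxy[lag:]) for lag in range(n // 2)])
        # eta = V / (k_B T) * integral of the ACF over time.
        return volume / (k_b * temperature) * np.trapz(acf, dx=dt)

    # Usage sketch (hypothetical numbers):
    # eta = green_kubo_viscosity(pxy, dt=1e-15, volume=8e-27, temperature=300.0)
    ```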

  2. Comparing Methods for Dynamic Airspace Configuration

    NASA Technical Reports Server (NTRS)

    Zelinski, Shannon; Lai, Chok Fung

    2011-01-01

    This paper compares airspace design solutions for dynamically reconfiguring airspace in response to nominal daily traffic volume fluctuation. Airspace designs from seven algorithmic methods and a representation of current-day operations in Kansas City Center were simulated with twice today's demand traffic. A three-configuration scenario was used to represent current-day operations. The algorithms used projected unimpeded flight tracks to design initial 24-hour plans that switch between three configurations at predetermined reconfiguration times. At each reconfiguration time, the algorithms used updated projected flight tracks to update the subsequent planned configurations. Compared to the baseline, most airspace design methods reduced delay and increased reconfiguration complexity, with similar traffic-pattern complexity results. Design updates enabled several methods to cut the delay of their original designs by as much as half. Freeform design methods reduced delay and increased reconfiguration complexity the most.

  3. Use of continuous and grab sample data for calculating total maximum daily load (TMDL) in agricultural watersheds.

    PubMed

    Gulati, Shelly; Stubblefield, Ashley A; Hanlon, Jeremy S; Spier, Chelsea L; Stringfellow, William T

    2014-03-01

    Measuring the discharge of diffuse pollution from agricultural watersheds presents unique challenges. Flows in agricultural watersheds, particularly in Mediterranean climates, can be predominately irrigation runoff and exhibit large diurnal fluctuations in both volume and concentration. Flow and pollutant concentrations in these smaller watersheds dominated by human activity do not conform to a normal distribution, and it is not clear whether parametric methods are appropriate or accurate for load calculations. The objective of this study was to compare the accuracy of five load estimation methods for calculating pollutant loads from agricultural watersheds. Calculation of loads using results from discrete (grab) samples was compared with the true load computed using in situ continuous monitoring measurements. A new method is introduced that uses a non-parametric measure of central tendency (the median) to calculate loads (median-load). The median-load method was compared to more commonly used parametric estimation methods that rely on the mean as a measure of central tendency (mean-load and daily-load), a method that utilizes the total flow volume (volume-load), and a method that uses a measurement of flow at the time of sampling (instantaneous-load). Using measurements from ten watersheds in the San Joaquin Valley of California, the average percent error compared to the true load for total dissolved solids (TDS) was 7.3% for the median-load, 6.9% for the mean-load, 6.9% for the volume-load, 16.9% for the instantaneous-load, and 18.7% for the daily-load methods of calculation. The results of this study show that parametric methods are surprisingly accurate, even for data that have starkly non-normal distributions and are highly skewed. Copyright © 2013 Elsevier Ltd. All rights reserved.
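
    The load-estimation methods compared above differ mainly in the statistic applied to the grab-sample concentrations. A simplified sketch, loosely following the naming in the abstract, with invented concentrations and flows and simplified unit handling:

    ```python
    import numpy as np

    # Illustrative daily grab-sample TDS concentrations (mg/L) and daily
    # mean flow volumes (L/day) over one week; values are made up.
    conc = np.array([410.0, 395.0, 430.0, 520.0, 480.0, 405.0, 415.0])
    flow = np.array([2.1e6, 2.0e6, 2.4e6, 3.0e6, 2.8e6, 2.2e6, 2.1e6])
    total_volume = flow.sum()  # L over the period

    # mean-load: mean concentration applied to the total flow volume.
    mean_load = conc.mean() * total_volume        # mg
    # median-load: robust central tendency in place of the mean.
    median_load = np.median(conc) * total_volume  # mg
    # instantaneous-load: per-sample concentration times concurrent flow,
    # summed over the period.
    inst_load = np.sum(conc * flow)               # mg

    for name, load in [("mean", mean_load), ("median", median_load),
                       ("instantaneous", inst_load)]:
        print(f"{name}-load: {load / 1e6:,.0f} kg")
    ```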

  4. Analytical analysis and implementation of a low-speed high-torque permanent magnet vernier in-wheel motor for electric vehicle

    NASA Astrophysics Data System (ADS)

    Li, Jiangui; Wang, Junhua; Zhigang, Zhao; Yan, Weili

    2012-04-01

    In this paper, an analytical analysis of the permanent magnet vernier (PMV) machine is presented. The key is to analytically solve the governing Laplacian/quasi-Poissonian field equations in the motor regions. The analytical method is verified using the time-stepping finite element method, and the performance of the PMV machine is quantitatively compared with the analytical results. The analytical results agree well with the finite element method results. Finally, experimental results are given to further show the validity of the analysis.

  5. Use of the landmark method to address immortal person-time bias in comparative effectiveness research: a simulation study.

    PubMed

    Mi, Xiaojuan; Hammill, Bradley G; Curtis, Lesley H; Lai, Edward Chia-Cheng; Setoguchi, Soko

    2016-11-20

    Observational comparative effectiveness and safety studies are often subject to immortal person-time, a period of follow-up during which outcomes cannot occur because of the treatment definition. Common approaches, like excluding immortal time from the analysis or naïvely including immortal time in the analysis, are known to result in biased estimates of treatment effect. Other approaches, such as the Mantel-Byar and landmark methods, have been proposed to handle immortal time. Little is known about the performance of the landmark method in different scenarios. We conducted extensive Monte Carlo simulations to assess the performance of the landmark method compared with other methods in settings that reflect realistic scenarios. We considered four landmark times for the landmark method. We found that the Mantel-Byar method provided unbiased estimates in all scenarios, whereas the exclusion and naïve methods resulted in substantial bias when the hazard of the event was constant or decreased over time. The landmark method performed well in correcting immortal person-time bias in all scenarios when the treatment effect was small, and provided unbiased estimates when there was no treatment effect. The bias associated with the landmark method tended to be small when the treatment rate was higher in the early follow-up period than it was later. These findings were confirmed in a case study of chronic obstructive pulmonary disease. Copyright © 2016 John Wiley & Sons, Ltd.
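
    A landmark analysis can be sketched in a few lines: keep only subjects still event-free at the chosen landmark time, fix treatment status by exposure before the landmark, and restart the follow-up clock there. A hypothetical pandas sketch with invented data:

    ```python
    import pandas as pd

    # Illustrative cohort: follow-up times (days), event indicators, and
    # treatment start times (None = never treated); values are invented.
    df = pd.DataFrame({
        "followup": [400, 30, 250, 90, 500],
        "event":    [1,   1,  0,   1,  0],
        "tx_start": [60, None, 20, 200, 45],
    })

    LANDMARK = 90  # days; in practice several landmark times are examined

    # Keep only subjects still under follow-up at the landmark.
    lm = df[df["followup"] >= LANDMARK].copy()
    # Treatment status is fixed by exposure *before* the landmark, which
    # removes the immortal period between cohort entry and treatment start.
    lm["treated"] = lm["tx_start"].notna() & (lm["tx_start"] <= LANDMARK)
    # Restart the clock at the landmark for the subsequent survival model.
    lm["followup_lm"] = lm["followup"] - LANDMARK
    print(lm[["treated", "followup_lm", "event"]])
    ```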

  6. Comparing methods of measuring geographic patterns in temporal trends: an application to county-level heart disease mortality in the United States, 1973 to 2010.

    PubMed

    Vaughan, Adam S; Kramer, Michael R; Waller, Lance A; Schieb, Linda J; Greer, Sophia; Casper, Michele

    2015-05-01

    To demonstrate the implications of choosing analytical methods for quantifying spatiotemporal trends, we compare the assumptions, implementation, and outcomes of popular methods using county-level heart disease mortality in the United States between 1973 and 2010. We applied four regression-based approaches (joinpoint regression, both aspatial and spatial generalized linear mixed models, and Bayesian space-time model) and compared resulting inferences for geographic patterns of local estimates of annual percent change and associated uncertainty. The average local percent change in heart disease mortality from each method was -4.5%, with the Bayesian model having the smallest range of values. The associated uncertainty in percent change differed markedly across the methods, with the Bayesian space-time model producing the narrowest range of variance (0.0-0.8). The geographic pattern of percent change was consistent across methods with smaller declines in the South Central United States and larger declines in the Northeast and Midwest. However, the geographic patterns of uncertainty differed markedly between methods. The similarity of results, including geographic patterns, for magnitude of percent change across these methods validates the underlying spatial pattern of declines in heart disease mortality. However, marked differences in degree of uncertainty indicate that Bayesian modeling offers substantially more precise estimates. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Signal-to-noise ratio comparison of encoding methods for hyperpolarized noble gas MRI

    NASA Technical Reports Server (NTRS)

    Zhao, L.; Venkatesh, A. K.; Albert, M. S.; Panych, L. P.

    2001-01-01

    Some non-Fourier encoding methods such as wavelet and direct encoding use spatially localized bases. The spatial localization feature of these methods enables optimized encoding for improved spatial and temporal resolution during dynamically adaptive MR imaging. These spatially localized bases, however, have inherently reduced image signal-to-noise ratio compared with Fourier or Hadamard encoding for proton imaging. Hyperpolarized noble gases, on the other hand, have quite different MR properties compared to protons, primarily the nonrenewability of the signal. It could be expected, therefore, that the characteristics of image SNR with respect to encoding method will also be very different for hyperpolarized noble gas MRI compared to proton MRI. In this article, hyperpolarized noble gas image SNRs of different encoding methods are compared theoretically using a matrix description of the encoding process. It is shown that image SNR for hyperpolarized noble gas imaging is maximized for any orthonormal encoding method. Methods are then proposed for designing RF pulses to achieve normalized encoding profiles using Fourier, Hadamard, wavelet, and direct encoding methods for hyperpolarized noble gases. Theoretical results are confirmed with hyperpolarized noble gas MRI experiments. Copyright 2001 Academic Press.
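
    The central claim, that any orthonormal encoding gives the same image SNR, can be illustrated through the noise amplification of the reconstruction E^{-1}; a numpy sketch over three orthonormal encoding matrices (an illustration, not the authors' code):

    ```python
    import numpy as np
    from scipy.linalg import hadamard

    N = 64
    # Orthonormal encoding matrices: Fourier (unitary DFT), Hadamard, direct.
    fourier = np.fft.fft(np.eye(N)) / np.sqrt(N)
    had = hadamard(N) / np.sqrt(N)
    direct = np.eye(N)

    for name, E in [("Fourier", fourier), ("Hadamard", had), ("direct", direct)]:
        # Reconstruction x = E^{-1} y amplifies i.i.d. measurement noise by
        # the RMS row norm of E^{-1}, which equals 1 for any orthonormal E.
        amp = np.linalg.norm(np.linalg.inv(E), "fro") / np.sqrt(N)
        print(f"{name:8s} noise amplification: {amp:.3f}")
    ```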

  8. A method for removing arm backscatter from EPID images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Brian W.; Greer, Peter B.; School of Mathematical and Physical Sciences, University of Newcastle, Newcastle, New South Wales 2308

    2013-07-15

    Purpose: To develop a method for removing the support arm backscatter from images acquired using current Varian electronic portal imaging devices (EPIDs). Methods: The effect of arm backscatter on EPID images was modeled using a kernel convolution method. The parameters of the model were optimized by comparing on-arm images to off-arm images. The model was used to develop a method to remove the effect of backscatter from measured EPID images. The performance of the backscatter removal method was tested by comparing backscatter corrected on-arm images to measured off-arm images for 17 rectangular fields of different sizes and locations on the imager. The method was also tested using on- and off-arm images from 42 intensity modulated radiotherapy (IMRT) fields. Results: Images generated by the backscatter removal method gave consistently better agreement with off-arm images than images without backscatter correction. For the 17 rectangular fields studied, the root mean square difference of in-plane profiles compared to off-arm profiles was reduced from 1.19% (standard deviation 0.59%) on average without backscatter removal to 0.38% (standard deviation 0.18%) when using the backscatter removal method. When comparing to the off-arm images from the 42 IMRT fields, the mean γ and percentage of pixels with γ < 1 were improved by the backscatter removal method in all but one of the images studied. The mean γ value (1%, 1 mm) for the IMRT fields studied was reduced from 0.80 to 0.57 by using the backscatter removal method, while the mean γ pass rate was increased from 72.2% to 84.6%. Conclusions: A backscatter removal method has been developed to estimate the image acquired by the EPID without any arm backscatter from an image acquired in the presence of arm backscatter. The method has been shown to produce consistently reliable results for a wide range of field sizes and jaw configurations.
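
    A toy sketch of the additive, convolution-modeled backscatter idea; the Gaussian kernel shape and 5% scatter fraction are assumptions for illustration, not the published model parameters:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def remove_backscatter(measured, kernel, n_iter=5):
        """Fixed-point correction for backscatter modeled as
        measured = primary + primary (*) kernel, where (*) is convolution.
        Converges when the total scatter fraction is well below 1."""
        primary = measured.copy()
        for _ in range(n_iter):
            primary = measured - fftconvolve(primary, kernel, mode="same")
        return primary

    # Illustrative wide, low-amplitude Gaussian kernel for the arm scatter.
    y, x = np.mgrid[-50:51, -50:51]
    kernel = np.exp(-(x**2 + y**2) / (2 * 20.0**2))
    kernel *= 0.05 / kernel.sum()   # total scatter fraction ~5% (assumed)

    image = np.random.default_rng(2).uniform(0.5, 1.0, (256, 256))
    measured = image + fftconvolve(image, kernel, mode="same")
    corrected = remove_backscatter(measured, kernel)
    print(f"max residual error: {np.abs(corrected - image).max():.4f}")
    ```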

  9. A new comparison method for dew-point generators

    NASA Astrophysics Data System (ADS)

    Heinonen, Martti

    1999-12-01

    A new method for comparing dew-point generators was developed at the Centre for Metrology and Accreditation. In this method, the generators participating in a comparison are compared with a transportable saturator unit using a dew-point comparator. The method was tested by constructing a test apparatus and by comparing it with the MIKES primary dew-point generator several times in the dew-point temperature range from -40 to +75 °C. The expanded uncertainty (k = 2) of the apparatus was estimated to be between 0.05 and 0.07 °C, and the difference between the comparator system and the generator is well within these limits. In particular, all of the results obtained in the range below 0 °C are within ±0.03 °C. It is concluded that a new type of transfer standard, with characteristics well suited to dew-point comparisons, can be developed on the basis of the principles presented in this paper.
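
    A small sketch of how an expanded uncertainty with coverage factor k = 2 is typically assembled from repeated comparison differences; the readings and the type B term are illustrative assumptions:

    ```python
    import numpy as np

    # Differences (transportable saturator minus reference generator) from
    # repeated comparisons at one dew-point set point, in deg C (made up).
    diffs = np.array([0.012, -0.008, 0.021, 0.004, -0.015, 0.010])

    u_type_a = diffs.std(ddof=1) / np.sqrt(diffs.size)  # repeatability (type A)
    u_type_b = 0.025                                    # assumed systematic budget
    U = 2 * np.hypot(u_type_a, u_type_b)                # expanded uncertainty, k = 2
    print(f"mean difference {diffs.mean():+.3f} deg C, U(k=2) = {U:.3f} deg C")
    ```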

  10. Bacteriophage-based nanoprobes for rapid bacteria separation

    NASA Astrophysics Data System (ADS)

    Chen, Juhong; Duncan, Bradley; Wang, Ziyuan; Wang, Li-Sheng; Rotello, Vincent M.; Nugen, Sam R.

    2015-10-01

    The lack of practical methods for bacterial separation remains a hindrance to the low-cost and successful development of rapid detection methods for complex samples. Antibody-tagged magnetic particles are commonly used to pull analytes from a liquid sample. While this method is well-established, improvements in capture efficiency would increase the overall performance of the detection assay. Bacteriophages represent a low-cost and more consistent biorecognition element compared with antibodies. We have developed nanoscale bacteriophage-tagged magnetic probes, in which T7 bacteriophages were bound to magnetic nanoparticles. The nanoprobe allowed the specific recognition of and attachment to E. coli cells. The phage magnetic nanoprobes were directly compared with antibody-conjugated magnetic nanoprobes. The capture efficiencies of bacteriophages and antibodies on nanoparticles for the separation of E. coli K12 at varying concentrations were determined. The results indicated a similar bacterial capture efficiency for the two nanoprobes. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr03779d

  11. Comparative methods for the analysis of gene-expression evolution: an example using yeast functional genomic data.

    PubMed

    Oakley, Todd H; Gu, Zhenglong; Abouheif, Ehab; Patel, Nipam H; Li, Wen-Hsiung

    2005-01-01

    Understanding the evolution of gene function is a primary challenge of modern evolutionary biology. Despite an expanding database from genomic and developmental studies, we are lacking quantitative methods for analyzing the evolution of some important measures of gene function, such as gene-expression patterns. Here, we introduce phylogenetic comparative methods to compare different models of gene-expression evolution in a maximum-likelihood framework. We find that expression of duplicated genes has evolved according to a nonphylogenetic model, where closely related genes are no more likely than more distantly related genes to share common expression patterns. These results are consistent with previous studies that found rapid evolution of gene expression during the history of yeast. The comparative methods presented here are general enough to test a wide range of evolutionary hypotheses using genomic-scale data from any organism.

  12. Instrumental variable methods in comparative safety and effectiveness research†

    PubMed Central

    Brookhart, M. Alan; Rassen, Jeremy A.; Schneeweiss, Sebastian

    2010-01-01

    Summary: Instrumental variable (IV) methods have been proposed as a potential approach to the common problem of uncontrolled confounding in comparative studies of medical interventions, but IV methods are unfamiliar to many researchers. The goal of this article is to provide a non-technical, practical introduction to IV methods for comparative safety and effectiveness research. We outline the principles and basic assumptions necessary for valid IV estimation, discuss how to interpret the results of an IV study, provide a review of instruments that have been used in comparative effectiveness research, and suggest some minimal reporting standards for an IV analysis. Finally, we offer our perspective on the role of IV estimation vis-à-vis more traditional approaches based on statistical modeling of the exposure or outcome. We anticipate that IV methods will often be underpowered for drug safety studies of very rare outcomes, but may be potentially useful in studies of intended effects where uncontrolled confounding may be substantial. PMID:20354968
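
    A minimal two-stage least squares (2SLS) sketch on simulated data, showing how a valid instrument recovers a treatment effect that naive regression misses; the data-generating model is an illustrative assumption, not from the article:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 10_000

    # Simulated data with unmeasured confounding U and a valid instrument Z.
    U = rng.normal(size=n)                      # unmeasured confounder
    Z = rng.binomial(1, 0.5, size=n)            # instrument (e.g., preference)
    T = (0.8 * Z + U + rng.normal(size=n) > 0.5).astype(float)  # treatment
    Y = 1.0 * T + 2.0 * U + rng.normal(size=n)  # outcome; true effect = 1.0

    # Naive OLS of Y on T is biased because T is correlated with U.
    X = np.column_stack([np.ones(n), T])
    ols = np.linalg.lstsq(X, Y, rcond=None)[0]

    # Two-stage least squares: regress T on Z, then Y on the fitted T.
    W = np.column_stack([np.ones(n), Z])
    T_hat = W @ np.linalg.lstsq(W, T, rcond=None)[0]
    iv = np.linalg.lstsq(np.column_stack([np.ones(n), T_hat]), Y, rcond=None)[0]

    print(f"naive OLS effect: {ols[1]:.2f}   2SLS effect: {iv[1]:.2f} (true 1.0)")
    ```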

  13. Posture Detection Based on Smart Cushion for Wheelchair Users

    PubMed Central

    Ma, Congcong; Li, Wenfeng; Gravina, Raffaele; Fortino, Giancarlo

    2017-01-01

    The postures of wheelchair users can reveal their sitting habits and mood, and even predict health risks such as pressure ulcers or lower back pain. Mining the hidden information in the postures can reveal users' wellness and general health conditions. In this paper, a cushion-based posture recognition system is used to process pressure sensor signals for the detection of the user's posture in the wheelchair. The proposed posture detection method is composed of three main steps: data-level classification for posture detection, backward selection of the sensor configuration, and comparison of the recognition results with previous literature. Five supervised classification techniques are compared in terms of classification accuracy, precision, recall, and F-measure: Decision Tree (J48), Support Vector Machines (SVM), Multilayer Perceptron (MLP), Naive Bayes, and k-Nearest Neighbor (k-NN). Results indicate that the J48 classifier provides the highest accuracy compared to the other techniques. The backward selection method was used to determine the best sensor deployment configuration for the wheelchair. Several kinds of pressure sensor deployments are compared, and our new deployment is shown to better detect the postures of wheelchair users. The performance analysis also took into account the Body Mass Index (BMI), which is useful for evaluating the robustness of the method across individual physical differences. Results show that our proposed sensor deployment is effective, achieving 99.47% posture recognition accuracy. Our proposed method is very competitive for posture recognition and robust in comparison with prior research. Accurate posture detection represents a fundamental building block for several applications, including fatigue estimation and activity level assessment. PMID:28353684
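
    The five classifiers compared in the paper are all available in scikit-learn; a minimal cross-validation sketch, with synthetic features standing in for the (non-public) cushion pressure data:

    ```python
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    # Stand-in for the cushion pressure-sensor features and posture labels.
    X, y = make_classification(n_samples=600, n_features=12, n_informative=8,
                               n_classes=5, n_clusters_per_class=1, random_state=0)

    classifiers = {
        "J48 (decision tree)": DecisionTreeClassifier(random_state=0),
        "SVM": SVC(),
        "MLP": MLPClassifier(max_iter=2000, random_state=0),
        "Naive Bayes": GaussianNB(),
        "k-NN": KNeighborsClassifier(n_neighbors=5),
    }

    for name, clf in classifiers.items():
        scores = cross_val_score(clf, X, y, cv=10)  # 10-fold CV accuracy
        print(f"{name:20s} accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
    ```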

  14. Generalizing Observational Study Results: Applying Propensity Score Methods to Complex Surveys

    PubMed Central

    DuGoff, Eva H; Schuler, Megan; Stuart, Elizabeth A

    2014-01-01

    Objective: To provide a tutorial for using propensity score methods with complex survey data. Data Sources: Simulated data and the 2008 Medical Expenditure Panel Survey. Study Design: Using simulation, we compared the following methods for estimating the treatment effect: a naïve estimate (ignoring both survey weights and propensity scores), survey weighting, propensity score methods (nearest neighbor matching, weighting, and subclassification), and propensity score methods in combination with survey weighting. Methods are compared in terms of bias and 95 percent confidence interval coverage. In Example 2, we used these methods to estimate the effect on health care spending of having a generalist versus a specialist as a usual source of care. Principal Findings: In general, combining a propensity score method and survey weighting is necessary to achieve unbiased treatment effect estimates that are generalizable to the original survey target population. Conclusions: Propensity score methods are an essential tool for addressing confounding in observational studies. Ignoring survey weights may lead to results that are not generalizable to the survey target population. This paper clarifies the appropriate inferences for different propensity score methods and suggests guidelines for selecting an appropriate propensity score method based on a researcher's goal. PMID:23855598
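
    A minimal sketch of the paper's key recommendation, combining inverse-propensity weights with survey weights by multiplication; the simulated survey and the logistic propensity model are illustrative assumptions:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)
    n = 20_000

    # Simulated survey: X confounds treatment and outcome; w are survey weights.
    X = rng.normal(size=(n, 2))
    w = rng.uniform(0.5, 3.0, size=n)                     # survey weights (given)
    p_treat = 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))
    T = rng.binomial(1, p_treat)
    Y = 2.0 * T + X[:, 0] + X[:, 1] + rng.normal(size=n)  # true effect = 2.0

    # Propensity scores from a survey-weighted logistic regression.
    ps = LogisticRegression().fit(X, T, sample_weight=w).predict_proba(X)[:, 1]

    # Multiply inverse-propensity weights by the survey weights.
    ipw = np.where(T == 1, 1 / ps, 1 / (1 - ps)) * w
    effect = (np.average(Y[T == 1], weights=ipw[T == 1])
              - np.average(Y[T == 0], weights=ipw[T == 0]))
    print(f"weighted ATE estimate: {effect:.2f} (true 2.0)")
    ```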

  15. Generalizing observational study results: applying propensity score methods to complex surveys.

    PubMed

    Dugoff, Eva H; Schuler, Megan; Stuart, Elizabeth A

    2014-02-01

    To provide a tutorial for using propensity score methods with complex survey data. Simulated data and the 2008 Medical Expenditure Panel Survey. Using simulation, we compared the following methods for estimating the treatment effect: a naïve estimate (ignoring both survey weights and propensity scores), survey weighting, propensity score methods (nearest neighbor matching, weighting, and subclassification), and propensity score methods in combination with survey weighting. Methods are compared in terms of bias and 95 percent confidence interval coverage. In Example 2, we used these methods to estimate the effect on health care spending of having a generalist versus a specialist as a usual source of care. In general, combining a propensity score method and survey weighting is necessary to achieve unbiased treatment effect estimates that are generalizable to the original survey target population. Propensity score methods are an essential tool for addressing confounding in observational studies. Ignoring survey weights may lead to results that are not generalizable to the survey target population. This paper clarifies the appropriate inferences for different propensity score methods and suggests guidelines for selecting an appropriate propensity score method based on a researcher's goal. © Health Research and Educational Trust.

  16. Filter method without boundary-value condition for simultaneous calculation of eigenfunction and eigenvalue of a stationary Schrödinger equation on a grid.

    PubMed

    Nurhuda, M; Rouf, A

    2017-09-01

    The paper presents a method for the simultaneous computation of eigenfunctions and eigenvalues of the stationary Schrödinger equation on a grid, without imposing a boundary-value condition. The method is based on a filter operator, which selects the eigenfunction from a wave packet at a rate comparable to that of a δ function. The efficacy and reliability of the method are demonstrated by comparing the simulation results with analytical or numerical solutions obtained using other methods for various boundary-value conditions. It is found that the method is robust, accurate, and reliable. Further prospects of the filter method for simulating the Schrödinger equation in higher-dimensional spaces are also highlighted.
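
    A sketch of the filtering idea on a discretized 1D Hamiltonian, using a Gaussian filter exp(-τ(H - E)²) as a stand-in for the authors' filter operator; the specific operator, potential, and parameters here are assumptions for illustration:

    ```python
    import numpy as np
    from scipy.linalg import expm

    # 1D harmonic oscillator (hbar = m = omega = 1) discretized on a grid.
    N, L = 200, 20.0
    x = np.linspace(-L / 2, L / 2, N)
    dx = x[1] - x[0]
    lap = (np.diag(np.ones(N - 1), -1) - 2 * np.eye(N)
           + np.diag(np.ones(N - 1), 1)) / dx**2
    H = -0.5 * lap + np.diag(0.5 * x**2)

    # Repeated application of exp(-tau (H - E)^2) damps every eigencomponent
    # of the wave packet except those with energy near the target E.
    E_target, tau = 1.5, 0.5          # aim at the n = 1 level (exact E = 1.5)
    shift = H - E_target * np.eye(N)
    F = expm(-tau * shift @ shift)

    psi = np.random.default_rng(8).normal(size=N)   # random initial packet
    for _ in range(30):
        psi = F @ psi
        psi /= np.linalg.norm(psi)

    print(f"filtered eigenvalue (Rayleigh quotient): {psi @ H @ psi:.4f}")
    ```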

  17. The use of QSAR methods for determination of n-octanol/water partition coefficient using the example of hydroxyester HE-1

    NASA Astrophysics Data System (ADS)

    Guziałowska-Tic, Joanna

    2017-10-01

    According to the Directive of the European Parliament and of the Council concerning the protection of animals used for scientific purposes, the number of experiments involving the use of animals needs to be reduced. The methods that can replace animal testing include computational prediction methods, for instance, quantitative structure-activity relationships (QSAR). These methods are designed to find a cohesive relationship between differences in the values of the properties of molecules and the biological activity of a series of test compounds. This paper compares the author's own experimental results for the n-octanol/water partition coefficient of the hydroxyester HE-1 with those generated by three models: Kowwin, MlogP, and AlogP. The test results indicate that, in the case of molecular similarity, the highest determination coefficient was obtained for the MlogP model and the lowest root-mean-square error was obtained for the Kowwin method. When comparing the mean logP value obtained using the QSAR models with the value resulting from the author's own experiments, the best agreement was that of the AlogP model, with a relative error of 15.2%.

  18. Comparative Analysis of Various Single-tone Frequency Estimation Techniques in High-order Instantaneous Moments Based Phase Estimation Method

    NASA Astrophysics Data System (ADS)

    Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod

    2010-04-01

    For phase estimation in digital holographic interferometry, a high-order instantaneous moments (HIM) based method was recently developed which relies on piecewise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients using the HIM operator. A crucial step in the method is mapping the polynomial coefficient estimation to single-tone frequency determination for which various techniques exist. The paper presents a comparative analysis of the performance of the HIM operator based method in using different single-tone frequency estimation techniques for phase estimation. The analysis is supplemented by simulation results.
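
    Two standard single-tone frequency estimators of the kind such comparisons draw on, an FFT-magnitude peak and a lag-1 phase-difference estimator; these are generic techniques, not necessarily the exact set analyzed in the paper:

    ```python
    import numpy as np

    def fft_peak(z):
        """Coarse single-tone frequency from the FFT magnitude peak
        (cycles/sample); resolution limited to 1/len(z)."""
        k = int(np.argmax(np.abs(np.fft.fft(z))))
        return np.fft.fftfreq(z.size)[k]

    def phase_difference(z):
        """Mean phase-increment estimator: angle of the lag-1 autocorrelation."""
        return np.angle(np.sum(z[1:] * np.conj(z[:-1]))) / (2 * np.pi)

    rng = np.random.default_rng(5)
    f_true = 0.1234                     # cycles per sample
    n = np.arange(256)
    z = (np.exp(2j * np.pi * f_true * n)
         + 0.1 * (rng.normal(size=256) + 1j * rng.normal(size=256)))

    print(f"FFT peak:         {fft_peak(z):.5f}")
    print(f"phase difference: {phase_difference(z):.5f}  (true {f_true})")
    ```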

  19. Comparison of reproducibility of natural head position using two methods.

    PubMed

    Khan, Abdul Rahim; Rajesh, R N G; Dinesh, M R; Sanjay, N; Girish, K S; Venkataraghavan, Karthik

    2012-01-01

    Lateral cephalometric radiographs have become virtually indispensable to orthodontists in the treatment of patients. They are important in orthodontic growth analysis, diagnosis, treatment planning, monitoring of therapy and evaluation of the final treatment outcome. The purpose of this study was to evaluate and compare the maximum reproducibility, with minimum variation, of natural head position using two methods: the mirror method and the fluid level device method. The study included two sets of 40 lateral cephalograms taken using the two methods of obtaining natural head position, (1) the mirror method and (2) the fluid level device method, with a time interval of 2 months. Inclusion criteria: subjects were randomly selected, aged between 18 and 26 years. Exclusion criteria: history of orthodontic treatment; any history of respiratory tract problems or chronic mouth breathing; any congenital deformity; history of traumatically induced deformity; history of myofascial pain syndrome; any previous history of head and neck surgery. The results showed that the two methods for obtaining natural head position were comparable, but reproducibility was highest with the fluid level device, as shown by Dahlberg's coefficient and the Bland-Altman plot, and variance was lowest with the fluid level device method, as shown by precision and the Pearson correlation. In conclusion, the mirror method and the fluid level device method were comparable without any significant difference, and the fluid level device method was more reproducible and showed less variance than the mirror method for obtaining natural head position.
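
    The two agreement statistics used here are easy to reproduce; a short sketch of Dahlberg's method error and Bland-Altman limits of agreement on made-up repeated measurements:

    ```python
    import numpy as np

    def dahlberg(x1, x2):
        """Dahlberg's method error: sqrt(sum(d^2) / (2n)) for paired repeats."""
        d = np.asarray(x1) - np.asarray(x2)
        return np.sqrt(np.sum(d**2) / (2 * d.size))

    def bland_altman_limits(x1, x2):
        """Bias and 95% limits of agreement (mean difference +/- 1.96 SD)."""
        d = np.asarray(x1) - np.asarray(x2)
        bias, sd = d.mean(), d.std(ddof=1)
        return bias, bias - 1.96 * sd, bias + 1.96 * sd

    # Illustrative repeated head-position angles in degrees (made up).
    first  = np.array([93.1, 95.4, 91.8, 94.0, 92.5, 96.2, 90.9, 93.7])
    second = np.array([93.6, 94.9, 92.3, 93.5, 92.1, 96.8, 91.4, 93.2])

    print(f"Dahlberg error: {dahlberg(first, second):.2f} deg")
    print("Bland-Altman bias and LoA:", bland_altman_limits(first, second))
    ```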

  20. Spectral methods and their implementation to solution of aerodynamic and fluid mechanic problems

    NASA Technical Reports Server (NTRS)

    Streett, C. L.

    1987-01-01

    Fundamental concepts underlying spectral collocation methods, especially pertaining to their use in the solution of partial differential equations, are outlined. Theoretical accuracy results are reviewed and compared with results from test problems. A number of practical aspects of the construction and use of spectral methods are detailed, along with several solution schemes which have found utility in applications of spectral methods to practical problems. Results from a few of the successful applications of spectral methods to problems of aerodynamic and fluid mechanic interest are then outlined, followed by a discussion of the problem areas in spectral methods and the current research under way to overcome these difficulties.
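
    A classic spectral collocation example in the spirit of this review, solving u'' = e^{4x} with homogeneous Dirichlet conditions via a Chebyshev differentiation matrix (the standard Trefethen construction), which shows the characteristic spectral accuracy at small N:

    ```python
    import numpy as np

    def cheb(N):
        """Chebyshev differentiation matrix and points (Trefethen's recipe)."""
        if N == 0:
            return np.zeros((1, 1)), np.array([1.0])
        x = np.cos(np.pi * np.arange(N + 1) / N)
        c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
        X = np.tile(x, (N + 1, 1)).T
        dX = X - X.T
        D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
        D -= np.diag(D.sum(axis=1))   # diagonal from negative row sums
        return D, x

    # Solve u'' = exp(4x) on [-1, 1] with u(+/-1) = 0.
    N = 16
    D, x = cheb(N)
    D2 = (D @ D)[1:-1, 1:-1]                  # drop boundary rows/columns
    u = np.zeros(N + 1)
    u[1:-1] = np.linalg.solve(D2, np.exp(4 * x[1:-1]))

    exact = (np.exp(4 * x) - np.sinh(4) * x - np.cosh(4)) / 16
    print(f"max error with N={N}: {np.abs(u - exact).max():.2e}")
    ```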

  1. Evaluation of the pulse-contour method of determining stroke volume in man.

    NASA Technical Reports Server (NTRS)

    Alderman, E. L.; Branzi, A.; Sanders, W.; Brown, B. W.; Harrison, D. C.

    1972-01-01

    The pulse-contour method for determining stroke volume has been employed as a continuous rapid method of monitoring the cardiovascular status of patients. Twenty-one patients with ischemic heart disease and 21 patients with mitral valve disease were subjected to a variety of hemodynamic interventions. The pulse-contour estimations, using three different formulas derived by Warner, Kouchoukos, and Herd, were compared with indicator-dilution outputs. A comparison of the results of the two methods for determining stroke volume yielded correlation coefficients ranging from 0.59 to 0.84. The better performing Warner formula yielded a coefficient of variation of about 20%. The type of hemodynamic interventions employed did not significantly affect the results using the pulse-contour method. Although the correlation of the pulse-contour and indicator-dilution stroke volumes is high, the coefficient of variation is such that small changes in stroke volume cannot be accurately assessed by the pulse-contour method. However, the simplicity and rapidity of this method compared to determination of cardiac output by Fick or indicator-dilution methods makes it a potentially useful adjunct for monitoring critically ill patients.

  2. Microwave-assisted extraction of lipid from fish waste

    NASA Astrophysics Data System (ADS)

    Rahimi, M. A.; Omar, R.; Ethaib, S.; Siti Mazlina, M. K.; Awang Biak, D. R.; Nor Aisyah, R.

    2017-06-01

    Processing fish waste for extraction of value added products such as protein, lipid, gelatin, amino acids, collagen and oil has become one of the most intriguing researches due to its valuable properties. In this study the extraction of lipid from sardine fish waste was carried out using microwave-assisted extraction (MAE) and compared with Soxhlets and Hara and Radin methods. A mixture of two organic solvents isopropanol/hexane and distilled water were used for MAE and Hara and Radin methods. Meanwhile, Soxhlet method utilized only hexane as solvent. The results show that the higher yield of lipid 80.5 mg/g was achieved using distilled water in MAE method at 10 min extraction time. Soxhlet extraction method only produced 46.6 mg/g of lipid after 4 hours of extraction time. Lowest yield of lipid was found at 15.8 mg/g using Hara and Radin method. Based on aforementioned results, it can be concluded MAE method is superior compared to the Soxhlet and Hara and Radin methods which make it an attractive route to extract lipid from fish waste.

  3. Measuring the motor output of the pontomedullary reticular formation in the monkey: do stimulus-triggered averaging and stimulus trains produce comparable results in the upper limbs?

    PubMed

    Herbert, Wendy J; Davidson, Adam G; Buford, John A

    2010-06-01

    The pontomedullary reticular formation (PMRF) of the monkey produces motor outputs to both upper limbs. EMG effects evoked from stimulus-triggered averaging (StimulusTA) were compared with effects from stimulus trains to determine whether both stimulation methods produced comparable results. Flexor and extensor muscles of scapulothoracic, shoulder, elbow, and wrist joints were studied bilaterally in two male M. fascicularis monkeys trained to perform a bilateral reaching task. The frequency of facilitation versus suppression responses evoked in the muscles was compared between methods. Stimulus trains were more efficient (94% of PMRF sites) in producing responses than StimulusTA (55%), and stimulus trains evoked responses from more muscles per site than from StimulusTA. Facilitation (72%) was more common from stimulus trains than StimulusTA (39%). In the overall results, a bilateral reciprocal activation pattern of ipsilateral flexor and contralateral extensor facilitation was evident for StimulusTA and stimulus trains. When the comparison was restricted to cases where both methods produced a response in a given muscle from the same site, agreement was very high, at 80%. For the remaining 20%, discrepancies were accounted for mainly by facilitation from stimulus trains when StimulusTA produced suppression, which was in agreement with the under-representation of suppression in the stimulus train data as a whole. To the extent that the stimulus train method may favor transmission through polysynaptic pathways, these results suggest that polysynaptic pathways from the PMRF more often produce facilitation in muscles that would typically demonstrate suppression with StimulusTA.

  4. SEMI-VOLATILE SECONDARY AEROSOLS IN URBAN ATMOSPHERES: MEETING A MEASURED CHALLENGE

    EPA Science Inventory

    This presentation compares the results from various particle measurement methods as they relate to semi-volatile secondary aerosols in urban atmospheres. The methods include the PM2.5 Federal Reference Method; Particle Concentrator - BYU Organic Sampling System (PC-BOSS); the Re...

  5. 42 CFR 493.2 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... throughout the reportable range for patient test results. Challenge means, for quantitative tests, an... requirement. Target value for quantitative tests means either the mean of all participant responses after... scientific protocol, a comparative method or a method group (“peer” group) may be used. If the method group...

  6. 42 CFR 493.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... throughout the reportable range for patient test results. Challenge means, for quantitative tests, an... requirement. Target value for quantitative tests means either the mean of all participant responses after... scientific protocol, a comparative method or a method group (“peer” group) may be used. If the method group...

  7. 42 CFR 493.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... throughout the reportable range for patient test results. Challenge means, for quantitative tests, an... requirement. Target value for quantitative tests means either the mean of all participant responses after... scientific protocol, a comparative method or a method group (“peer” group) may be used. If the method group...

  8. 42 CFR 493.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... throughout the reportable range for patient test results. Challenge means, for quantitative tests, an... requirement. Target value for quantitative tests means either the mean of all participant responses after... scientific protocol, a comparative method or a method group (“peer” group) may be used. If the method group...

  9. 42 CFR 493.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... throughout the reportable range for patient test results. Challenge means, for quantitative tests, an... requirement. Target value for quantitative tests means either the mean of all participant responses after... scientific protocol, a comparative method or a method group (“peer” group) may be used. If the method group...

  10. A Hybrid On-line Verification Method of Relay Setting

    NASA Astrophysics Data System (ADS)

    Gao, Wangyuan; Chen, Qing; Si, Ji; Huang, Xin

    2017-05-01

    Along with the rapid development of the power industry, grid structures are becoming more sophisticated. The validity and rationality of protective relay settings are vital to the security of power systems, so it is essential to verify relay setting values online. Traditional verification methods mainly include comparison of the protection range and comparison of the calculated setting value. For on-line verification, verification speed is the key. Comparing the protection range gives accurate results, but the computational burden is heavy and verification is slow; comparing the calculated setting value is much faster, but the result is conservative and inaccurate. Taking overcurrent protection as an example, this paper analyses the advantages and disadvantages of the two traditional methods and proposes a hybrid method of on-line verification that synthesizes their advantages. The hybrid method can meet the requirements of accurate on-line verification.
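
    One plausible reading of the hybrid scheme expressed as control flow, with a fast conservative screen against the calculated setting value and a fallback to the expensive protection-range computation; the functions, thresholds, and numbers are invented for illustration and are not from the paper:

    ```python
    def range_check(setting, protection_range):
        """Accurate but expensive check: the setting must lie inside the
        computed protection range."""
        lo, hi = protection_range
        return lo <= setting <= hi

    def setting_value_check(setting, calculated, tolerance=0.05):
        """Fast conservative screen: accept only settings close to the
        calculated setting value."""
        return abs(setting - calculated) / calculated <= tolerance

    def hybrid_verify(setting, calculated, compute_range, tolerance=0.05):
        """Accept clear passes from the fast screen; run the slow protection
        range computation only for borderline settings."""
        if setting_value_check(setting, calculated, tolerance):
            return True
        return range_check(setting, compute_range())

    # Illustrative overcurrent relay setting in amperes (made-up values).
    print(hybrid_verify(setting=480.0, calculated=500.0,
                        compute_range=lambda: (450.0, 620.0)))
    ```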

  11. Design of k-Space Channel Combination Kernels and Integration with Parallel Imaging

    PubMed Central

    Beatty, Philip J.; Chang, Shaorong; Holmes, James H.; Wang, Kang; Brau, Anja C. S.; Reeder, Scott B.; Brittain, Jean H.

    2014-01-01

    Purpose In this work, a new method is described for producing local k-space channel combination kernels using a small amount of low-resolution multichannel calibration data. Additionally, this work describes how these channel combination kernels can be combined with local k-space unaliasing kernels produced by the calibration phase of parallel imaging methods such as GRAPPA, PARS and ARC. Methods Experiments were conducted to evaluate both the image quality and computational efficiency of the proposed method compared to a channel-by-channel parallel imaging approach with image-space sum-of-squares channel combination. Results Results indicate comparable image quality overall, with some very minor differences seen in reduced field-of-view imaging. It was demonstrated that this method enables a speed up in computation time on the order of 3–16X for 32-channel data sets. Conclusion The proposed method enables high quality channel combination to occur earlier in the reconstruction pipeline, reducing computational and memory requirements for image reconstruction. PMID:23943602

  12. Reproducibility of techniques using Archimedes' principle in measuring cancellous bone volume.

    PubMed

    Zou, L; Bloebaum, R D; Bachus, K N

    1997-01-01

    Researchers have been interested in developing techniques to accurately and reproducibly measure the volume fraction of cancellous bone. Historically, bone researchers have used Archimedes' principle with water to measure the volume fraction of cancellous bone. Preliminary results in our lab suggested that the calibrated water technique did not provide reproducible results. Because of this difficulty, it was decided to compare the conventional water method with a water-with-surfactant method and a helium method using a micropycnometer. The water/surfactant and helium methods were attempts to improve fluid penetration into the small voids present in the cancellous bone structure. In order to compare the reproducibility of the new methods with the conventional water method, 16 cancellous bone specimens were obtained from the femoral condyles of human and greyhound dog femora. The volume fraction measurements on each specimen were repeated three times with all three techniques. The results showed that the helium displacement method was more than an order of magnitude more reproducible than the two water methods (p < 0.05). Statistical analysis also showed that the conventional water method produced the lowest reproducibility (p < 0.05). The data from this study indicate that the helium displacement technique is a very useful, rapid and reproducible tool for quantitatively characterizing anisotropic porous tissue structures such as cancellous bone.
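
    The underlying volume measurement is the same whichever displacement fluid is used; a minimal sketch of Archimedes' principle applied to a specimen, with made-up readings:

    ```python
    def archimedes_volume(dry_weight_g, submerged_weight_g, fluid_density_g_cm3):
        """Specimen volume from Archimedes' principle: the apparent weight
        loss in the fluid equals the weight of the fluid displaced."""
        return (dry_weight_g - submerged_weight_g) / fluid_density_g_cm3

    # Illustrative cancellous bone specimen (invented readings).
    bone_volume = archimedes_volume(1.00, 0.70, 0.997)  # water at ~25 C
    bulk_volume = 1.20                                  # envelope volume, cm^3
    print(f"bone volume fraction: {bone_volume / bulk_volume:.2f}")
    ```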

  13. Maritime Search and Rescue via Multiple Coordinated UAS

    DTIC Science & Technology

    2017-06-12

    performed by a set of UAS. Our investigation covers the detection of multiple mobile objects by a heterogeneous collection of UAS. Three methods (two...account for contingencies such as airspace deconfliction. Results are produced using simulation to verify the capability of the proposed method and to...compare the various partitioning methods. Results from this simulation show that great gains in search efficiency can be made when the search space is

  14. Information-theoretic indices usage for the prediction and calculation of octanol-water partition coefficient.

    PubMed

    Persona, Marek; Kutarov, Vladimir V; Kats, Boris M; Persona, Andrzej; Marczewska, Barbara

    2007-01-01

    The paper describes a new method for predicting the octanol-water partition coefficient, based on molecular graph theory. The results obtained using the new method correlate well with experimental values. These results were compared with the ones obtained by ten other structure-correlation methods. The comparison shows that graph theory can be very useful in structure-correlation research.

  15. High sensitivity phase retrieval method in grating-based x-ray phase contrast imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Zhao; Gao, Kun; Chen, Jian

    2015-02-15

    Purpose: Grating-based x-ray phase contrast imaging is considered one of the most promising techniques for future medical imaging. Many different methods have been developed to retrieve the phase signal, among which the phase stepping (PS) method is widely used. However, further practical implementations are hindered due to its complex scanning mode and high radiation dose. In contrast, the reverse projection (RP) method is a novel fast and low-dose extraction approach. In this contribution, the authors present a quantitative analysis of the noise properties of the refraction signals retrieved by the two methods and compare their sensitivities. Methods: Using the error propagation formula, the authors analyze theoretically the signal-to-noise ratios (SNRs) of the refraction images retrieved by the two methods. Then, the sensitivities of the two extraction methods are compared under an identical exposure dose. Numerical experiments are performed to validate the theoretical results and provide some quantitative insight. Results: The SNRs of the two methods are both dependent on the system parameters, but in different ways. Comparison between their sensitivities reveals that for the refraction signal, the RP method possesses a higher sensitivity, especially in the case of high visibility and/or at the edge of the object. Conclusions: Compared with the PS method, the RP method has a superior sensitivity and provides refraction images with a higher SNR. Therefore, one can obtain highly sensitive refraction images in grating-based phase contrast imaging. This is very important for future preclinical and clinical implementations.
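
    A sketch of the retrieval that the PS method is built on: the stepping curve is sampled over one period and its first Fourier component yields the mean intensity, visibility, and refraction phase (an illustration of the general PS principle, not the authors' analysis code):

    ```python
    import numpy as np

    def phase_stepping_retrieval(steps):
        """Retrieve mean, visibility, and phase from a phase-stepping scan
        I_k = a0 * (1 + V * cos(phi + 2*pi*k/N)), k = 0..N-1, via the
        zeroth and first Fourier components over the steps."""
        N = steps.shape[0]
        f = np.fft.fft(steps, axis=0)
        a0 = f[0].real / N
        a1 = 2 * np.abs(f[1]) / N
        phi = np.angle(f[1])
        return a0, a1 / a0, phi

    # Simulate an 8-step scan for one detector pixel (illustrative values).
    N, phi_true = 8, 0.3
    k = np.arange(N)
    steps = 100 * (1 + 0.25 * np.cos(phi_true + 2 * np.pi * k / N))
    a0, vis, phi = phase_stepping_retrieval(steps)
    print(f"mean {a0:.1f}, visibility {vis:.2f}, phase {phi:.3f} (true {phi_true})")
    ```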

  16. KRAS mutation testing in colorectal cancer: comparison of the results obtained using 3 different methods for the analysis of codons G12 and G13.

    PubMed

    Bihl, Michel P; Hoeller, Sylvia; Andreozzi, Maria Carla; Foerster, Anja; Rufle, Alexander; Tornillo, Luigi; Terracciano, Luigi

    2012-03-01

    Targeting the epidermal growth factor receptor (EGFR) is a new therapeutic option for patients with metastatic colorectal or lung carcinoma. However, the efficiency of the therapy depends highly on the KRAS mutation status of the given tumour, so reliable KRAS mutation testing is crucial. Here we investigated 100 colorectal carcinoma samples with known KRAS mutation status (62 mutated cases and 38 wild-type cases) in a comparative manner with three different KRAS mutation testing techniques (Pyrosequencing, Dideoxysequencing and INFINITI) in order to test their reliability and sensitivity. For the large majority of samples (96/100, 96%), the KRAS mutation status obtained by all three methods was the same. Only two cases with clear discrepancies were observed. One case was reported as wild type by the INFINITI method, while the two other methods detected a G13C mutation. In the second case, the mutation could be detected by the Pyrosequencing and INFINITI methods (15% and 15%), while no mutation signal could be observed with the Dideoxysequencing method. Two additional unclear results were due to, respectively, the detection of a G12V mutation with the INFINITI method that was below the cut-off when repeated and was not detectable by the other two methods, and very weak signals in a G12V-mutated case with the Dideoxy- and Pyrosequencing methods compared with the INFINITI method. In summary, all three methods are reliable and robust for detecting KRAS mutations. INFINITI, however, seems to be slightly more sensitive than Dideoxy- and Pyrosequencing.

  17. Comparative study on diagonal equivalent methods of masonry infill panel

    NASA Astrophysics Data System (ADS)

    Amalia, Aniendhita Rizki; Iranata, Data

    2017-06-01

    Infrastructure construction in earthquake-prone areas requires a sound design process, including modeling structures correctly to reduce earthquake damage such as building collapse. Incorrect modeling in the design process affects a structure's ability to respond to loads such as earthquakes, so modeling must account for every aspect that affects the strength of a building, including the stiffness available to resist lateral earthquake loads. Most structural analyses still use the open-frame method, which neglects the contribution of masonry infill panels to the stiffness and strength of the whole structure. The effect of the masonry panel is usually excluded from the design process, yet its presence greatly affects the behavior of the building during an earthquake; in the worst case its omission can even lead to collapse, as has been reported after great earthquakes worldwide. A structure with masonry panels can be modeled by treating the panel as a compression brace or as a shell element. For modeling the panel as a compression brace, fourteen methods are in common use among structural designers, formulated by Saneinejad-Hobbs, Holmes, Stafford-Smith, Mainstone, Mainstone-Weeks, Bazan-Meli, Liauw-Kwan, Paulay and Priestley, FEMA 356, Durani-Luo, Hendry, Al-Chaar, Papia, and Chen-Iranata. Each method has its own equation and parameters, so the model produced by each method was compared with experimental results to see which gives the closest values; the methods were also compared with the open frame to check that they fall within acceptable limits. The experimental reference was Mehrabi's test (Fig. 1) of a half-scale prototype frame with a height-to-width ratio of 1:1.5, loaded according to the Uniform Building Code (UBC) 1991. For each method, the equivalent diagonal strut width was first calculated; each model was then built in structural analysis software as a frame with a diagonal brace in linear mode, chosen because linear analysis is what structural designers commonly use. Each model was loaded, and its load-deformation values were recorded and compared with those of Mehrabi's experimental specimen and the open frame. From the comparative study performed, Holmes' and Bazan-Meli's equations gave results closest to the experimental test specimen. Other equations that gave close values within the limits (by comparison with the open frame) are Saneinejad-Hobbs, Stafford-Smith, Bazan-Meli, Liauw-Kwan, Paulay and Priestley, FEMA 356, Durani-Luo, Hendry, Papia, and Chen-Iranata.
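
    As one concrete example of the equations being compared, the FEMA 356 form of Mainstone's equivalent strut width can be written in a few lines; the frame and infill properties below are rough, invented stand-ins for a half-scale specimen, not Mehrabi's actual values:

    ```python
    import numpy as np

    def fema356_strut_width(E_frame, I_col, h_col, E_infill, t_inf, h_inf, L_inf):
        """Equivalent diagonal strut width a = 0.175 (lambda1 h_col)^-0.4 r_inf
        (FEMA 356 form of the Mainstone equation); consistent SI units assumed."""
        theta = np.arctan(h_inf / L_inf)          # strut inclination
        r_inf = np.hypot(h_inf, L_inf)            # infill diagonal length
        lam1 = (E_infill * t_inf * np.sin(2 * theta)
                / (4 * E_frame * I_col * h_inf)) ** 0.25
        return 0.175 * (lam1 * h_col) ** -0.4 * r_inf

    # Illustrative half-scale frame (assumed properties, SI units).
    a = fema356_strut_width(E_frame=24e9, I_col=4.3e-4, h_col=1.54,
                            E_infill=7e9, t_inf=0.092, h_inf=1.42, L_inf=2.13)
    print(f"equivalent strut width: {a:.3f} m")
    ```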

  18. Efficient flow injection and sequential injection methods for spectrophotometric determination of oxybenzone in sunscreens based on reaction with Ni(II).

    PubMed

    Chisvert, A; Salvador, A; Pascual-Martí, M C; March, J G

    2001-04-01

    Spectrophotometric determination of a widely used UV-filter, such as oxybenzone, is proposed. The method is based on the complexation reaction between oxybenzone and Ni(II) in ammoniacal medium. The stoichiometry of the reaction, established by the Job method, was 1:1. Reaction conditions were studied and the experimental parameters were optimized, for both flow injection (FI) and sequential injection (SI) determinations, with comparative purposes. Sunscreen formulations containing oxybenzone were analyzed by the proposed methods and results compared with those obtained by HPLC. Data show that both FI and SI procedures provide accurate and precise results. The ruggedness, sensitivity and LOD are adequate to the analysis requirements. The sample frequency obtained by FI is three-fold higher than that of SI analysis. SI is less reagent-consuming than FI.

  19. Application of the three-dimensional aperiodic Fourier modal method using arc elements in curvilinear coordinates.

    PubMed

    Bucci, Davide; Martin, Bruno; Morand, Alain

    2012-03-01

    This paper deals with a full vectorial generalization of the aperiodic Fourier modal method (AFMM) in cylindrical coordinates. The goal is to predict key characteristics such as the bending losses of waveguides having an arbitrary distribution of the transverse refractive index. After a description of the method, we compare the results of the cylindrical-coordinates AFMM with simulations by the finite-difference time-domain (FDTD) method performed on an S-bend structure made of a 500 nm × 200 nm silicon core (n=3.48) in silica (n=1.44) at a wavelength λ=1550 nm, with the bending radius varying from 0.5 up to 2 μm. The FDTD and AFMM results show differences comparable to the variations obtained by changing the parameters of the FDTD simulations.

  20. An economical method of analyzing transient motion of gas-lubricated rotor-bearing systems.

    NASA Technical Reports Server (NTRS)

    Falkenhagen, G. L.; Ayers, A. L.; Barsalou, L. C.

    1973-01-01

    A method of economically evaluating the hydrodynamic forces generated in a gas-lubricated tilting-pad bearing is presented. The numerical method consists of solving the case of the infinite-width bearing and then converting this solution to the case of the finite bearing by accounting for end leakage. The approximate method is compared to the finite-difference solution of the Reynolds equation and yields acceptable accuracy while running about one hundred times faster. A mathematical model of a gas-lubricated tilting-pad vertical rotor system is developed. The model is capable of analyzing a two-bearing rotor system in which the rotor center of mass is not at midspan by accounting for gyroscopic moments. The numerical results from the model are compared to actual test data as well as to the analytical results of other investigators.

  1. Comparative performance evaluation of automated segmentation methods of hippocampus from magnetic resonance images of temporal lobe epilepsy patients

    PubMed Central

    Hosseini, Mohammad-Parsa; Nazem-Zadeh, Mohammad-Reza; Pompili, Dario; Jafari-Khouzani, Kourosh; Elisevich, Kost; Soltanian-Zadeh, Hamid

    2016-01-01

    Purpose: Segmentation of the hippocampus from magnetic resonance (MR) images is a key task in the evaluation of mesial temporal lobe epilepsy (mTLE) patients. Several automated algorithms have been proposed although manual segmentation remains the benchmark. Choosing a reliable algorithm is problematic since structural definition pertaining to multiple edges, missing and fuzzy boundaries, and shape changes varies among mTLE subjects. Lack of statistical references and guidance for quantifying the reliability and reproducibility of automated techniques has further detracted from automated approaches. The purpose of this study was to develop a systematic and statistical approach using a large dataset for the evaluation of automated methods and establish a method that would achieve results better approximating those attained by manual tracing in the epileptogenic hippocampus. Methods: A template database of 195 (81 males, 114 females; age range 32–67 yr, mean 49.16 yr) MR images of mTLE patients was used in this study. Hippocampal segmentation was accomplished manually and by two well-known tools (FreeSurfer and hammer) and two previously published methods developed at their institution [Automatic brain structure segmentation (ABSS) and LocalInfo]. To establish which method was better performing for mTLE cases, several voxel-based, distance-based, and volume-based performance metrics were considered. Statistical validations of the results using automated techniques were compared with the results of benchmark manual segmentation. Extracted metrics were analyzed to find the method that provided a more similar result relative to the benchmark. Results: Among the four automated methods, ABSS generated the most accurate results. For this method, the Dice coefficient was 5.13%, 14.10%, and 16.67% higher, Hausdorff was 22.65%, 86.73%, and 69.58% lower, precision was 4.94%, −4.94%, and 12.35% higher, and the root mean square (RMS) was 19.05%, 61.90%, and 65.08% lower than LocalInfo, FreeSurfer, and hammer, respectively. The Bland–Altman similarity analysis revealed a low bias for the ABSS and LocalInfo techniques compared to the others. Conclusions: The ABSS method for automated hippocampal segmentation outperformed other methods, best approximating what could be achieved by manual tracing. This study also shows that four categories of input data can cause automated segmentation methods to fail. They include incomplete studies, artifact, low signal-to-noise ratio, and inhomogeneity. Different scanner platforms and pulse sequences were considered as means by which to improve reliability of the automated methods. Other modifications were specially devised to enhance a particular method assessed in this study. PMID:26745947
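
    The voxel-based and distance-based metrics used in this kind of evaluation are straightforward to compute; a sketch of the Dice coefficient and symmetric Hausdorff distance on toy 3D masks standing in for manual and automated hippocampus labels:

    ```python
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def dice(a, b):
        """Dice overlap between two binary masks."""
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def hausdorff(a, b):
        """Symmetric Hausdorff distance between the voxel coordinates of two masks."""
        pa, pb = np.argwhere(a), np.argwhere(b)
        return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

    # Toy 3D masks (invented shapes, not real segmentations).
    manual = np.zeros((32, 32, 32), bool)
    manual[10:20, 12:22, 8:18] = True
    auto = np.zeros_like(manual)
    auto[11:21, 12:22, 9:19] = True

    print(f"Dice: {dice(manual, auto):.3f}, "
          f"Hausdorff: {hausdorff(manual, auto):.2f} voxels")
    ```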

  2. Non destructive testing of works of art by terahertz analysis

    NASA Astrophysics Data System (ADS)

    Bodnar, Jean-Luc; Metayer, Jean-Jacques; Mouhoubi, Kamel; Detalle, Vincent

    2013-11-01

    Improvements in technology and the growing security needs of airport terminals have led to the development of non-destructive testing devices using terahertz waves. These waves have the advantage of being relatively penetrating while not being ionizing, and are thus a potentially interesting contribution to the non-destructive testing field. With the help of the VISIOM Company, the possibilities of this new industrial analysis method for assisting the restoration of works of art were explored. The results obtained within this framework are presented here and compared with those obtained by infrared thermography. The results show, first, that the THz method, like stimulated infrared thermography, allows the detection of delamination located in mural paintings or in marquetry. They then show that the THz method seems able to detect defects located relatively deep (10 mm) and defects potentially concealed by other defects; this is an advantage over stimulated infrared thermography, which cannot obtain these results. Furthermore, the THz method does not seem sensitive to the various pigments constituting the pictorial layer, to the presence of a layer of "Japan paper", or to the presence of a layer of whitewash; stimulated infrared thermography is sensitive to these, which is another advantage of the THz method. Finally, the THz method is limited in the detection of small defects, which is a disadvantage compared with stimulated infrared thermography.

  3. Comparison of Control Group Generating Methods.

    PubMed

    Szekér, Szabolcs; Fogarassy, György; Vathy-Fogarassy, Ágnes

    2017-01-01

    Retrospective studies suffer from drawbacks such as selection bias. As the selection of the control group has a significant impact on the evaluation of the results, it is very important to find the proper method to generate the most appropriate control group. In this paper we suggest two nearest-neighbor-based control group selection methods that aim to achieve good matching between the individuals of the case and control groups. The effectiveness of the proposed methods is evaluated by runtime and accuracy tests, and the results are compared to the classical stratified sampling method.
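
    A minimal sketch of nearest-neighbor control selection on standardized covariates using scikit-learn; the covariates and group sizes are illustrative assumptions, and the paper's two specific variants are not reproduced here:

    ```python
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(6)

    # Covariates (e.g., age, BMI) for cases and a larger candidate-control pool.
    cases = rng.normal(loc=[60, 28], scale=[8, 4], size=(100, 2))
    pool = rng.normal(loc=[50, 26], scale=[12, 5], size=(2000, 2))

    # Standardize with pool statistics so no covariate dominates the distance.
    mu, sd = pool.mean(axis=0), pool.std(axis=0)
    nn = NearestNeighbors(n_neighbors=1).fit((pool - mu) / sd)
    dist, idx = nn.kneighbors((cases - mu) / sd)

    controls = pool[idx.ravel()]     # 1:1 matched controls (with replacement)
    print(f"mean matching distance: {dist.mean():.3f}")
    print("case means:", cases.mean(axis=0).round(1),
          "control means:", controls.mean(axis=0).round(1))
    ```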

  4. The application of the pilot points in groundwater numerical inversion model

    NASA Astrophysics Data System (ADS)

    Hu, Bin; Teng, Yanguo; Cheng, Lirong

    2015-04-01

    Numerical inversion has been widely applied in groundwater simulation. Compared with traditional forward modeling, inversion modeling leaves more room for study. Zonation and cell-by-cell inversion are the conventional approaches; the pilot-point method lies between them. Traditional inverse modeling often uses software to divide the model into several zones, with only a few parameters to be inverted; however, the resulting distribution is usually too simple, and the simulation deviates from reality. Cell-by-cell inversion in theory recovers the most realistic parameter distribution, but it requires great computational effort and a large quantity of survey data for the geostatistical simulation area. In contrast, the pilot-point method distributes a set of points throughout the model domains for parameter estimation, and property values are assigned to model cells by Kriging, preserving parameter heterogeneity within geological units. It reduces the geostatistical data requirements of the simulation area and bridges the gap between the two conventional methods. Pilot points not only save computation time and improve the fit, but also reduce the numerical instability caused by large numbers of parameters, among other advantages. In this paper, we use pilot points in a field whose structural heterogeneity and hydraulic parameters were unknown, and we compare the inversion results of the zonation and pilot-point methods to explore the characteristics of pilot points in groundwater inversion modeling. First, an initial spatially correlated field is generated from a geostatistical model based on the description of the case site, using the software Groundwater Vistas 6. Second, Kriging is used to obtain the values of the field functions (hydraulic conductivity) over the model domain from their values at measurement and pilot-point locations; pilot points are assigned to the interpolated field, which has been divided into 4 zones, and a range of disturbance values is added to the inversion targets to calculate hydraulic conductivity. Third, the inversion calculation (PEST) adjusts the interpolated field to minimize an objective function measuring the misfit between calculated and measured data; this is an optimization problem of finding the optimum parameter values. After the inversion modeling, the following major conclusions can be drawn: (1) In a field with heterogeneous structure, the results of the pilot-point method are more realistic: the parameters fit better and the numerical simulation is more stable (stable residual distribution). Compared with zonation, it better reflects the heterogeneity of the study field. (2) The pilot-point method ensures that each parameter is sensitive and not entirely dependent on the other parameters, which guarantees the relative independence and authenticity of the parameter-estimation results. However, it costs more computation time than zonation. Key words: groundwater; pilot point; inverse model; heterogeneity; hydraulic conductivity

  5. A Modified Kirchhoff plate theory for Free Vibration analysis of functionally graded material plates using meshfree method

    NASA Astrophysics Data System (ADS)

    Nguyen Van Do, Vuong

    2018-04-01

    In this paper, a modified Kirchhoff theory is presented for free vibration analysis of functionally graded material (FGM) plates, based on a modified radial point interpolation method (RPIM). Shear deformation effects are taken into account in the modified theory to avoid the locking phenomenon of thin plates. Owing to the proposed refined plate theory, the number of independent unknowns is reduced by one, leaving four degrees of freedom per node. The free vibration results computed by the modified RPIM are compared with other analytical solutions to verify the effectiveness and accuracy of the developed mesh-free method. Detailed parametric studies of the proposed method are then conducted, including the effects of thickness ratio, boundary condition, and material inhomogeneity, on sample problems of square plates. The results illustrate that the modified mesh-free RPIM predictions agree well with the exact solutions, and the numerical results indicate that the proposed method is stable and accurate in comparison with other published analyses.

  6. Simple adaptive sparse representation based classification schemes for EEG based brain-computer interface applications.

    PubMed

    Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No

    2015-11-01

    One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals. This results in the deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method by using the incoherence measure of the training data are investigated. The proposed methods are very simple and additional computation for the re-training of the classifier is not needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy as compared to conventional methods without requiring additional computation. Copyright © 2015 Elsevier Ltd. All rights reserved.
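
    A minimal sparse representation classification (SRC) sketch, coding a test trial over a dictionary of training trials with orthogonal matching pursuit and assigning the class with the smallest reconstruction residual; the toy features stand in for real EEG data, and the paper's adaptive dictionary updates are not included:

    ```python
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def src_predict(D, labels, x, n_nonzero=10):
        """SRC: sparsely code x over the training dictionary D (columns =
        training trials), then assign the class whose atoms reconstruct x
        with the smallest residual."""
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                        fit_intercept=False).fit(D, x)
        coef = omp.coef_
        residuals = {}
        for c in np.unique(labels):
            part = np.where(labels == c, coef, 0.0)  # keep class-c coefficients
            residuals[c] = np.linalg.norm(x - D @ part)
        return min(residuals, key=residuals.get)

    # Toy two-class "EEG feature" data: 40 training trials of 64 features.
    rng = np.random.default_rng(7)
    proto = rng.normal(size=(2, 64))
    labels = np.repeat([0, 1], 20)
    D = (proto[labels] + 0.3 * rng.normal(size=(40, 64))).T  # 64 x 40 dictionary
    D /= np.linalg.norm(D, axis=0)                           # unit-norm atoms

    x = proto[1] + 0.3 * rng.normal(size=64)
    print("predicted class:", src_predict(D, labels, x))
    ```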

  7. The Effect of Laminar Flow on Rotor Hover Performance

    NASA Technical Reports Server (NTRS)

    Overmeyer, Austin D.; Martin, Preston B.

    2017-01-01

    The topic of laminar flow effects on hover performance is introduced with respect to some historical efforts where laminar flow was either measured or attempted. An analysis method is outlined using a combined blade-element/momentum method coupled to an airfoil analysis method that includes the full e^N transition model. The analysis results compared well with the measured hover performance, including the measured location of transition on both the upper and lower blade surfaces. The analysis method is then used to understand the upper limits of hover efficiency as a function of disk loading. The impact of laminar flow is higher at low disk loading, but significant improvement in terms of power loading appears possible even up to high disk loadings approaching 20 psf. An optimum planform design equation is derived for cases of zero profile drag and finite drag levels. These results are intended to be a guide for design studies and a benchmark against which to compare higher fidelity analysis results. The details of the analysis method are given to enable other researchers to use the same approach for comparison with other approaches.

  8. The Mixed Finite Element Multigrid Method for Stokes Equations

    PubMed Central

    Muzhinji, K.; Shateyi, S.; Motsa, S. S.

    2015-01-01

    The stable finite element discretization of the Stokes problem produces a symmetric indefinite system of linear algebraic equations. A variety of iterative solvers have been proposed for such systems in an attempt to construct efficient, fast, and robust solution techniques. This paper investigates one such iterative solver, the geometric multigrid solver, to find the approximate solution of the indefinite systems. The main ingredient of the multigrid method is the choice of an appropriate smoothing strategy. This study considers the application of different smoothers and compares their effects on the overall performance of the multigrid solver. We study the multigrid method with the following smoothers: distributed Gauss-Seidel, inexact Uzawa, preconditioned MINRES, and Braess-Sarazin type smoothers. A comparative study of the smoothers shows that the Braess-Sarazin smoothers give the best performance of the multigrid method. We study the problem in a two-dimensional domain using the stable Hood-Taylor Q2-Q1 pair of rectangular finite elements. We also give the main theoretical convergence results. We present the numerical results to demonstrate the efficiency and robustness of the multigrid method and confirm the theoretical results. PMID:25945361

  9. [Impacts of collaborative teaching method on the teaching achievement of Acupuncture and Moxibustion].

    PubMed

    Tian, Haomei; Shen, Jing; Shi, Jia; Liu, Mi; Wang, Chao; Liu, Jinzhi; Chen, Chutao

    2016-11-12

    To explore the impact of a collaborative teaching method on teaching achievement in Acupuncture and Moxibustion. Six classes of the 2012-grade Chinese medicine department at Hunan University of CM were randomized into an observation group and a control group, 3 classes in each. In the observation group, the collaborative teaching method was adopted, in which different teaching modes were used according to the characteristics of each chapter and the students' initiative in learning was given priority. In the control group, the traditional teaching method was used, in which classroom teaching was primary and practice secondary in the section on techniques of acupuncture and moxibustion. The results of each curriculum component and the total results were compared between the two groups over the whole semester. Compared with the control group, the total curriculum achievement and the case-analysis component, together with the total result of the theory examination, were significantly improved in the observation group (both P<0.01). The collaborative teaching method improves the comprehensive ability of students and provides a new approach to the teaching of Acupuncture and Moxibustion.

  10. Smartphone assessment of knee flexion compared to radiographic standards.

    PubMed

    Dietz, Matthew J; Sprando, Daniel; Hanselman, Andrew E; Regier, Michael D; Frye, Benjamin M

    2017-03-01

    Measuring knee range of motion (ROM) is an important assessment for the outcomes of total knee arthroplasty. Recent technological advances have led to the development and use of accelerometer-based smartphone applications to measure knee ROM. The purpose of this study was to develop, standardize, and validate methods of utilizing smartphone accelerometer technology compared to radiographic standards, visual estimation, and goniometric evaluation. Participants used visual estimation, a long-arm goniometer, and a smartphone accelerometer to determine range of motion of a cadaveric lower extremity; these results were compared to radiographs taken at the same angles. The optimal smartphone position was determined to be on top of the leg at the distal femur and proximal tibia location. Between methods, it was found that the smartphone and goniometer were comparably reliable in measuring knee flexion (ICC=0.94; 95% CI: 0.91-0.96). Visual estimation was found to be the least reliable method of measurement. The results suggested that the smartphone accelerometer was non-inferior when compared to the other measurement techniques, demonstrated similar deviations from radiographic standards, and did not appear to be influenced by the person performing the measurements or the girth of the extremity. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Localizing ECoG electrodes on the cortical anatomy without post-implantation imaging

    PubMed Central

    Gupta, Disha; Hill, N. Jeremy; Adamo, Matthew A.; Ritaccio, Anthony; Schalk, Gerwin

    2014-01-01

    Introduction Electrocorticographic (ECoG) grids are placed subdurally on the cortex in people undergoing cortical resection to delineate eloquent cortex. ECoG signals have high spatial and temporal resolution and thus can be valuable for neuroscientific research. The value of these data is highest when they can be related to the cortical anatomy. Existing methods that establish this relationship rely either on post-implantation imaging using computed tomography (CT), magnetic resonance imaging (MRI) or X-Rays, or on intra-operative photographs. For research purposes, it is desirable to localize ECoG electrodes on the brain anatomy even when post-operative imaging is not available or when intra-operative photographs do not readily identify anatomical landmarks. Methods We developed a method to co-register ECoG electrodes to the underlying cortical anatomy using only a pre-operative MRI, a clinical neuronavigation device (such as BrainLab VectorVision), and fiducial markers. To validate our technique, we compared our results to data collected from six subjects who also had post-grid implantation imaging available. We compared the electrode coordinates obtained by our fiducial-based method to those obtained using existing methods, which are based on co-registering pre- and post-grid implantation images. Results Our fiducial-based method agreed with the MRI–CT method to within an average of 8.24 mm (mean, median = 7.10 mm) across 6 subjects in 3 dimensions. It showed an average discrepancy of 2.7 mm when compared to the results of the intra-operative photograph method in a 2D coordinate system. As this method does not require post-operative imaging such as CTs, our technique should prove useful for research in intra-operative single-stage surgery scenarios. To demonstrate the use of our method, we applied our method during real-time mapping of eloquent cortex during a single-stage surgery. The results demonstrated that our method can be applied intra-operatively in the absence of post-operative imaging to acquire ECoG signals that can be valuable for neuroscientific investigations. PMID:25379417
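
    The geometric heart of such fiducial-based co-registration is a least-squares rigid alignment between the fiducial coordinates measured in the two spaces. A minimal sketch using the Kabsch algorithm follows; the point sets are placeholders, and the authors' full pipeline (neuronavigation export, MRI surface projection) is not reproduced here.

```python
# Rigid (rotation + translation) least-squares alignment of fiducials.
import numpy as np

def rigid_align(P, Q):
    """Kabsch: find R, t minimizing ||P @ R.T + t - Q||."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(0) - P.mean(0) @ R.T
    return R, t

P = np.random.rand(6, 3) * 100.0                      # fiducials, device space (mm)
t_true = np.array([5.0, -3.0, 10.0])
Q = P + t_true + np.random.randn(6, 3)                # same fiducials, MRI space (noisy)
R, t = rigid_align(P, Q)
print(np.linalg.norm(P @ R.T + t - Q, axis=1))        # per-fiducial error (mm)
```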

  12. Comparison between Two Linear Supervised Learning Machines' Methods with Principle Component Based Methods for the Spectrofluorimetric Determination of Agomelatine and Its Degradants.

    PubMed

    Elkhoudary, Mahmoud M; Naguib, Ibrahim A; Abdel Salam, Randa A; Hadad, Ghada M

    2017-05-01

    Four accurate, sensitive and reliable stability-indicating chemometric methods were developed for the quantitative determination of Agomelatine (AGM), whether in pure form or in pharmaceutical formulations. Two supervised learning machine methods, linear artificial neural networks preceded by principal component analysis (PC-linANN) and linear support vector regression (linSVR), were compared with two principal component based methods, principal component regression (PCR) and partial least squares (PLS), for the spectrofluorimetric determination of AGM and its degradants. The results showed the benefits of using linear learning machine methods and the inherent merits of their algorithms in handling overlapped noisy spectral data, especially during the challenging determination of AGM alkaline and acidic degradants (DG1 and DG2). Relative mean squared errors of prediction (RMSEP) for the proposed models in the determination of AGM were 1.68, 1.72, 0.68 and 0.22 for PCR, PLS, linSVR and PC-linANN, respectively. The results showed the superiority of the supervised learning machine methods over the principal component based methods. Besides, the results suggested that linANN is the method of choice for determination of components present in low amounts with similar overlapped spectra and a narrow linearity range. Comparison between the proposed chemometric models and a reported HPLC method revealed the comparable performance and quantification power of the proposed models.
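
    For orientation, the contrast between a principal-component-based calibration and a latent-variable method can be sketched in a few lines with scikit-learn; the spectra and concentrations below are random placeholders, not the AGM data, and the error is computed on the training set only for brevity.

```python
# PCR vs. PLS calibration sketch on synthetic "spectra".
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.random((40, 200))           # 40 samples x 200 wavelengths (placeholder)
y = rng.random(40)                  # analyte concentration (placeholder)

pcr = make_pipeline(PCA(n_components=5), LinearRegression()).fit(X, y)
pls = PLSRegression(n_components=5).fit(X, y)

for name, model in [("PCR", pcr), ("PLS", pls)]:
    rmse = np.sqrt(np.mean((model.predict(X).ravel() - y) ** 2))
    print(f"{name} training RMSE: {rmse:.4f}")
```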

  13. Design rainfall depth estimation through two regional frequency analysis methods in Hanjiang River Basin, China

    NASA Astrophysics Data System (ADS)

    Xu, Yue-Ping; Yu, Chaofeng; Zhang, Xujie; Zhang, Qingqing; Xu, Xiao

    2012-02-01

    Hydrological predictions in ungauged basins are of significant importance for water resources management. In hydrological frequency analysis, regional methods are regarded as useful tools for estimating design rainfall/flood in areas with little data available. The purpose of this paper is to investigate the performance of two regional methods, namely Hosking's approach and the cokriging approach, in hydrological frequency analysis. These two methods are employed to estimate 24-h design rainfall depths in the Hanjiang River Basin, one of the largest tributaries of the Yangtze River, China. Validation is made by comparing the results to those calculated from the provincial handbook approach, which uses hundreds of rainfall gauge stations. Also for validation purposes, five hypothetically ungauged sites from the middle basin are chosen. The final results show that, compared to the provincial handbook approach, Hosking's approach often overestimated the 24-h design rainfall depths while the cokriging approach most of the time underestimated them. Overall, Hosking's approach produced more accurate results than the cokriging approach.
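
    Hosking's approach is built on regional L-moments; the sample L-moments of a site's annual maxima are the basic input. A minimal sketch of their computation via probability-weighted moments is given below, with an invented annual-maximum series standing in for the basin's gauge records.

```python
# Sample L-moments (mean, L-CV, L-skewness) from an annual-maximum series.
import numpy as np

def l_moments(x):
    x = np.sort(np.asarray(x, dtype=float))
    n, i = x.size, np.arange(1, x.size + 1)
    # probability-weighted moments b0, b1, b2
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2 / l1, l3 / l2          # mean, L-CV, L-skewness

annual_max_24h = [78.0, 95.0, 61.0, 120.0, 88.0, 102.0, 73.0, 110.0]  # mm (invented)
print(l_moments(annual_max_24h))
```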

  14. Electromagnetic Vortex-Based Radar Imaging Using a Single Receiving Antenna: Theory and Experimental Results

    PubMed Central

    Yuan, Tiezhu; Wang, Hongqiang; Cheng, Yongqiang; Qin, Yuliang

    2017-01-01

    Radar imaging based on electromagnetic vortex can achieve azimuth resolution without relative motion. The present paper investigates this imaging technique with the use of a single receiving antenna through theoretical analysis and experimental results. In contrast to the case of multiple receiving antennas, the echoes from a single receiver cannot be used directly for image reconstruction with the Fourier method. The reason is revealed by using the point spread function. An additional phase is compensated for each mode before the imaging process, based on the array parameters and the elevation of the targets. A proof-of-concept imaging system based on a circular phased array is created, and imaging experiments on corner-reflector targets are performed in an anechoic chamber. The azimuthal image is reconstructed by the use of Fourier transform and spectral estimation methods. The azimuth resolution of the two methods is analyzed and compared through experimental data. The experimental results verify the principle of azimuth resolution and the proposed phase compensation method. PMID:28335487

  15. A super-resolution ultrasound method for brain vascular mapping

    PubMed Central

    O'Reilly, Meaghan A.; Hynynen, Kullervo

    2013-01-01

    Purpose: High-resolution vascular imaging has not been achieved in the brain due to limitations of current clinical imaging modalities. The authors present a method for transcranial ultrasound imaging of single micrometer-size bubbles within a tube phantom. Methods: Emissions from single bubbles within a tube phantom were mapped through an ex vivo human skull using a sparse hemispherical receiver array and a passive beamforming algorithm. Noninvasive phase and amplitude correction techniques were applied to compensate for the aberrating effects of the skull bone. The positions of the individual bubbles were estimated beyond the diffraction limit of ultrasound to produce a super-resolution image of the tube phantom, which was compared with microcomputed tomography (micro-CT). Results: The resulting super-resolution ultrasound image is comparable to results obtained via the micro-CT for small tissue specimen imaging. Conclusions: This method provides superior resolution to deep-tissue contrast ultrasound and has the potential to be extended to provide complete vascular network imaging in the brain. PMID:24320408

  16. New method of scoliosis assessment: preliminary results using computerized photogrammetry.

    PubMed

    Aroeira, Rozilene Maria Cota; Leal, Jefferson Soares; de Melo Pertence, Antônio Eustáquio

    2011-09-01

    A new method for nonradiographic evaluation of scoliosis was independently compared with the Cobb radiographic method for the quantification of scoliotic curvature. The aims were to develop a protocol for computerized photogrammetry, as a nonradiographic method, for the quantification of scoliosis, and to mathematically relate this proposed method to the Cobb radiographic method. Repeated exposure of children to radiation can be harmful to their health. Nevertheless, no nonradiographic method proposed to date has gained popularity as a routine evaluation method, mainly due to low correspondence with the Cobb radiographic method. Patients undergoing standing posteroanterior full-length spine radiographs, who were willing to participate in this study, were submitted to dorsal digital photography in the orthostatic position with special surface markers over the spinous processes, specifically of the vertebrae C7 to L5. The radiographic and photographic images were sent separately for independent analysis to two examiners, trained in quantification of scoliosis for the types of images received. The scoliosis curvature angles obtained through computerized photogrammetry (the new method) were compared to those obtained through the Cobb radiographic method. Sixteen individuals were evaluated (14 female and 2 male). All presented idiopathic scoliosis and had a mean age of 21.4 ± 6.1 years, weight of 52.9 ± 5.8 kg, and height of 1.63 ± 0.05 m, with a body mass index of 19.8 ± 0.2. There was no statistically significant difference between the scoliosis angle measurements obtained in the comparative analysis of the two methods, and a mathematical relationship was formulated between them. The preliminary results demonstrate equivalence between the two methods. More studies are needed to firmly establish the potential of this new method as a coadjuvant tool in the routine follow-up of scoliosis treatment.

  17. Comparison of the Cellient(™) automated cell block system and agar cell block method.

    PubMed

    Kruger, A M; Stevens, M W; Kerley, K J; Carter, C D

    2014-12-01

    To compare the Cellient(TM) automated cell block system with the agar cell block method in terms of quantity and quality of diagnostic material and morphological, histochemical and immunocytochemical features. Cell blocks were prepared from 100 effusion samples using the agar method and Cellient system, and routinely sectioned and stained for haematoxylin and eosin and periodic acid-Schiff with diastase (PASD). A preliminary immunocytochemical study was performed on selected cases (27/100 cases). Sections were evaluated using a three-point grading system to compare a set of morphological parameters. Statistical analysis was performed using Fisher's exact test. Parameters assessing cellularity, presence of single cells and definition of nuclear membrane, nucleoli, chromatin and cytoplasm showed a statistically significant improvement on Cellient cell blocks compared with agar cell blocks (P < 0.05). No significant difference was seen for definition of cell groups, PASD staining or the intensity or clarity of immunocytochemical staining. A discrepant immunocytochemistry (ICC) result was seen in 21% (13/63) of immunostains. The Cellient technique is comparable with the agar method, with statistically significant results achieved for important morphological features. It demonstrates potential as an alternative cell block preparation method which is relevant for the rapid processing of fine needle aspiration samples, malignant effusions and low-cellularity specimens, where optimal cell morphology and architecture are essential. Further investigation is required to optimize immunocytochemical staining using the Cellient method. © 2014 John Wiley & Sons Ltd.
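
    For reference, the Fisher's exact comparison applied to the graded parameters takes one line with SciPy; the 2x2 counts below are invented, not the study's data.

```python
# Fisher's exact test on a 2x2 table of graded outcomes (invented counts).
from scipy.stats import fisher_exact

#            adequate  inadequate
table = [[68, 32],     # Cellient cell blocks
         [52, 48]]     # agar cell blocks
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```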

  18. spa Typing and Multilocus Sequence Typing Show Comparable Performance in a Macroepidemiologic Study of Staphylococcus aureus in the United States

    PubMed Central

    O'Hara, F. Patrick; Suaya, Jose A.; Ray, G. Thomas; Baxter, Roger; Brown, Megan L.; Mera, Robertino M.; Close, Nicole M.; Thomas, Elizabeth

    2016-01-01

    A number of molecular typing methods have been developed for characterization of Staphylococcus aureus isolates. The utility of these systems depends on the nature of the investigation for which they are used. We compared two commonly used methods of molecular typing, multilocus sequence typing (MLST) (and its clustering algorithm, Based Upon Related Sequence Type [BURST]) with the staphylococcal protein A (spa) typing (and its clustering algorithm, Based Upon Repeat Pattern [BURP]), to assess the utility of these methods for macroepidemiology and evolutionary studies of S. aureus in the United States. We typed a total of 366 clinical isolates of S. aureus by these methods and evaluated indices of diversity and concordance values. Our results show that, when combined with the BURP clustering algorithm to delineate clonal lineages, spa typing produces results that are highly comparable with those produced by MLST/BURST. Therefore, spa typing is appropriate for use in macroepidemiology and evolutionary studies and, given its lower implementation cost, this method appears to be more efficient. The findings are robust and are consistent across different settings, patient ages, and specimen sources. Our results also support a model in which the methicillin-resistant S. aureus (MRSA) population in the United States comprises two major lineages (USA300 and USA100), which each consist of closely related variants. PMID:26669861

  19. spa Typing and Multilocus Sequence Typing Show Comparable Performance in a Macroepidemiologic Study of Staphylococcus aureus in the United States.

    PubMed

    O'Hara, F Patrick; Suaya, Jose A; Ray, G Thomas; Baxter, Roger; Brown, Megan L; Mera, Robertino M; Close, Nicole M; Thomas, Elizabeth; Amrine-Madsen, Heather

    2016-01-01

    A number of molecular typing methods have been developed for characterization of Staphylococcus aureus isolates. The utility of these systems depends on the nature of the investigation for which they are used. We compared two commonly used methods of molecular typing, multilocus sequence typing (MLST) (and its clustering algorithm, Based Upon Related Sequence Type [BURST]) with the staphylococcal protein A (spa) typing (and its clustering algorithm, Based Upon Repeat Pattern [BURP]), to assess the utility of these methods for macroepidemiology and evolutionary studies of S. aureus in the United States. We typed a total of 366 clinical isolates of S. aureus by these methods and evaluated indices of diversity and concordance values. Our results show that, when combined with the BURP clustering algorithm to delineate clonal lineages, spa typing produces results that are highly comparable with those produced by MLST/BURST. Therefore, spa typing is appropriate for use in macroepidemiology and evolutionary studies and, given its lower implementation cost, this method appears to be more efficient. The findings are robust and are consistent across different settings, patient ages, and specimen sources. Our results also support a model in which the methicillin-resistant S. aureus (MRSA) population in the United States comprises two major lineages (USA300 and USA100), which each consist of closely related variants.
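
    The "indices of diversity" used to compare typing methods are typically Simpson-type indices; a minimal sketch of Simpson's index of diversity is shown below with invented spa-type counts, not the study's 366 isolates.

```python
# Simpson's index of diversity for a set of typing results (invented data).
from collections import Counter

def simpsons_diversity(type_labels):
    counts = Counter(type_labels).values()
    n = sum(counts)
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

spa_types = ["t008"] * 120 + ["t002"] * 90 + ["t064"] * 40 + ["t037"] * 16
print(f"Simpson's diversity: {simpsons_diversity(spa_types):.3f}")
```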

  20. Fluoxetine Dose and Administration Method Differentially Affect Hippocampal Plasticity in Adult Female Rats

    PubMed Central

    Pawluski, Jodi L.; van Donkelaar, Eva; Abrams, Zipporah; Steinbusch, Harry W. M.; Charlier, Thierry D.

    2014-01-01

    Selective serotonin reuptake inhibitor medications are one of the most common treatments for mood disorders. In humans, these medications are taken orally, usually once per day. Unfortunately, administration of antidepressant medications in rodent models is often through injection, oral gavage, or minipump implant, all relatively stressful procedures. The aim of the present study was to investigate how administration of the commonly used SSRI, fluoxetine, via a wafer cookie, compares to fluoxetine administration using an osmotic minipump, with regards to serum drug levels and hippocampal plasticity. For this experiment, adult female Sprague-Dawley rats were divided over the two administration methods: (1) cookie and (2) osmotic minipump and three fluoxetine treatment doses: 0, 5, or 10 mg/kg/day. Results show that a fluoxetine dose of 5 mg/kg/day, but not 10 mg/kg/day, results in comparable serum levels of fluoxetine and its active metabolite norfluoxetine between the two administration methods. Furthermore, minipump administration of fluoxetine resulted in higher levels of cell proliferation in the granule cell layer (GCL) at a 5 mg dose compared to a 10 mg dose. Synaptophysin expression in the GCL, but not CA3, was significantly lower after fluoxetine treatment, regardless of administration method. These data suggest that the administration method and dose of fluoxetine can differentially affect hippocampal plasticity in the adult female rat. PMID:24757568

  1. A comparative study of amplitude calibrations for the East Asia VLBI Network: A priori and template spectrum methods

    NASA Astrophysics Data System (ADS)

    Cho, Ilje; Jung, Taehyun; Zhao, Guang-Yao; Akiyama, Kazunori; Sawada-Satoh, Satoko; Kino, Motoki; Byun, Do-Young; Sohn, Bong Won; Shibata, Katsunori M.; Hirota, Tomoya; Niinuma, Kotaro; Yonekura, Yoshinori; Fujisawa, Kenta; Oyama, Tomoaki

    2017-12-01

    We present the results of a comparative study of amplitude calibrations for the East Asia VLBI Network (EAVN) at 22 and 43 GHz using two different methods, an "a priori" method and a "template spectrum" method, with particular attention to lower-declination sources. Using data sets from early EAVN observations, we investigated the elevation dependence of the gain values at seven stations of the KaVA (KVN and VERA Array) and three additional telescopes in Japan (Takahagi 32 m, Yamaguchi 32 m, and Nobeyama 45 m). By comparing the independently obtained gain values based on these two methods, we found that the gain values from each method were consistent within 10% at elevations higher than 10°. We also found that the total flux densities of two images produced from the different amplitude calibrations were in agreement within 10% at both 22 and 43 GHz. By using the template spectrum method, furthermore, additional radio telescopes can participate in KaVA (i.e., EAVN), giving a notable sensitivity increase. Our results therefore constrain the conditions needed to measure VLBI amplitudes reliably with EAVN, and indicate the potential for expanding the network with additional telescopes.

  2. [Study on trace elements of lake sediments by ICP-AES and XRF core scanning].

    PubMed

    Cheng, Ai-Ying; Yu, Jun-Qing; Gao, Chun-Liang; Zhang, Li-Sha; He, Xian-Hu

    2013-07-01

    This is the first study of the sediment of Toson Lake in the Qaidam Basin. Trace elements including Cd, Cr, Cu, Zn and Pb in the lake sediment were measured by the ICP-AES method. Different digestion methods were studied and optimized, and an optimal pretreatment system for the Toson Lake sediment was determined, namely an HCl-HNO3-HF-HClO4-H2O2 system in the proportions 5:5:5:1:1. At the same time, the data were compared with measurements by XRF core scanning, the use of a moisture-content correction method was analyzed, and the influence of moisture content on the scanning method was discussed. The results showed that, compared to the background values, the contents of Cd and Zn were slightly elevated, while the contents of Cr, Cu and Pb were within the background limits. XRF core scanning is controlled to some extent by the sediment elements as well as by the water content of the sediment. The results of the two methods showed a significant positive correlation, with correlation coefficients of 0.673-0.925, indicating that the two methods are highly comparable.

  3. A new optimization method using a compressed sensing inspired solver for real-time LDR-brachytherapy treatment planning

    NASA Astrophysics Data System (ADS)

    Guthier, C.; Aschenbrenner, K. P.; Buergy, D.; Ehmann, M.; Wenz, F.; Hesser, J. W.

    2015-03-01

    This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced.

  4. A new optimization method using a compressed sensing inspired solver for real-time LDR-brachytherapy treatment planning.

    PubMed

    Guthier, C; Aschenbrenner, K P; Buergy, D; Ehmann, M; Wenz, F; Hesser, J W

    2015-03-21

    This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced.
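
    To make the "matching pursuit type solver" concrete, here is a minimal orthogonal matching pursuit sketch on a toy linear dose model; the dose matrix and prescription are random stand-ins, not TG-43 kernels, and the paper's specific solver variant is not reproduced.

```python
# Orthogonal matching pursuit: find a sparse x with A @ x ≈ b.
import numpy as np

def omp(A, b, n_nonzero):
    x = np.zeros(A.shape[1])
    residual, support = b.astype(float).copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(A.T @ residual))))  # best new atom
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef                     # re-fit on support
    x[np.array(support)] = coef
    return x

rng = np.random.default_rng(2)
A = rng.random((200, 500))          # dose to each voxel from each candidate seed
b = rng.random(200)                 # prescribed dose per voxel (toy)
plan = omp(A, b, n_nonzero=15)      # a plan using only 15 active seeds
print("active seeds:", np.flatnonzero(plan).size)
```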

  5. Multi-objective optimization for an automated and simultaneous phase and baseline correction of NMR spectral data

    NASA Astrophysics Data System (ADS)

    Sawall, Mathias; von Harbou, Erik; Moog, Annekathrin; Behrens, Richard; Schröder, Henning; Simoneau, Joël; Steimers, Ellen; Neymeyr, Klaus

    2018-04-01

    Spectral data preprocessing is an integral and sometimes inevitable part of chemometric analyses. For Nuclear Magnetic Resonance (NMR) spectra a possible first preprocessing step is a phase correction which is applied to the Fourier transformed free induction decay (FID) signal. This preprocessing step can be followed by a separate baseline correction step. Especially if series of high-resolution spectra are considered, then automated and computationally fast preprocessing routines are desirable. A new method is suggested that applies the phase and the baseline corrections simultaneously in an automated form without manual input, which distinguishes this work from other approaches. The underlying multi-objective optimization or Pareto optimization provides improved results compared to consecutively applied correction steps. The optimization process uses an objective function which applies strong penalty constraints and weaker regularization conditions. The new method includes an approach for the detection of zero baseline regions. The baseline correction uses a modified Whittaker smoother. The functionality of the new method is demonstrated for experimental NMR spectra. The results are verified against gravimetric data. The method is compared to alternative preprocessing tools. Additionally, the simultaneous correction method is compared to a consecutive application of the two correction steps.
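
    The modified Whittaker smoother mentioned above builds on the standard Whittaker smoother, which solves a penalized least-squares problem. A minimal sketch of the unmodified smoother follows; the penalty weight and difference order are illustrative choices.

```python
# Whittaker smoother: minimize ||y - z||^2 + lam * ||D^d z||^2 over z.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def whittaker(y, lam=1e5, d=2):
    n = y.size
    D = sparse.eye(n, format="csc")
    for _ in range(d):
        D = D[1:] - D[:-1]                  # d-th order difference operator
    A = sparse.eye(n, format="csc") + lam * (D.T @ D)
    return spsolve(A.tocsc(), y)

t = np.linspace(0.0, 10.0, 500)
noisy = np.sinc(t - 5.0) + 0.05 * np.random.randn(t.size)
smooth = whittaker(noisy, lam=1e4)          # smooth estimate underlying y
```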

  6. An improved parallel fuzzy connected image segmentation method based on CUDA.

    PubMed

    Wang, Liansheng; Li, Dong; Huang, Shaohui

    2016-05-12

    Fuzzy connectedness (FC) is an effective method for extracting fuzzy objects from medical images. However, when FC is applied to large medical image datasets, its running time becomes very expensive. Therefore, a parallel CUDA version of FC (CUDA-kFOE) was proposed by Ying et al. to accelerate the original FC. Unfortunately, CUDA-kFOE does not consider the edges between GPU blocks, which causes miscalculation of edge points. In this paper, an improved algorithm is proposed that adds a correction step for the edge points, which greatly enhances the calculation accuracy. The improved method proceeds iteratively. In the first iteration, the affinity computation strategy is changed and a look-up table is employed to reduce memory use. In the second iteration, voxels mis-computed because of asynchronous execution are updated again. Three CT sequences of hepatic vasculature of different sizes were used in the experiments, each with three different seeds. An NVIDIA Tesla C2075 was used to evaluate the improved method on these three data sets. Experimental results show that the improved algorithm achieves faster segmentation than the CPU version and higher accuracy than CUDA-kFOE. The calculation results were consistent with the CPU version, which demonstrates that the method corrects the edge-point calculation error of the original CUDA-kFOE. The proposed method has a comparable time cost and fewer errors compared with the original CUDA-kFOE, as demonstrated in the experimental results. In future work, we will focus on automatic acquisition and automatic processing.

  7. Paired Pulse Basis Functions for the Method of Moments EFIE Solution of Electromagnetic Problems Involving Arbitrarily-shaped, Three-dimensional Dielectric Scatterers

    NASA Technical Reports Server (NTRS)

    MacKenzie, Anne I.; Rao, Sadasiva M.; Baginski, Michael E.

    2007-01-01

    A pair of basis functions is presented for the surface integral, method of moments solution of scattering by arbitrarily-shaped, three-dimensional dielectric bodies. Equivalent surface currents are represented by orthogonal unit pulse vectors in conjunction with triangular patch modeling. The electric field integral equation is employed with closed geometries for dielectric bodies; the method may also be applied to conductors. Radar cross section results are shown for dielectric bodies having canonical spherical, cylindrical, and cubic shapes. Pulse basis function results are compared to results obtained by other methods.

  8. A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography.

    PubMed

    Prakash, Jaya; Yalavarthy, Phaneendra K

    2013-03-01

    The objective was to develop a computationally efficient automated method for the optimal choice of regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. It is deployed here via an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality and are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method overcomes the inherent limitation of the computationally expensive MRM-based automated selection of the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
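
    The pairing of LSQR with a simplex search can be sketched directly with SciPy: `lsqr`'s `damp` argument plays the role of the regularization parameter, and Nelder-Mead searches over its logarithm. The Jacobian, data, and the scalar selection criterion below are illustrative stand-ins, not the paper's formulation.

```python
# LSQR reconstruction with a simplex (Nelder-Mead) search over damping.
import numpy as np
from scipy.optimize import minimize
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(3)
J = rng.standard_normal((400, 900))          # stand-in sensitivity matrix
x_true = np.zeros(900); x_true[100:110] = 1.0
b = J @ x_true + 0.01 * rng.standard_normal(400)

def criterion(log_lam):
    x = lsqr(J, b, damp=10.0 ** log_lam[0])[0]
    # toy selection criterion balancing data fit against solution size
    return np.linalg.norm(J @ x - b) * np.linalg.norm(x)

best = minimize(criterion, x0=[-2.0], method="Nelder-Mead")
print("selected damping:", 10.0 ** best.x[0])
```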

  9. A Spiking Neural Network Methodology and System for Learning and Comparative Analysis of EEG Data From Healthy Versus Addiction Treated Versus Addiction Not Treated Subjects.

    PubMed

    Doborjeh, Maryam Gholami; Wang, Grace Y; Kasabov, Nikola K; Kydd, Robert; Russell, Bruce

    2016-09-01

    This paper introduces a method utilizing spiking neural networks (SNN) for learning, classification, and comparative analysis of brain data. As a case study, the method was applied to electroencephalography (EEG) data collected during a GO/NOGO cognitive task performed by untreated opiate addicts, those undergoing methadone maintenance treatment (MMT) for opiate dependence, and a healthy control group. The method is based on an SNN architecture called NeuCube, trained on spatiotemporal EEG data. NeuCube was used to classify EEG data across subject groups and across GO versus NOGO trials, but also facilitated a deeper comparative analysis of the dynamic brain processes. This analysis results in a better understanding of human brain functioning across subject groups when performing a cognitive task. In terms of EEG data classification, the NeuCube model obtained better results (maximum accuracy: 90.91%) than traditional statistical and artificial intelligence methods (maximum accuracy: 50.55%). More importantly, new information about the effects of MMT on cognitive brain functions is revealed through the analysis of the SNN model connectivity and its dynamics. This paper presents a new method for EEG data modeling and reveals new knowledge on brain functions associated with mental activity, which differs from the brain activity observed in a resting state of the same subjects.

  10. Degradation of learned skills. Static practice effectiveness for visual approach and landing skill retention

    NASA Technical Reports Server (NTRS)

    Sitterley, T. E.

    1974-01-01

    The effectiveness of an improved static retraining method was evaluated for a simulated space vehicle approach and landing under instrument and visual flight conditions. Experienced pilots were trained and then tested after 4 months without flying to compare their performance using the improved method with three methods previously evaluated. Use of the improved static retraining method resulted in no practical or significant skill degradation and was found to be even more effective than methods using a dynamic presentation of visual cues. The results suggested that properly structured open loop methods of flight control task retraining are feasible.

  11. Comparison between laser interferometric and calibrated artifacts for the geometric test of machine tools

    NASA Astrophysics Data System (ADS)

    Sousa, Andre R.; Schneider, Carlos A.

    2001-09-01

    A touch probe is used on a 3-axis vertical machining center to check against a hole plate calibrated on a coordinate measuring machine (CMM). By comparing the results obtained from the machine tool and the CMM, the main machine tool error components are measured, attesting the machine accuracy. The error values can also be used to update the error compensation table in the CNC, enhancing the machine accuracy. The method is easy to use, has a lower cost than classical test techniques, and preliminary results have shown that its uncertainty is comparable to well-established techniques. In this paper the method is compared with the laser interferometric system regarding reliability, cost and time efficiency.

  12. Seismic data fusion anomaly detection

    NASA Astrophysics Data System (ADS)

    Harrity, Kyle; Blasch, Erik; Alford, Mark; Ezekiel, Soundararajan; Ferris, David

    2014-06-01

    Detecting anomalies in non-stationary signals has valuable applications in many fields including medicine and meteorology. These include uses such as identifying possible heart conditions from electrocardiography (ECG) signals or predicting earthquakes via seismographic data. Given the many choices of anomaly detection algorithms, it is important to compare possible methods. In this paper, we examine and compare two approaches to anomaly detection and see how data fusion methods may improve performance. The first approach involves using an artificial neural network (ANN) to detect anomalies in a wavelet de-noised signal. The other method uses a perspective neural network (PNN) to analyze an arbitrary number of "perspectives" or transformations of the observed signal for anomalies. Possible perspectives may include wavelet de-noising, Fourier transform, peak-filtering, etc. In order to evaluate these techniques via signal fusion metrics, we apply signal preprocessing techniques such as de-noising methods to the original signal and then use a neural network to find anomalies in the generated signal. From this secondary result it is possible to use data fusion techniques that can be evaluated via existing data fusion metrics for single and multiple perspectives. The result shows which anomaly detection method, according to the metrics, is better suited overall for anomaly detection applications. The method used in this study could be applied to compare other signal processing algorithms.
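
    The wavelet de-noising "perspective" mentioned above can be sketched with PyWavelets using the common universal-threshold rule; the wavelet, decomposition level, and test signal are all assumptions made for illustration.

```python
# Wavelet de-noising with soft universal thresholding (PyWavelets).
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # MAD noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(signal.size))      # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: signal.size]

t = np.linspace(0.0, 1.0, 1024)
noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
denoised = wavelet_denoise(noisy)     # input to the downstream ANN detector
```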

  13. A Comparative Survey of Methods for Remote Heart Rate Detection From Frontal Face Videos

    PubMed Central

    Wang, Chen; Pun, Thierry; Chanel, Guillaume

    2018-01-01

    Remotely measuring physiological activity can provide substantial benefits for both medical and affective computing applications. Recent research has proposed different methodologies for the unobtrusive detection of heart rate (HR) using human face recordings. These methods are based on subtle color changes or motions of the face due to cardiovascular activity, which are invisible to human eyes but can be captured by digital cameras. Several approaches have been proposed, such as signal processing and machine learning. However, these methods have been compared on different datasets, and there is consequently no consensus on method performance. In this article, we describe and evaluate several methods defined in the literature, from 2008 until the present day, for the remote detection of HR using human face recordings. The general HR processing pipeline is divided into three stages: face video processing, face blood volume pulse (BVP) signal extraction, and HR computation. Approaches presented in the paper are classified and grouped according to each stage. At each stage, algorithms are analyzed and compared based on their performance using the public database MAHNOB-HCI. The results reported in this article are limited to the MAHNOB-HCI dataset. They show that the extracted face skin area contains more BVP information, and that blind source separation and peak detection methods are more robust to head motions when estimating HR. PMID:29765940
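
    The simplest pipeline variant covered by such surveys can be sketched end to end: average the green channel over the face skin region, band-pass to the plausible cardiac band, and read the spectral peak. The frame rate, filter band, and synthetic trace below are assumptions, not the surveyed methods themselves.

```python
# Toy remote-HR pipeline: green-channel trace -> band-pass -> FFT peak.
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_hr_bpm(green_trace, fps):
    """green_trace: mean green value over the face skin ROI, one per frame."""
    b, a = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype="band")
    bvp = filtfilt(b, a, green_trace - np.mean(green_trace))
    freqs = np.fft.rfftfreq(bvp.size, d=1.0 / fps)
    return 60.0 * freqs[np.argmax(np.abs(np.fft.rfft(bvp)))]

fps = 30.0
t = np.arange(0.0, 20.0, 1.0 / fps)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.2 * np.random.randn(t.size)
print(f"estimated HR: {estimate_hr_bpm(trace, fps):.0f} bpm")   # ~72 bpm
```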

  14. Comparative evaluation of endodontic pressure syringe, insulin syringe, jiffy tube, and local anesthetic syringe in obturation of primary teeth: An in vitro study.

    PubMed

    Hiremath, Mallayya C; Srivastava, Pooja

    2016-01-01

    The purpose of this in vitro study was to compare four methods of root canal obturation in primary teeth using conventional radiography. A total of 96 root canals of primary molars were prepared and obturated with zinc oxide eugenol. The obturation methods compared were the endodontic pressure syringe, insulin syringe, jiffy tube, and local anesthetic syringe. The root canal obturations were evaluated by conventional radiography for the length of obturation and the presence of voids. The obtained data were analyzed using the Chi-square test. The results showed significant differences between the four groups for the length of obturation (P < 0.05). The endodontic pressure syringe showed the best results (98.5% optimal fillings) and the jiffy tube the poorest (37.5% optimal fillings) for the length of obturation. The insulin syringe (79.2% optimal fillings) and local anesthetic syringe (66.7% optimal fillings) showed acceptable results for the length of root canal obturation. However, minor voids were present with all four techniques. The endodontic pressure syringe produced the best results in terms of the length of obturation and control of paste extrusion from the apical foramen. However, the insulin syringe and local anesthetic syringe can be used as effective alternative methods.

  15. Quantitative determination of ambroxol in tablets by derivative UV spectrophotometric method and HPLC.

    PubMed

    Dinçer, Zafer; Basan, Hasan; Göger, Nilgün Günden

    2003-04-01

    A derivative UV spectrophotometric method for the determination of ambroxol in tablets was developed. Determination of ambroxol in tablets was conducted by using a first-order derivative UV spectrophotometric method at 255 nm (n = 5). Standards for the calibration graph, ranging from 5.0 to 35.0 microg/ml, were prepared from stock solution. The proposed method was accurate, with a recovery of 98.6+/-0.4%, and precise, with a coefficient of variation (CV) of 1.22. These results were compared with those obtained by the reference methods, a zero-order UV spectrophotometric method and a reversed-phase high-performance liquid chromatography (HPLC) method. A reversed-phase C(18) column with an aqueous phosphate (0.01 M)-acetonitrile-glacial acetic acid (59:40:1, v/v/v) (pH 3.12) mobile phase was used, and the UV detector was set to 252 nm. Calibration solutions used in HPLC ranged from 5.0 to 20.0 microg/ml. Results obtained by the derivative UV spectrophotometric method were comparable to those obtained by the reference methods as far as the ANOVA test, F(calculated) = 0.762 and F(theoretical) = 3.89, was concerned. Copyright 2003 Elsevier Science B.V.
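
    First-order derivative spectra of the kind used above are commonly computed with a Savitzky-Golay smoothing-differentiation filter; the sketch below applies it to a synthetic absorbance band, not the ambroxol data, and the window and polynomial order are assumed.

```python
# First-derivative UV spectrum via Savitzky-Golay differentiation.
import numpy as np
from scipy.signal import savgol_filter

wavelength = np.arange(220.0, 320.0, 0.5)                     # nm
absorbance = np.exp(-(((wavelength - 248.0) / 10.0) ** 2))    # synthetic band

dA = savgol_filter(absorbance, window_length=11, polyorder=3,
                   deriv=1, delta=0.5)                        # dA/d(lambda)
reading = dA[np.argmin(np.abs(wavelength - 255.0))]           # signal at 255 nm
print(f"D1 amplitude at 255 nm: {reading:.4f}")
```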

  16. Comparison of Dorris-Gray and Schultz methods for the calculation of surface dispersive free energy by inverse gas chromatography.

    PubMed

    Shi, Baoli; Wang, Yue; Jia, Lina

    2011-02-11

    Inverse gas chromatography (IGC) is an important technique for the characterization of surface properties of solid materials. In the standard method of surface characterization, the surface dispersive free energy of the solid stationary phase is first determined by using a series of linear alkanes as molecular probes, and the acid-base parameters are then calculated from the dispersive parameters. However, two different methods are generally used for the calculation of the surface dispersive free energy: the Dorris-Gray method and the Schultz method. In this paper, the results calculated from the Dorris-Gray method and the Schultz method are compared by computing their ratio from the basic equations and parameters of each. It can be concluded that the dispersive parameters calculated with the Dorris-Gray method will always be larger than those calculated with the Schultz method, and that the ratio grows as the measuring temperature increases. Compared with the parameters in solvent handbooks, it appears that the traditional surface free energy parameters of n-alkanes listed in papers using the Schultz method are not sufficiently accurate, which can be shown with a published IGC experimental result. © 2010 Elsevier B.V. All rights reserved.

  17. The anesthesia and brain monitor (ABM). Concept and performance.

    PubMed

    Kay, B

    1984-01-01

    Three integral components of the ABM, the frontalis electromyogram (EMG), the processed unipolar electroencephalogram (EEG) and the neuromuscular transmission monitor (NMT) were compared with standard research methods, and their clinical utility indicated. The EMG was compared with the method of Dundee et al (2) for measuring the induction dose of thiopentone; the EEG was compared with the SLE Galileo E8-b and the NMT was compared with the Medelec MS6. In each case correlation of results was extremely high, and the ABM offered some advantages over the standard research methods. We conclude that each of the integral units of the ABM is simple to apply and interpret, yet as accurate as standard apparatus used for research. In addition the ABM offers excellent display and recording facilities and alarm systems.

  18. Loop-mediated isothermal PCR (LAMP) for the diagnosis of falciparum malaria.

    PubMed

    Paris, Daniel H; Imwong, Mallika; Faiz, Abul M; Hasan, Mahtabuddin; Yunus, Emran Bin; Silamut, Kamolrat; Lee, Sue J; Day, Nicholas P J; Dondorp, Arjen M

    2007-11-01

    A recently described loop-mediated isothermal polymerase chain reaction (LAMP) for molecular detection of Plasmodium falciparum was compared with microscopy, PfHRP2-based rapid diagnostic test (RDT), and nested polymerase chain reaction (PCR) as the "gold standard" in 115 Bangladeshi in-patients with fever. DNA extraction for LAMP was conducted by conventional methods or simple heating of the sample; test results were either assessed visually or by gel electrophoresis. Conventional DNA extraction followed by gel electrophoresis had the highest agreement with the reference method (81.7%, kappa = 0.64), with a sensitivity (95% CI) of 76.1% (68.3-83.9%), comparable to RDT and microscopy, but a specificity of 89.6% (84.0-95.2%) compared with 100% for RDT and microscopy. DNA extraction by heat treatment deteriorated specificity to unacceptable levels. LAMP enables molecular diagnosis of falciparum malaria in settings with limited technical resources but will need further optimization. The results are in contrast with a higher accuracy reported in an earlier study comparing LAMP with a non-validated PCR method.

  19. A method to measure the ozone penetration factor in residences under infiltration conditions: application in a multifamily apartment unit.

    PubMed

    Zhao, H; Stephens, B

    2016-08-01

    Recent experiments have demonstrated that outdoor ozone reacts with materials inside residential building enclosures, potentially reducing indoor exposures to ozone or altering ozone reaction byproducts. However, test methods to measure ozone penetration factors in residences (P) remain limited. We developed a method to measure ozone penetration factors in residences under infiltration conditions and applied it in an unoccupied apartment unit. Twenty-four repeated measurements were made, and results were explored to (i) evaluate the accuracy and repeatability of the new procedure using multiple solution methods, (ii) compare results from 'interference-free' and conventional UV absorbance ozone monitors, and (iii) compare results against those from a previously published test method requiring artificial depressurization. The mean (±s.d.) estimate of P was 0.54 ± 0.10 across a wide range of conditions using the new method with an interference-free monitor; the conventional monitor was unable to yield meaningful results due to relatively high limits of detection. Estimates of P were not clearly influenced by any indoor or outdoor environmental conditions or changes in indoor decay rate constants. This work represents the first known measurements of ozone penetration factors in a residential building operating under natural infiltration conditions and provides a new method for widespread application in buildings. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
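
    Under the usual steady-state infiltration mass balance, C_in = P * AER * C_out / (AER + k), so P follows from the indoor/outdoor ratio once the air exchange rate AER and the indoor decay rate k are known. The sketch below applies this relation with invented values; the paper's procedure and solution methods are more involved.

```python
# Steady-state estimate of the ozone penetration factor P (invented values).
def penetration_factor(c_in, c_out, aer, k):
    """aer: air exchange rate (1/h); k: indoor ozone decay rate (1/h)."""
    return (c_in / c_out) * (aer + k) / aer

print(penetration_factor(c_in=3.6, c_out=40.0, aer=0.5, k=2.5))   # -> 0.54
```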

  20. Validation of a combi oven cooking method for preparation of chicken breast meat for quality assessment.

    PubMed

    Zhuang, H; Savage, E M

    2008-10-01

    Quality assessment results of cooked meat can be significantly affected by sample preparation with different cooking techniques. A combi oven is a relatively new cooking technique in the U.S. market. However, there was a lack of published data about its effect on quality measurements of chicken meat. Broiler breast fillets deboned at 24-h postmortem were cooked with one of the 3 methods to the core temperature of 80 degrees C. Cooking methods were evaluated based on cooking operation requirements, sensory profiles, Warner-Bratzler (WB) shear and cooking loss. Our results show that the average cooking time for the combi oven was 17 min compared with 31 min for the commercial oven method and 16 min for the hot water method. The combi oven did not result in a significant difference in the WB shear force values, although the cooking loss of the combi oven samples was significantly lower than the commercial oven and hot water samples. Sensory profiles of the combi oven samples did not significantly differ from those of the commercial oven and hot water samples. These results demonstrate that combi oven cooking did not significantly affect sensory profiles and WB shear force measurements of chicken breast muscle compared to the other 2 cooking methods. The combi oven method appears to be an acceptable alternative for preparing chicken breast fillets in a quality assessment.

  1. A Comparative Study on the Architecture Internet of Things and its’ Implementation method

    NASA Astrophysics Data System (ADS)

    Xiao, Zhiliang

    2017-08-01

    With the rapid development of science and technology, the Internet-based Internet of Things (IoT) has emerged and achieved good results. In order to build a complete IoT system and realize practical IoT designs, a comparative study of the indicators that characterize IoT network structures is needed; on that basis, the connection and implementation methods of the IoT can be examined in greater depth, so as to unify the IoT architecture with its implementation methods. This paper analyzes two types of IoT systems, makes a brief comparative study of their important indicators, and then introduces the connection and realization methods of the IoT based on the concept of the Internet of Things and its architecture.

  2. Effective description of a 3D object for photon transportation in Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Suganuma, R.; Ogawa, K.

    2000-06-01

    Photon transport simulation by means of the Monte Carlo method is an indispensable technique for examining scatter and absorption correction methods in SPECT and PET. The authors have developed a method for object description with maximum size regions (maximum rectangular regions: MRRs) to speed up photon transport simulation, and compared the computation time with that for conventional object description methods, a voxel-based (VB) method and an octree method, in simulations of two kinds of phantoms. The simulation results showed that the computation time with the proposed method was about 50% of that with the VB method and about 70% of that with the octree method for a high-resolution MCAT phantom. Here, details of the expansion of the MRR method to three dimensions are given. Moreover, the effectiveness of the proposed method was compared with the VB and octree methods.

  3. Comparative analysis of expert and machine-learning methods for classification of body cavity effusions in companion animals.

    PubMed

    Hotz, Christine S; Templeton, Steven J; Christopher, Mary M

    2005-03-01

    A rule-based expert system using CLIPS programming language was created to classify body cavity effusions as transudates, modified transudates, exudates, chylous, and hemorrhagic effusions. The diagnostic accuracy of the rule-based system was compared with that produced by 2 machine-learning methods: Rosetta, a rough sets algorithm and RIPPER, a rule-induction method. Results of 508 body cavity fluid analyses (canine, feline, equine) obtained from the University of California-Davis Veterinary Medical Teaching Hospital computerized patient database were used to test CLIPS and to test and train RIPPER and Rosetta. The CLIPS system, using 17 rules, achieved an accuracy of 93.5% compared with pathologist consensus diagnoses. Rosetta accurately classified 91% of effusions by using 5,479 rules. RIPPER achieved the greatest accuracy (95.5%) using only 10 rules. When the original rules of the CLIPS application were replaced with those of RIPPER, the accuracy rates were identical. These results suggest that both rule-based expert systems and machine-learning methods hold promise for the preliminary classification of body fluids in the clinical laboratory.

  4. Comparison of Submental Blood Collection with the Retroorbital and Submandibular Methods in Mice (Mus musculus)

    PubMed Central

    Regan, Rainy D; Fenyk-Melody, Judy E; Tran, Sam M; Chen, Guang; Stocking, Kim L

    2016-01-01

    Nonterminal blood sample collection of sufficient volume and quality for research is complicated in mice due to their small size and anatomy. Large (>100 μL) nonterminal volumes of unhemolyzed or unclotted blood currently are typically collected from the retroorbital sinus or submandibular plexus. We developed a third method—submental blood collection—which is similar in execution to the submandibular method but with minor changes in animal restraint and collection location. Compared with other techniques, submental collection is easier to perform due to the direct visibility of the target vessels, which are located in a sparsely furred region. Compared with the submandibular method, the submental method did not differ regarding weight change and clotting score but significantly decreased hemolysis and increased the overall number of high-quality samples. The submental method was performed with smaller lancets for the majority of the bleeds, yet resulted in fewer repeat collection attempts, fewer insufficient samples, and less extraneous blood loss and was qualitatively less traumatic. Compared with the retroorbital technique, the submental method was similar regarding weight change but decreased hemolysis, clotting, and the number of overall high-quality samples; however the retroorbital method resulted in significantly fewer incidents of insufficient sample collection. Extraneous blood loss was roughly equivalent between the submental and retroorbital methods. We conclude that the submental method is an acceptable venipuncture technique for obtaining large, nonterminal volumes of blood from mice. PMID:27657712

  5. Comparative analysis of three different methods for monitoring the use of green bridges by wildlife.

    PubMed

    Gužvica, Goran; Bošnjak, Ivana; Bielen, Ana; Babić, Danijel; Radanović-Gužvica, Biserka; Šver, Lidija

    2014-01-01

    Green bridges are used to mitigate the highly negative impact of roads/highways on wildlife populations, and their effectiveness is evaluated by various monitoring methods. Based on 3-year monitoring of four Croatian green bridges, we compared the effectiveness of three indirect monitoring methods: track-pads, camera traps and an active infrared (IR) trail monitoring system. The ability of the methods to detect different species and to give a good estimate of the number of animal crossings was analyzed. The accuracy of species detection by the track-pad method was influenced by the granulometric composition of the track-pad material, with the best results obtained with a higher percentage of silt and clay. We compared the species composition determined by the track-pad and camera trap methods and found that monitoring by tracks underestimated the ratio of small canids, while camera traps underestimated the ratio of roe deer. Regarding the total number of recorded events, the active IR detectors recorded from 11 to 19 times more events than the camera traps, and approximately 80% of them were not caused by animal crossings. The camera trap method underestimated the real number of total events. Therefore, an algorithm for filtering the IR dataset was developed to approximate the real number of crossings. The presented results are valuable for future monitoring of wildlife crossings in Croatia and elsewhere, since the advantages and disadvantages of the monitoring methods used are shown. In conclusion, different methods should be chosen or combined depending on the aims of the particular monitoring study.

  6. Analysis Method for Laterally Loaded Pile Groups Using an Advanced Modeling of Reinforced Concrete Sections.

    PubMed

    Stacul, Stefano; Squeglia, Nunziante

    2018-02-15

    A Boundary Element Method (BEM) approach was developed for the analysis of pile groups. The proposed method includes: the non-linear behavior of the soil, via a hyperbolic modulus reduction curve; the non-linear response of reinforced concrete pile sections, also taking into account the influence of tension stiffening; the influence of suction, by increasing the stiffness of shallow portions of soil, modeled using the Modified Kovacs model; and the pile group shadowing effect, modeled using an approach similar to that proposed in the Strain Wedge Model for pile group analyses. The proposed BEM method saves computational effort compared with more sophisticated codes such as VERSAT-P3D, PLAXIS 3D and FLAC-3D, and provides reliable results using input data from a standard site investigation. The reliability of this method was verified by comparing computed results with data from full-scale and centrifuge tests on single piles and pile groups. A comparison is presented between measured and computed data for a laterally loaded fixed-head pile group composed of reinforced concrete bored piles. The results of the proposed method are shown to be in good agreement with those obtained in situ.

  7. A Numerical Comparison of Barrier and Modified Barrier Methods for Large-Scale Bound-Constrained Optimization

    NASA Technical Reports Server (NTRS)

    Nash, Stephen G.; Polyak, R.; Sofer, Ariela

    1994-01-01

    When a classical barrier method is applied to the solution of a nonlinear programming problem with inequality constraints, the Hessian matrix of the barrier function becomes increasingly ill-conditioned as the solution is approached. As a result, it may be desirable to consider alternative numerical algorithms. We compare the performance of two methods motivated by barrier functions. The first is a stabilized form of the classical barrier method, where a numerically stable approximation to the Newton direction is used when the barrier parameter is small. The second is a modified barrier method where a barrier function is applied to a shifted form of the problem, and the resulting barrier terms are scaled by estimates of the optimal Lagrange multipliers. The condition number of the Hessian matrix of the resulting modified barrier function remains bounded as the solution to the constrained optimization problem is approached. Both of these techniques can be used in the context of a truncated-Newton method, and hence can be applied to large problems, as well as on parallel computers. In this paper, both techniques are applied to problems with bound constraints and we compare their practical behavior.
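
    The contrast between the two barrier functions can be sketched on a toy bound-constrained problem. The snippet below (our illustration, not the authors' code or test problems) minimizes f(x) = (x + 1)^2 subject to x >= 0, where the constraint is active at the solution, and runs Polyak-style modified-barrier iterations with the barrier parameter held fixed while only the multiplier estimate is updated; the closed-form inner minimizer is specific to this toy f.

    ```python
    import math

    # Toy problem (hypothetical): minimize f(x) = (x + 1)^2 subject to x >= 0.
    # The constrained minimizer is x* = 0 with multiplier lambda* = f'(0) = 2.
    # The modified barrier keeps mu FIXED and updates the multiplier estimate,
    # so the barrier Hessian stays bounded as x -> x*.

    mu = 0.1          # barrier parameter, held fixed throughout
    lam = 1.0         # initial multiplier estimate

    for k in range(8):
        # Minimizer of M(x) = (x+1)^2 - lam*mu*ln(1 + x/mu) solves
        # 2*(x+1)*(x+mu) = lam*mu, a quadratic in x (closed form for this toy f).
        a, b, c = 2.0, 2.0 * (1.0 + mu), mu * (2.0 - lam)
        x = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
        hess = 2.0 + lam * mu / (mu + x) ** 2   # M''(x) stays O(1/mu), bounded
        lam = lam / (1.0 + x / mu)              # Polyak multiplier update
        print(f"iter {k}: x = {x:+.6f}, lambda = {lam:.6f}, M'' = {hess:.3f}")
    # x -> 0 and lambda -> 2 while mu is never driven to zero.
    ```

    Because mu is never driven to zero, the curvature of the modified barrier near the solution stays bounded, which is exactly the conditioning advantage the abstract describes for the second method.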

  8. Analysis Method for Laterally Loaded Pile Groups Using an Advanced Modeling of Reinforced Concrete Sections

    PubMed Central

    2018-01-01

    A Boundary Element Method (BEM) approach was developed for the analysis of pile groups. The proposed method includes: the non-linear behavior of the soil, via a hyperbolic modulus reduction curve; the non-linear response of reinforced concrete pile sections, also taking into account the influence of tension stiffening; the influence of suction, by increasing the stiffness of shallow portions of soil, modeled using the Modified Kovacs model; and the pile group shadowing effect, modeled using an approach similar to that proposed in the Strain Wedge Model for pile group analyses. The proposed BEM method saves computational effort compared with more sophisticated codes such as VERSAT-P3D, PLAXIS 3D and FLAC-3D, and provides reliable results using input data from a standard site investigation. The reliability of this method was verified by comparing computed results with data from full-scale and centrifuge tests on single piles and pile groups. A comparison is presented between measured and computed data for a laterally loaded fixed-head pile group composed of reinforced concrete bored piles. The results of the proposed method are shown to be in good agreement with those obtained in situ. PMID:29462857

  9. Locally refined block-centred finite-difference groundwater models: Evaluation of parameter sensitivity and the consequences for inverse modelling

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are: (a) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed, and (b) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods, and the effect of the accuracy of the sensitivity calculations is evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.

  10. Locally refined block-centered finite-difference groundwater models: Evaluation of parameter sensitivity and the consequences for inverse modelling and predictions

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are (1) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed and (2) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods, and the effect of the accuracy of the sensitivity calculations is evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.

  11. Comment on: "Split kinetic energy method for quantum systems with competing potentials", Ann. Phys. 327 (2012) 2061

    NASA Astrophysics Data System (ADS)

    Fernández, Francisco M.

    2018-06-01

    We show that the kinetic-energy partition method (KEP) is a particular example of the well-known Rayleigh-Ritz variational method. We discuss some of the KEP results and compare them with those coming from other approaches.

  12. Rendering the "Not-So-Simple" Pendulum Experimentally Accessible.

    ERIC Educational Resources Information Center

    Jackson, David P.

    1996-01-01

    Presents three methods for obtaining experimental data related to acceleration of a simple pendulum. Two of the methods involve angular position measurements and the subsequent calculation of the acceleration while the third method involves a direct measurement of the acceleration. Compares these results with theoretical calculations and…

  13. Comparative studies of copy number variation detection methods for next-generation sequencing technologies.

    PubMed

    Duan, Junbo; Zhang, Ji-Gang; Deng, Hong-Wen; Wang, Yu-Ping

    2013-01-01

    Copy number variation (CNV) has played an important role in studies of susceptibility or resistance to complex diseases. Traditional methods such as fluorescence in situ hybridization (FISH) and array comparative genomic hybridization (aCGH) suffer from low resolution of genomic regions. Following the emergence of next-generation sequencing (NGS) technologies, CNV detection methods based on short-read data have recently been developed. However, because these procedures are relatively young, their performance is not fully understood. To help investigators choose suitable methods to detect CNVs, comparative studies are needed. We compared six publicly available CNV detection methods: CNV-seq, FREEC, readDepth, CNVnator, SegSeq and event-wise testing (EWT). They were evaluated on both simulated and real data under different experimental settings. The receiver operating characteristic (ROC) curve is used to demonstrate detection performance in terms of sensitivity and specificity, box plots to compare performance in breakpoint and copy number estimation, Venn diagrams to show the consistency among the methods, and the F-score to assess the overlap quality of the detected CNVs. The computational demands are also studied. The results of our work provide a comprehensive evaluation of the performance of the selected CNV detection methods, which will help biological investigators choose the best possible method.
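
    As a concrete picture of the read-depth strategy behind several of these tools, here is a minimal sketch in the spirit of event-wise testing: binned depths are converted to z-scores and runs of extreme windows are flagged. The data, window scheme, and thresholds are invented for illustration and do not reproduce any of the six published pipelines.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_windows, mean_depth = 2000, 100.0
    depth = rng.poisson(mean_depth, n_windows).astype(float)
    depth[800:850] *= 1.5     # simulated duplication (copy number 3)
    depth[1500:1540] *= 0.5   # simulated heterozygous deletion

    z = (depth - depth.mean()) / depth.std()   # genome-wide z-score per window

    def call_events(z, thr=2.5, min_run=5):
        """Return (start, end, kind) for runs of >= min_run extreme windows."""
        events, start = [], None
        for i, zi in enumerate(z):
            if abs(zi) >= thr:
                if start is None:
                    start = i
            else:
                if start is not None and i - start >= min_run:
                    events.append((start, i, "gain" if z[start] > 0 else "loss"))
                start = None
        if start is not None and len(z) - start >= min_run:   # flush a final run
            events.append((start, len(z), "gain" if z[start] > 0 else "loss"))
        return events

    for s, e, kind in call_events(z):
        print(f"{kind} in windows {s}-{e}")
    ```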

  14. Comparison of accuracies of an intraoral spectrophotometer and conventional visual method for shade matching using two shade guide systems

    PubMed Central

    Parameswaran, Vidhya; Anilkumar, S.; Lylajam, S.; Rajesh, C.; Narayan, Vivek

    2016-01-01

    Background and Objectives: This in vitro study compared the shade matching abilities of an intraoral spectrophotometer and the conventional visual method using two shade guides. The results of previous investigations comparing color perceived by human observers and color assessed by instruments have been inconclusive. The objectives were to determine the accuracies and interrater agreement of both methods and the effectiveness of the two shade guides with either method. Methods: In the visual method, 10 examiners with normal color vision matched target control shade tabs taken from the two shade guides (VITAPAN Classical™ and VITAPAN 3D Master™) with other full sets of the respective shade guides. Each tab was matched 3 times to determine the repeatability of the visual examiners. The spectrophotometric shade matching was performed by two independent examiners using an intraoral spectrophotometer (VITA Easyshade™) with five repetitions for each tab. Results: The visual method had greater accuracy than the spectrophotometer. The spectrophotometer, however, exhibited significantly better interrater agreement than the visual method. While the VITAPAN Classical shade guide was more accurate with the spectrophotometer, the VITAPAN 3D Master shade guide proved better with the visual method. Conclusion: This in vitro study clearly delineates the advantages and limitations of both methods. There were significant differences between the methods, with the visual method producing more accurate results than the spectrophotometric method. The spectrophotometer showed far better interrater agreement scores irrespective of the shade guide used. Even though visual shade matching is subjective, it is not inferior and should not be underrated. A judicious combination of both techniques is imperative to attain a successful and esthetic outcome. PMID:27746599

  15. Role of endocortical contouring methods on precision of HR-pQCT-derived cortical micro-architecture in postmenopausal women and young adults.

    PubMed

    Kawalilak, C E; Johnston, J D; Cooper, D M L; Olszynski, W P; Kontulainen, S A

    2016-02-01

    Precision errors of cortical bone micro-architecture from high-resolution peripheral quantitative computed tomography (HR-pQCT) ranged from 1 to 16 % and did not differ between automatic and manually modified endocortical contour methods in postmenopausal women or young adults. In postmenopausal women, manually modified contours led to generally higher cortical bone properties when compared with the automated method. The first objective of the study was to define in vivo precision errors (coefficient of variation root mean square (CV%RMS)) and least significant change (LSC) for cortical bone micro-architecture using two endocortical contouring methods, automatic (AUTO) and manually modified (MOD), in two groups (postmenopausal women and young adults) from HR-pQCT scans. The second was to compare precision errors and bone outcomes obtained with both methods within and between groups. Using HR-pQCT, we scanned the distal radius and tibia of 34 postmenopausal women (mean age ± SD 74 ± 7 years) and 30 young adults (27 ± 9 years) twice. Cortical micro-architecture was determined using the AUTO and MOD contour methods. CV%RMS and LSC were calculated. Repeated measures and multivariate ANOVA were used to compare mean CV% and bone outcomes between the methods within and between the groups. Significance was accepted at P < 0.05. CV%RMS ranged from 0.9 to 16.3 %. Within-group precision did not differ between evaluation methods. Compared with young adults, postmenopausal women had better precision for radial cortical porosity (precision difference 9.3 %) and pore volume (7.5 %) with MOD. Young adults had better precision for cortical thickness (0.8 %, MOD) and tibial cortical density (0.2 %, AUTO). In postmenopausal women, MOD resulted in 0.2-54 % higher values for most cortical outcomes, as well as 6-8 % lower radial and tibial cortical BMD and 2 % lower tibial cortical thickness. Results suggest that the AUTO and MOD endocortical contour methods provide comparable repeatability. In postmenopausal women, manual modification of endocortical contours led to generally higher cortical bone properties when compared with the automated method, while no between-method differences were observed in young adults.
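
    A minimal sketch of how such short-term precision figures are commonly derived from duplicate scans, assuming the standard densitometry convention of a root-mean-square CV and LSC = 2.77 × CV%RMS; the data below are simulated, not from this study.

    ```python
    import numpy as np

    # Short-term precision from duplicate scans (assumed convention): CV per
    # participant from two repeat measurements, root-mean-square average across
    # the group, and least significant change LSC = 2.77 * CV%RMS.

    rng = np.random.default_rng(1)
    true_vals = rng.normal(1.0, 0.15, 34)              # e.g., cortical thickness, mm
    scan1 = true_vals * (1 + rng.normal(0, 0.02, 34))  # ~2% measurement noise
    scan2 = true_vals * (1 + rng.normal(0, 0.02, 34))

    pair = np.stack([scan1, scan2])          # shape (2, n_participants)
    sd_i = pair.std(axis=0, ddof=1)          # per-participant SD of the repeats
    cv_i = 100.0 * sd_i / pair.mean(axis=0)  # per-participant CV (%)

    cv_rms = np.sqrt(np.mean(cv_i ** 2))     # CV%RMS across the group
    lsc = 2.77 * cv_rms                      # least significant change (%)
    print(f"CV%RMS = {cv_rms:.2f}%, LSC = {lsc:.2f}%")
    ```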

  16. Parameter estimation using weighted total least squares in the two-compartment exchange model.

    PubMed

    Garpebring, Anders; Löfstedt, Tommy

    2018-01-01

    The linear least squares (LLS) estimator provides a fast approach to parameter estimation in the linearized two-compartment exchange model. However, the LLS method may introduce a bias through correlated noise in the system matrix of the model. The purpose of this work is to present a new estimator for the linearized two-compartment exchange model that takes this noise into account. To account for the noise in the system matrix, we developed an estimator based on the weighted total least squares (WTLS) method. Using simulations, the proposed WTLS estimator was compared, in terms of accuracy and precision, with an LLS estimator and a nonlinear least squares (NLLS) estimator. The WTLS method improved the accuracy relative to the LLS method to levels comparable to the NLLS method. This improvement came at the expense of increased computational time; however, the WTLS method was still faster than the NLLS method. At high signal-to-noise ratio, all methods provided similar precision, while inconclusive results were observed at low signal-to-noise ratio. The proposed method provides improvements in accuracy compared with the LLS method, though at an increased computational cost. Magn Reson Med 79:561-567, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
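
    The bias mechanism is easy to reproduce with plain (unweighted) total least squares, the simplest member of the family the WTLS estimator builds on. The sketch below is a generic errors-in-variables illustration, not the linearized two-compartment model: ordinary least squares is attenuated by noise in the regressor matrix, while a TLS solution via the SVD recovers the true coefficient when the regressor and observation noise levels are equal.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, true_beta = 5000, 2.0
    a_clean = rng.normal(0, 1, n)
    A = (a_clean + rng.normal(0, 0.5, n))[:, None]   # noisy "system matrix"
    b = true_beta * a_clean + rng.normal(0, 0.5, n)  # noisy observations

    beta_ols = np.linalg.lstsq(A, b, rcond=None)[0][0]   # biased toward zero

    # TLS: the smallest right singular vector of the augmented matrix [A | b]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]), full_matrices=False)
    v = Vt[-1]
    beta_tls = (-v[:-1] / v[-1])[0]

    print(f"true = {true_beta}, OLS = {beta_ols:.3f}, TLS = {beta_tls:.3f}")
    ```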

  17. Nonequilibrium radiative heating prediction method for aeroassist flowfields with coupling to flowfield solvers. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Hartung, Lin C.

    1991-01-01

    A method for predicting radiation absorption and emission coefficients in thermochemical nonequilibrium flows is developed. The method is called the Langley optimized radiative nonequilibrium code (LORAN). It applies the smeared-band approximation for molecular radiation to produce moderately detailed results and is intended to fill the gap between detailed but costly prediction methods and very fast but highly approximate methods. The optimization of the method to provide efficient solutions allowing coupling to flowfield solvers is discussed. Representative results are obtained and compared with previous nonequilibrium radiation methods, as well as with ground- and flight-measured data. Reasonable agreement is found in all cases. A multidimensional radiative transport method is also developed for axisymmetric flows. Its predictions for wall radiative flux are 20 to 25 percent lower than those of the tangent slab transport method, as expected, though additional investigation of the symmetry and outflow boundary conditions is indicated. The method was applied to the peak heating condition of the aeroassist flight experiment (AFE) trajectory, with results comparable to predictions from other methods. The LORAN method was also applied in conjunction with the computational fluid dynamics (CFD) code LAURA to study the sensitivity of the radiative heating prediction to various models used in nonequilibrium CFD. This study suggests that radiation measurements can provide diagnostic information about the detailed processes occurring in a nonequilibrium flowfield because radiation phenomena are very sensitive to these processes.

  18. Comparison of preprocessing methods and storage times for touch DNA samples

    PubMed Central

    Dong, Hui; Wang, Jing; Zhang, Tao; Ge, Jian-ye; Dong, Ying-qiang; Sun, Qi-fan; Liu, Chao; Li, Cai-xia

    2017-01-01

    Aim To select appropriate preprocessing methods for different substrates by comparing the effects of four different preprocessing methods on touch DNA samples and to determine the effect of various storage times on the results of touch DNA sample analysis. Method Hand touch DNA samples were used to investigate the detection and inspection results of DNA on different substrates. Four preprocessing methods, including the direct cutting method, stubbing procedure, double swab technique, and vacuum cleaner method, were used in this study. DNA was extracted from mock samples with the four preprocessing methods. The best preprocessing protocol determined from the study was further used to compare performance after various storage times. DNA extracted from all samples was quantified and amplified using standard procedures. Results The amounts of DNA and the numbers of alleles detected on the porous substrates were greater than those on the non-porous substrates. The performance of the four preprocessing methods varied with different substrates. The direct cutting method displayed advantages for porous substrates, and the vacuum cleaner method was advantageous for non-porous substrates. No significant degradation trend was observed as the storage times increased. Conclusion Different substrates require the use of different preprocessing methods in order to obtain the highest DNA amount and allele number from touch DNA samples. This study provides a theoretical basis for exploration of touch DNA samples and may be used as a reference when dealing with touch DNA samples in casework. PMID:28252870

  19. Validity of a Manual Soft Tissue Profile Prediction Method Following Mandibular Setback Osteotomy

    PubMed Central

    Kolokitha, Olga-Elpis

    2007-01-01

    Objectives The aim of this study was to determine the validity of a manual cephalometric method used for predicting the post-operative soft tissue profiles of patients who underwent mandibular setback surgery and to compare it with a computerized cephalometric prediction method (Dentofacial Planner). Lateral cephalograms of 18 adults with mandibular prognathism, taken at the end of pre-surgical orthodontics and approximately one year after surgery, were used. Methods To test the validity of the manual method, the prediction tracings were compared with the actual post-operative tracings. The Dentofacial Planner software was used to develop the computerized post-surgical prediction tracings. Both manual and computerized prediction printouts were analyzed using the cephalometric system PORDIOS. Statistical analysis was performed by means of t-test. Results Comparison between the manual prediction tracings and the actual post-operative profiles showed that the manual method results in more convex soft tissue profiles; the upper lip was found in a more prominent position, upper lip thickness was increased, and the mandible and lower lip were found in a less posterior position than in the actual profiles. Comparison between the computerized and manual prediction methods showed that in the manual method upper lip thickness was increased, the upper lip was found in a more anterior position, and the lower anterior facial height was increased compared with the computerized prediction method. Conclusions Cephalometric simulation of the post-operative soft tissue profile following orthodontic-surgical management of mandibular prognathism imposes certain limitations related to the methods employed. However, both manual and computerized prediction methods remain a useful tool for patient communication. PMID:19212468

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boroun, G. R., E-mail: grboroun@gmail.com, E-mail: boroun@razi.ac.ir; Zarrin, S.; Dadfar, S.

    We evaluate the non-singlet spin-dependent structure function g1^NS at leading order (LO) and next-to-leading order (NLO) by using the Laplace-transform technique and the method of characteristics, and also obtain its first moment at NLO. The polarized non-singlet structure function results are compared with the data from HERMES (A. Airapetian et al., Phys. Rev. D 75, 012007 (2007)) and E143 (K. Abe et al. (E143 Collab.), Phys. Rev. D 58, 112003 (1998)) in LO and NLO analyses, and the first-moment result at NLO is compared with the result of the NLO GRSV2000 fit. Considering the solution, this method is valid in the low- and large-x regions.

  1. Removal of infused water predominantly during insertion (water exchange) is consistently associated with a greater reduction of pain score - review of randomized controlled trials (RCTs) of water method colonoscopy

    PubMed Central

    Harker, JO; Leung, JW; Siao-Salera, RM; Mann, SK; Ramirez, FC; Friedland, S; Amato, A; Radaelli, F; Paggi, S; Terruzzi, V; Hsieh, YH

    2011-01-01

    Introduction Variation in the outcomes of RCTs comparing water-related methods and air insufflation during the insertion phase of colonoscopy raises challenging questions regarding the approach. This report reviews the impact of water exchange on the variation in attenuation of pain during colonoscopy by water-related methods. Methods Medline (2008 to 2011) searches, abstracts of the 2011 Digestive Disease Week (DDW) and personal communications were considered to identify RCTs that compared water-related methods and air insufflation to aid insertion of the colonoscope. Results Since 2008, nine published and one submitted RCTs and five abstracts of RCTs presented at the 2011 DDW have been identified. Thirteen RCTs (nine published, one submitted and one abstract, n=1850) described reduction of pain score during or after colonoscopy (eleven reported statistical significance); the remaining reports described lower doses of medication used, or a lower proportion of patients experiencing severe pain, in colonoscopy performed with water-related methods compared with air insufflation (Tables 1 and 2). The water-related methods notably differ in the timing of removal of the infused water - predominantly during insertion (water exchange) versus predominantly during withdrawal (water immersion). Use of water exchange was consistently associated with a greater attenuation of pain score in patients who did not receive full sedation (Table 3). Conclusion The comparative data reveal that a greater attenuation of pain was associated with water exchange than with water immersion during insertion. These intriguing results should be subjected to further evaluation by additional RCTs to elucidate the mechanism of the pain-alleviating impact of the water method. PMID:22163081

  2. Comparison of ASE and SFE with Soxhlet, Sonication, and Methanolic Saponification Extractions for the Determination of Organic Micropollutants in Marine Particulate Matter.

    PubMed

    Heemken, O P; Theobald, N; Wenclawiak, B W

    1997-06-01

    The methods of accelerated solvent extraction (ASE) and supercritical fluid extraction (SFE) of polycyclic aromatic hydrocarbons (PAHs), aliphatic hydrocarbons, and chlorinated hydrocarbons from marine samples were investigated. The results of extractions of a certified sediment and four samples of suspended particulate matter (SPM) were compared with classical Soxhlet (SOX), ultrasonication (USE), and methanolic saponification extraction (MSE) methods. The recovery data, including precision and systematic deviations of each method, were evaluated statistically. It was found that the recoveries and precision of ASE and SFE compared well with the other methods investigated. Using SFE, the average recoveries of PAHs in three different samples ranged from 96 to 105%; for ASE, the recoveries were in the range of 97-108% compared with the reference methods. Compared with the certified values of sediment HS-6, the average recoveries of SFE and ASE were 87 and 88%, with most compounds being within the confidence limits. Also, for alkanes the average recoveries by SFE and ASE were equal to the results obtained by SOX, USE, and MSE. In the case of SFE, the recoveries were in the range 93-115%, and ASE achieved recoveries of 94-107% as compared with the other methods. For ASE and SFE, the influence of water on the extraction efficiency was examined. While the natural water content of the SPM sample (56 wt %) led to insufficient recoveries in ASE and SFE, quantitative extractions were achieved in SFE after addition of anhydrous sodium sulfate to the sample. Finally, ASE was applied to SPM-loaded filter candles, whereby a mixture of n-hexane/acetone as extraction solvent allowed the simultaneous determination of PAHs, alkanes, and chlorinated hydrocarbons.

  3. Comparisons of Lagrangian and Eulerian PDF methods in simulations of non-premixed turbulent jet flames with moderate-to-strong turbulence-chemistry interactions

    NASA Astrophysics Data System (ADS)

    Jaishree, J.; Haworth, D. C.

    2012-06-01

    Transported probability density function (PDF) methods have been applied widely and effectively for modelling turbulent reacting flows. In most applications of PDF methods to date, Lagrangian particle Monte Carlo algorithms have been used to solve a modelled PDF transport equation. However, Lagrangian particle PDF methods are computationally intensive and are not readily integrated into conventional Eulerian computational fluid dynamics (CFD) codes. Eulerian field PDF methods have been proposed as an alternative. Here a systematic comparison is performed among three methods for solving the same underlying modelled composition PDF transport equation: a consistent hybrid Lagrangian particle/Eulerian mesh (LPEM) method, a stochastic Eulerian field (SEF) method and a deterministic Eulerian field method with a direct-quadrature-method-of-moments closure (a multi-environment PDF-MEPDF method). The comparisons have been made in simulations of a series of three non-premixed, piloted methane-air turbulent jet flames that exhibit progressively increasing levels of local extinction and turbulence-chemistry interactions: Sandia/TUD flames D, E and F. The three PDF methods have been implemented using the same underlying CFD solver, and results obtained using the three methods have been compared using (to the extent possible) equivalent physical models and numerical parameters. Reasonably converged mean and rms scalar profiles are obtained using 40 particles per cell for the LPEM method or 40 Eulerian fields for the SEF method. Results from these stochastic methods are compared with results obtained using two- and three-environment MEPDF methods. The relative advantages and disadvantages of each method in terms of accuracy and computational requirements are explored and identified. In general, the results obtained from the two stochastic methods (LPEM and SEF) are very similar, and are in closer agreement with experimental measurements than those obtained using the MEPDF method, while MEPDF is the most computationally efficient of the three methods. These and other findings are discussed in detail.

  4. The Importance of Comparative Law in Legal Education: United States Goals and Methods of Legal Comparison

    ERIC Educational Resources Information Center

    Ault, Hugh J.; Glendon, Mary Ann

    1976-01-01

    Discusses the rationale for teaching comparative law and describes techniques and results of experiments with two kinds of courses at Boston College Law School: (1) Comparative Legal Analysis, a perspective course, and (2) integration of comparative law as another dimension into courses in a particular subject matter area. (JT)

  5. Network Flow Simulation of Fluid Transients in Rocket Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Bandyopadhyay, Alak; Hamill, Brian; Ramachandran, Narayanan; Majumdar, Alok

    2011-01-01

    Fluid transients, also known as water hammer, can have a significant impact on the design and operation of both spacecraft and launch vehicle propulsion systems. These transients often occur at system activation and shutdown. The pressure rise due to sudden opening and closing of valves of propulsion feed lines can cause serious damage during activation and shutdown of propulsion systems. During activation (valve opening) and shutdown (valve closing), pressure surges must be predicted accurately to ensure the structural integrity of the propulsion system fluid network. In the current work, a network flow simulation software (Generalized Fluid System Simulation Program) based on the Finite Volume Method has been used to predict the pressure surges in the feed line due to both valve closing and valve opening, using two separate geometrical configurations. The valve-opening pressure surge results are compared with experimental data available in the literature, and the numerical results agree within reasonable accuracy (<5%) for a wide range of inlet-to-initial pressure ratios. A Fast Fourier Transform is performed on the pressure oscillations to predict the various modal frequencies of the pressure wave. For the shutdown problem, i.e., the valve-closing problem, the simulation results are compared with the results of the Method of Characteristics. Most rocket engines experience a longitudinal acceleration, known as "pogo", during the later stage of engine burn. In the shutdown example problem, an accumulator has been used in the feed system to demonstrate the "pogo" mitigation effects in the propellant feed system. The simulation results using GFSSP compared very well with the results of the Method of Characteristics.
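
    The FFT post-processing step is straightforward to sketch. Below, a synthetic damped two-mode pressure trace stands in for the simulated transient; the 25 Hz and 75 Hz modes, amplitudes, damping, and sampling rate are invented for illustration and are not values from the GFSSP study.

    ```python
    import numpy as np

    fs = 1000.0                               # sampling rate, Hz
    t = np.arange(0, 2.0, 1.0 / fs)
    p = (1.0e5                                # mean line pressure, Pa
         + 4.0e4 * np.exp(-1.5 * t) * np.cos(2 * np.pi * 25.0 * t)
         + 1.0e4 * np.exp(-1.5 * t) * np.cos(2 * np.pi * 75.0 * t))

    spec = np.abs(np.fft.rfft(p - p.mean()))  # drop the DC component first
    freqs = np.fft.rfftfreq(p.size, d=1.0 / fs)

    # pick the two strongest local spectral maxima as the modal frequencies
    interior = spec[1:-1]
    is_peak = (interior > spec[:-2]) & (interior > spec[2:])
    peak_freqs, peak_amps = freqs[1:-1][is_peak], interior[is_peak]
    print("dominant modes (Hz):", np.sort(peak_freqs[np.argsort(peak_amps)[-2:]]))
    ```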

  6. Performance assessment of methods for estimation of fractal dimension from scanning electron microscope images.

    PubMed

    Risović, Dubravko; Pavlović, Zivko

    2013-01-01

    Processing of gray scale images in order to determine the corresponding fractal dimension is very important due to the widespread use of imaging technologies and the application of fractal analysis in many areas of science, technology, and medicine. To this end, many methods for estimation of fractal dimension from gray scale images have been developed and are routinely used. Unfortunately, different methods (dimension estimators) often yield significantly different results, in a manner that makes interpretation difficult. Here, we report the results of a comparative assessment of the performance of several of the most frequently used algorithms/methods for estimation of fractal dimension. To that purpose, we used scanning electron microscope images of aluminum oxide surfaces with different fractal dimensions. The performance of the algorithms/methods was evaluated using the statistical Z-score approach. The differences between the performances of the six methods are discussed and further compared with results obtained by electrochemical impedance spectroscopy (EIS) on the same samples. The analysis shows that the performance of the investigated algorithms varies considerably and that systematically erroneous fractal dimensions can be estimated using certain methods. The differential cube counting, triangulation, and box counting algorithms showed satisfactory performance over the whole investigated range of fractal dimensions. The difference statistic proved less reliable, generating 4% unsatisfactory results. The performances of the power spectrum, partitioning, and EIS methods were unsatisfactory in 29%, 38%, and 75% of estimations, respectively. The results of this study should be useful and provide guidelines to researchers attempting fractal analysis of images obtained by scanning microscopy or atomic force microscopy. © Wiley Periodicals, Inc.
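
    For concreteness, a minimal box-counting estimator (one of the algorithms that performed well above) can be written in a few lines. The sketch below validates itself on a Sierpinski carpet, whose dimension log 8 / log 3 is known, rather than on SEM data; real use would first binarize an SEM intensity or height map.

    ```python
    import numpy as np

    def box_count(img, s):
        """Number of s-by-s boxes containing at least one foreground pixel."""
        h, w = img.shape
        hh, ww = h - h % s, w - w % s                 # trim to a multiple of s
        blocks = img[:hh, :ww].reshape(hh // s, s, ww // s, s)
        return np.count_nonzero(blocks.any(axis=(1, 3)))

    def carpet(level):
        """Sierpinski carpet test image (known D = log 8 / log 3 ~ 1.893)."""
        img = np.ones((1, 1), dtype=bool)
        for _ in range(level):
            z = np.zeros_like(img)
            img = np.block([[img, img, img],
                            [img, z,   img],
                            [img, img, img]])
        return img

    img = carpet(5)                                   # 243 x 243 pixels
    sizes = np.array([1, 3, 9, 27, 81])
    counts = np.array([box_count(img, s) for s in sizes])
    D = -np.polyfit(np.log(sizes), np.log(counts), 1)[0]   # slope of log N(s)
    print(f"estimated D = {D:.3f} (theory: {np.log(8) / np.log(3):.3f})")
    ```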

  7. A comparative study of progressive versus successive spectrophotometric resolution techniques applied for pharmaceutical ternary mixtures

    NASA Astrophysics Data System (ADS)

    Saleh, Sarah S.; Lotfy, Hayam M.; Hassan, Nagiba Y.; Salem, Hesham

    2014-11-01

    This work presents a comparative study of a novel progressive spectrophotometric resolution technique, namely the amplitude center method (ACM), versus the well-established successive spectrophotometric resolution techniques, namely successive derivative subtraction (SDS), successive derivative of ratio spectra (SDR) and mean centering of ratio spectra (MCR). All the proposed spectrophotometric techniques consist of several consecutive steps utilizing ratio and/or derivative spectra. The novel amplitude center method (ACM) can be used for the determination of ternary mixtures using a single divisor, where the concentrations of the components are determined through progressive manipulation performed on the same ratio spectrum. These methods were applied to the analysis of the ternary mixture of chloramphenicol (CHL), dexamethasone sodium phosphate (DXM) and tetryzoline hydrochloride (TZH) in eye drops in the presence of benzalkonium chloride as a preservative. The proposed methods were checked using laboratory-prepared mixtures and were successfully applied to the analysis of a pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines. A comparative study was conducted between the methods regarding simplicity, limitations and sensitivity. The obtained results were statistically compared with those obtained from the official BP methods, showing no significant difference with respect to accuracy and precision.

  8. A Bivariate Chebyshev Spectral Collocation Quasilinearization Method for Nonlinear Evolution Parabolic Equations

    PubMed Central

    Motsa, S. S.; Magagula, V. M.; Sibanda, P.

    2014-01-01

    This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, highly nonlinear modified KdV equation, Fisher's equation, Burgers-Fisher equation, Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from literature to confirm accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables were generated to present the order of accuracy of the method; convergence graphs to verify convergence of the method and error graphs are presented to show the excellent agreement between the results from this study and the known results from literature. PMID:25254252
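
    As a sketch of the spectral building block named here, the snippet below constructs the standard Chebyshev-Gauss-Lobatto differentiation matrix (Trefethen's construction) and checks it on the linear two-point boundary value problem u'' = e^x, u(±1) = 0, whose exact solution is u = e^x - x sinh(1) - cosh(1). The paper's full scheme additionally couples quasilinearisation and bivariate Lagrange interpolation for nonlinear evolution PDEs; this shows only the one-dimensional ingredient.

    ```python
    import numpy as np

    def cheb(N):
        """Chebyshev differentiation matrix D and nodes x (N + 1 points)."""
        x = np.cos(np.pi * np.arange(N + 1) / N)
        c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
        X = np.tile(x, (N + 1, 1)).T
        dX = X - X.T
        D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
        D -= np.diag(D.sum(axis=1))        # negative-sum trick for the diagonal
        return D, x

    N = 16
    D, x = cheb(N)
    D2 = D @ D                              # second-derivative matrix

    # impose u(+/-1) = 0 by solving on the interior nodes only
    u = np.zeros(N + 1)
    u[1:N] = np.linalg.solve(D2[1:N, 1:N], np.exp(x[1:N]))

    exact = np.exp(x) - x * np.sinh(1.0) - np.cosh(1.0)
    print("max error:", np.abs(u - exact).max())   # spectral (machine-level) accuracy
    ```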

  9. A bivariate Chebyshev spectral collocation quasilinearization method for nonlinear evolution parabolic equations.

    PubMed

    Motsa, S S; Magagula, V M; Sibanda, P

    2014-01-01

    This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, highly nonlinear modified KdV equation, Fisher's equation, Burgers-Fisher equation, Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from literature to confirm accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables were generated to present the order of accuracy of the method; convergence graphs to verify convergence of the method and error graphs are presented to show the excellent agreement between the results from this study and the known results from literature.

  10. Double propensity-score adjustment: A solution to design bias or bias due to incomplete matching.

    PubMed

    Austin, Peter C

    2017-02-01

    Propensity-score matching is frequently used to reduce the effects of confounding when using observational data to estimate the effects of treatments. Matching allows one to estimate the average effect of treatment in the treated. Rosenbaum and Rubin coined the term "bias due to incomplete matching" to describe the bias that can occur when some treated subjects are excluded from the matched sample because no appropriate control subject was available. The presence of incomplete matching raises important questions around the generalizability of estimated treatment effects to the entire population of treated subjects. We describe an analytic solution to address the bias due to incomplete matching. Our method is based on using optimal or nearest neighbor matching, rather than caliper matching (which frequently results in the exclusion of some treated subjects). Within the sample matched on the propensity score, covariate adjustment using the propensity score is then employed to impute missing potential outcomes under lack of treatment for each treated subject. Using Monte Carlo simulations, we found that the proposed method resulted in estimates of treatment effect that were essentially unbiased. This method resulted in decreased bias compared to caliper matching alone and compared to either optimal matching or nearest neighbor matching alone. Caliper matching alone resulted in design bias or bias due to incomplete matching, while optimal matching or nearest neighbor matching alone resulted in bias due to residual confounding. The proposed method also tended to result in estimates with decreased mean squared error compared to when caliper matching was used.
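
    The two-stage idea can be sketched in a few lines. The simulation below is our own minimal rendering, not the paper's code: treated subjects are matched one-to-one on the estimated propensity score without a caliper, so none are excluded, and a regression of matched-control outcomes on the propensity-score linear predictor imputes each treated subject's untreated outcome. Matching here is with replacement for brevity, whereas the paper uses optimal or nearest-neighbour matching without replacement.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression

    rng = np.random.default_rng(3)
    n = 4000
    x = rng.normal(0, 1, (n, 3))
    logit_p = 0.5 * x[:, 0] + 0.5 * x[:, 1] - 0.3 * x[:, 2]
    z = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))        # treatment indicator
    y = x.sum(axis=1) + 1.0 * z + rng.normal(0, 1, n)      # true ATT = 1.0

    ps = LogisticRegression().fit(x, z).predict_proba(x)[:, 1]
    lp = np.log(ps / (1 - ps))                             # PS linear predictor
    treated, controls = np.where(z == 1)[0], np.where(z == 0)[0]

    # 1-NN matching on the PS, no caliper (with replacement, for brevity)
    match = controls[np.abs(lp[controls][None, :]
                            - lp[treated][:, None]).argmin(axis=1)]

    # covariate adjustment within the matched sample: regress matched-control
    # outcomes on the PS linear predictor, then impute Y(0) for each treated
    reg = LinearRegression().fit(lp[match][:, None], y[match])
    y0_hat = reg.predict(lp[treated][:, None])
    print("ATT estimate:", round(float((y[treated] - y0_hat).mean()), 3))  # ~1.0
    ```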

  11. Double propensity-score adjustment: A solution to design bias or bias due to incomplete matching

    PubMed Central

    2016-01-01

    Propensity-score matching is frequently used to reduce the effects of confounding when using observational data to estimate the effects of treatments. Matching allows one to estimate the average effect of treatment in the treated. Rosenbaum and Rubin coined the term “bias due to incomplete matching” to describe the bias that can occur when some treated subjects are excluded from the matched sample because no appropriate control subject was available. The presence of incomplete matching raises important questions around the generalizability of estimated treatment effects to the entire population of treated subjects. We describe an analytic solution to address the bias due to incomplete matching. Our method is based on using optimal or nearest neighbor matching, rather than caliper matching (which frequently results in the exclusion of some treated subjects). Within the sample matched on the propensity score, covariate adjustment using the propensity score is then employed to impute missing potential outcomes under lack of treatment for each treated subject. Using Monte Carlo simulations, we found that the proposed method resulted in estimates of treatment effect that were essentially unbiased. This method resulted in decreased bias compared to caliper matching alone and compared to either optimal matching or nearest neighbor matching alone. Caliper matching alone resulted in design bias or bias due to incomplete matching, while optimal matching or nearest neighbor matching alone resulted in bias due to residual confounding. The proposed method also tended to result in estimates with decreased mean squared error compared to when caliper matching was used. PMID:25038071

  12. Comparison of transferrin isoform analysis by capillary electrophoresis and HPLC for screening congenital disorders of glycosylation.

    PubMed

    Dave, Mihika B; Dherai, Alpa J; Udani, Vrajesh P; Hegde, Anaita U; Desai, Neelu A; Ashavaid, Tester F

    2018-01-01

    Transferrin, a major glycoprotein, has different isoforms depending on the number of sialic acid residues present on its oligosaccharide chain. Genetic variants of transferrin, as well as primary (CDG) and secondary glycosylation defects, lead to an altered transferrin pattern. Isoform analysis methods are based on charge/mass variations. We aimed to compare the performance of a commercially available capillary electrophoresis (CE) CDT kit for diagnosing congenital disorders of glycosylation with our in-house optimized HPLC method for transferrin isoform analysis. The isoform pattern of 30 healthy controls and 50 CDG-suspected patients was determined by CE using a Carbohydrate-Deficient Transferrin kit. The results were compared with the in-house HPLC-based assay for transferrin isoforms. The transferrin isoform pattern of healthy individuals showed a predominant tetrasialotransferrin fraction followed by pentasialo-, trisialo-, and disialotransferrin. Two of the 50 CDG-suspected patients showed the presence of asialylated isoforms. The results were comparable with the isoform pattern obtained by HPLC. The commercial controls showed a CV of <20% for each isoform. The Bland-Altman plot showed the differences to lie within ±1.96 SD, with no systematic bias between the HPLC and CE results. The CE method is rapid, reproducible and comparable with HPLC and can be used for screening glycosylation defects. © 2017 Wiley Periodicals, Inc.
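
    The agreement analysis mentioned here is a standard Bland-Altman computation, sketched below on invented paired percentages (not the study's measurements): the bias is the mean of the pairwise differences, and the 95% limits of agreement sit at bias ± 1.96 SD.

    ```python
    import numpy as np

    # invented paired tetrasialotransferrin fractions (%) from the two assays
    hplc = np.array([78.1, 80.4, 76.9, 82.0, 79.5, 81.2, 77.8, 80.0])
    ce = np.array([77.5, 81.0, 76.2, 82.6, 79.0, 80.7, 78.4, 79.6])

    diff = ce - hplc
    bias = diff.mean()                      # systematic offset between methods
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
    print(f"bias = {bias:+.2f}, limits of agreement = ({loa[0]:+.2f}, {loa[1]:+.2f})")
    ```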

  13. The effectiveness of digital microscopy as a teaching tool in medical laboratory science curriculum.

    PubMed

    Castillo, Demetra

    2012-01-01

    A fundamental component of the practice of Medical Laboratory Science (MLS) is the microscope. While traditional microscopy (TM) is the gold standard, the high cost of maintenance has led to an increased demand for alternative methods, such as digital microscopy (DM). Slides embedded with blood specimens are converted into a digital form that can be run with computer-driven software. The aim of this study was to investigate the effectiveness of digital microscopy as a teaching tool in the field of Medical Laboratory Science. Participants reviewed known study slides using both traditional and digital microscopy methods and were assessed using both methods. Participants were randomly divided into two groups. Group 1 performed TM as the primary method and DM as the alternate. Group 2 performed DM as the primary and TM as the alternate. Participants performed differentials with their primary method, were assessed with both methods, and then performed differentials with their alternate method. A detailed assessment rubric was created to determine the accuracy of student responses through comparison with clinical laboratory and instructor results. Student scores were reflected as a percentage correct from these methods. This assessment was done over two different classes. When comparing results between methods, independent of the primary method used, results were not statistically different. However, when comparing methods between groups, Group 1 (n = 11) (TM = 73.79% ± 9.19, DM = 81.43% ± 8.30; paired t10 = 0.182, p < 0.001) showed a significant difference from Group 2 (n = 14) (TM = 85.64% ± 5.30, DM = 85.91% ± 7.62; paired t13 = 3.647, p = 0.860). In the subsequent class, results between both groups (n = 13, n = 16, respectively) did not show any significant difference (Group 1 TM = 86.38% ± 8.17, Group 1 DM = 88.69% ± 3.86; paired t12 = 1.253, p = 0.234; Group 2 TM = 86.75% ± 5.37, Group 2 DM = 86.25% ± 7.01, paired t15 = 0.280, p = 0.784). The data suggest that DM is comparable to TM. DM could be used as an enhancement model after foundational information is provided using TM.

  14. GENOME-WIDE COMPARATIVE ANALYSIS OF PHYLOGENETIC TREES: THE PROKARYOTIC FOREST OF LIFE

    PubMed Central

    Puigbò, Pere; Wolf, Yuri I.; Koonin, Eugene V.

    2013-01-01

    Genome-wide comparison of phylogenetic trees is becoming an increasingly common approach in evolutionary genomics, and a variety of approaches for such comparison have been developed. In this article, we present several methods for comparative analysis of large numbers of phylogenetic trees. To compare phylogenetic trees taking into account the bootstrap support for each internal branch, the Boot-Split Distance (BSD) method is introduced as an extension of the previously developed Split Distance (SD) method for tree comparison. The BSD method implements the straightforward idea that comparison of phylogenetic trees can be made more robust by treating tree splits differentially depending on their bootstrap support. Approaches are also introduced for detecting tree-like and net-like evolutionary trends in the phylogenetic Forest of Life (FOL), i.e., the entirety of the phylogenetic trees for conserved genes of prokaryotes. The principal method employed for this purpose includes mapping quartets of species onto trees to calculate the support of each quartet topology and so to quantify the tree and net contributions to the distances between species. We describe the application of these methods to the analysis of the FOL and the results obtained with them. These results support the concept of the Tree of Life (TOL) as a central evolutionary trend in the FOL, as opposed to the traditional view of the TOL as a ‘species tree’. PMID:22399455
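
    To make the split-based comparison concrete, here is a minimal rendering of the idea (our reading, not the authors' implementation): each tree is reduced to its set of bipartitions with bootstrap supports, the plain split distance counts unshared splits, and one plausible boot-split weighting charges each disagreement by its support so that weakly supported conflicts cost less.

    ```python
    # Trees over a fixed taxon set, represented as {split: bootstrap_support}.
    TAXA = frozenset("ABCDEF")

    def canon(side):
        """Canonical side of a bipartition: the half not containing taxon 'A'."""
        side = frozenset(side)
        return side if "A" not in side else TAXA - side

    def boot_split_distance(tree1, tree2):
        """Return (plain split distance, support-weighted boot-split distance)."""
        t1 = {canon(s): b for s, b in tree1.items()}
        t2 = {canon(s): b for s, b in tree2.items()}
        all_splits = set(t1) | set(t2)
        sd = sum(1 for s in all_splits if (s in t1) != (s in t2))
        # weight each split by its support difference (0 support if absent)
        bsd = sum(abs(t1.get(s, 0.0) - t2.get(s, 0.0)) for s in all_splits)
        return sd, bsd

    treeA = {frozenset("AB"): 0.95, frozenset("ABC"): 0.60, frozenset("DE"): 0.90}
    treeB = {frozenset("AB"): 0.90, frozenset("ABD"): 0.40, frozenset("DE"): 0.85}
    # AB and DE are shared; the weakly supported ABC-vs-ABD conflict costs little
    print(boot_split_distance(treeA, treeB))
    ```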

  15. Genome-wide comparative analysis of phylogenetic trees: the prokaryotic forest of life.

    PubMed

    Puigbò, Pere; Wolf, Yuri I; Koonin, Eugene V

    2012-01-01

    Genome-wide comparison of phylogenetic trees is becoming an increasingly common approach in evolutionary genomics, and a variety of approaches for such comparison have been developed. In this article, we present several methods for comparative analysis of large numbers of phylogenetic trees. To compare phylogenetic trees taking into account the bootstrap support for each internal branch, the Boot-Split Distance (BSD) method is introduced as an extension of the previously developed Split Distance method for tree comparison. The BSD method implements the straightforward idea that comparison of phylogenetic trees can be made more robust by treating tree splits differentially depending on the bootstrap support. Approaches are also introduced for detecting tree-like and net-like evolutionary trends in the phylogenetic Forest of Life (FOL), i.e., the entirety of the phylogenetic trees for conserved genes of prokaryotes. The principal method employed for this purpose includes mapping quartets of species onto trees to calculate the support of each quartet topology and so to quantify the tree and net contributions to the distances between species. We describe the application of these methods to analyze the FOL and the results obtained with these methods. These results support the concept of the Tree of Life (TOL) as a central evolutionary trend in the FOL as opposed to the traditional view of the TOL as a "species tree."

  16. Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†

    PubMed Central

    Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia

    2015-01-01

    Meta‐analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between‐study variability, which is typically modelled using a between‐study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between‐study variance, has long been challenged. Our aim is to identify known methods for estimation of the between‐study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between‐study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and, for continuous data, the restricted maximum likelihood estimator are better alternatives for estimating the between‐study variance. Based on the scenarios and results presented in the published studies, we recommend the Q‐profile method and the alternative approach based on a ‘generalised Cochran between‐study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence‐based recommendations would require an extensive simulation study where all methods are compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
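
    Two of the estimators discussed, DerSimonian-Laird and Paule-Mandel, can be sketched directly from their definitions: DL has a closed form based on Cochran's Q, while PM chooses the between-study variance at which the generalised Q statistic equals its expectation k - 1. The study data below are invented for illustration.

    ```python
    import numpy as np

    y = np.array([0.30, 0.10, 0.45, 0.25, 0.60])   # study effect estimates (made up)
    v = np.array([0.02, 0.03, 0.01, 0.04, 0.02])   # within-study variances (made up)

    def tau2_dl(y, v):
        """DerSimonian-Laird: closed form from Cochran's Q."""
        w = 1.0 / v
        mu = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - mu) ** 2)
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        return max(0.0, (q - (len(y) - 1)) / c)

    def tau2_pm(y, v, tol=1e-10):
        """Paule-Mandel: solve Q_gen(tau2) = k - 1 by bisection."""
        def q_gen(t2):
            w = 1.0 / (v + t2)
            mu = np.sum(w * y) / np.sum(w)
            return np.sum(w * (y - mu) ** 2)
        if q_gen(0.0) <= len(y) - 1:
            return 0.0
        lo, hi = 0.0, 10.0                 # q_gen is decreasing in tau2
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if q_gen(mid) > len(y) - 1 else (lo, mid)
        return 0.5 * (lo + hi)

    print(f"DL tau^2 = {tau2_dl(y, v):.4f}, PM tau^2 = {tau2_pm(y, v):.4f}")
    ```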

  17. Fuzzy decision-making framework for treatment selection based on the combined QUALIFLEX-TODIM method

    NASA Astrophysics Data System (ADS)

    Ji, Pu; Zhang, Hong-yu; Wang, Jian-qiang

    2017-10-01

    Treatment selection is a multi-criteria decision-making problem of significant concern in the medical field. In this study, a fuzzy decision-making framework is established for treatment selection. The framework mitigates information loss by introducing single-valued trapezoidal neutrosophic numbers to denote evaluation information. Treatment selection involves multiple criteria that considerably outnumber the alternatives. In consideration of this characteristic, the framework utilises the idea of the qualitative flexible multiple criteria (QUALIFLEX) method. Furthermore, it accounts for the risk-averse behaviour of a decision maker by employing a concordance index based on the TODIM (an acronym in Portuguese for interactive and multi-criteria decision-making) method. A sensitivity analysis is performed to illustrate the robustness of the framework. Finally, a comparative analysis is conducted to compare the framework with several extant methods. Results indicate the advantages of the framework and its better performance compared with the extant methods.

  18. Cross-linked polyvinyl alcohol films as alkaline battery separators

    NASA Technical Reports Server (NTRS)

    Sheibley, D. W.; Manzo, M. A.; Gonzalez-Sanabria, O. D.

    1983-01-01

    Cross-linking methods have been investigated to determine their effect on the performance of polyvinyl alcohol (PVA) films as alkaline battery separators. The following types of cross-linked PVA films are discussed: (1) PVA-dialdehyde blends post-treated with an acid or acid periodate solution (two-step method) and (2) PVA-dialdehyde blends cross-linked during film formation (drying) by using a reagent with both aldehyde and acid functionality (one-step method). Laboratory samples of each cross-linked type of film were prepared and evaluated in standard separator screening tests. Then pilot-plant batches of films were prepared and compared to measure differences due to the cross-linking method. The pilot-plant materials were then tested in nickel oxide-zinc cells to compare the two methods with respect to performance characteristics and cycle life. Cell test results are compared with those from tests with Celgard.

  19. Cross-linked polyvinyl alcohol films as alkaline battery separators

    NASA Technical Reports Server (NTRS)

    Sheibley, D. W.; Manzo, M. A.; Gonzalez-Sanabria, O. D.

    1982-01-01

    Cross-linking methods were investigated to determine their effect on the performance of polyvinyl alcohol (PVA) films as alkaline battery separators. The following types of cross-linked PVA films are discussed: (1) PVA-dialdehyde blends post-treated with an acid or acid periodate solution (two-step method) and (2) PVA-dialdehyde blends cross-linked during film formation (drying) by using a reagent with both aldehyde and acid functionality (one-step method). Laboratory samples of each cross-linked type of film were prepared and evaluated in standard separator screening tests. The pilot-plant batches of films were prepared and compared to measure differences due to the cross-linking method. The pilot-plant materials were then tested in nickel oxide - zinc cells to compare the two methods with respect to performance characteristics and cycle life. Cell test results are compared with those from tests with Celgard.

  20. Technical note: comparison of 3 methods for analyzing areas under the curve for glucose and nonesterified fatty acids concentrations following epinephrine challenge in dairy cows.

    PubMed

    Cardoso, F C; Sears, W; LeBlanc, S J; Drackley, J K

    2011-12-01

    The objective of the study was to compare 3 methods for calculating the area under the curve (AUC) for plasma glucose and nonesterified fatty acids (NEFA) after an intravenous epinephrine (EPI) challenge in dairy cows. Cows were assigned to 1 of 6 dietary niacin treatments in a completely randomized 6 × 6 Latin square with an extra period to measure carryover effects. Periods consisted of a 7-d (d 1 to 7) adaptation period followed by a 7-d (d 8 to 14) measurement period. On d 12, cows received an i.v. infusion of EPI (1.4 μg/kg of BW). Blood was sampled at -45, -30, -20, -10, and -5 min before EPI infusion and 2.5, 5, 10, 15, 20, 30, 45, 60, 90, and 120 min after. The AUC was calculated by incremental area, positive incremental area, and total area using the trapezoidal rule. When comparing the 3 methods for the NEFA and glucose responses, no significant differences among treatments and no interactions between treatment and AUC method were observed, but the effect of AUC method itself was statistically significant for both glucose and NEFA. Our results suggest that the positive incremental method and the total area method gave similar results and interpretation but differed from the incremental area method. Furthermore, the 3 methods evaluated can lead to different results and statistical inferences for glucose and NEFA AUC after an EPI challenge. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
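
    The three conventions differ only in how the baseline is handled, which a short sketch makes explicit. The time grid below mirrors the sampling schedule in the abstract, but the response values and baseline are invented; the positive incremental area is computed by clipping at the sampled points, whereas an exact treatment would interpolate the baseline crossings.

    ```python
    import numpy as np

    def trapz(y, x):
        """Trapezoidal rule."""
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

    t = np.array([-45, -30, -20, -10, -5, 2.5, 5, 10, 15, 20, 30, 45, 60, 90, 120.0])
    g = np.array([60, 61, 59, 60, 60, 85, 95, 90, 80, 72, 64, 58, 56, 59, 60.0])

    baseline = g[t < 0].mean()        # mean of the pre-challenge samples
    tt, yy = t[t >= 0], g[t >= 0]

    total_area = trapz(yy, tt)                              # total area
    incremental = trapz(yy - baseline, tt)                  # signed area vs baseline
    positive_inc = trapz(np.clip(yy - baseline, 0.0, None), tt)  # dips ignored

    print(f"total = {total_area:.0f}, incremental = {incremental:.0f}, "
          f"positive incremental = {positive_inc:.0f}")
    ```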

  1. Anatomically-Aided PET Reconstruction Using the Kernel Method

    PubMed Central

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-01-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm. PMID:27541810
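
    A compact way to see the machinery is a toy kernelised EM loop (our reading of the method as stated: the image is represented as K·alpha with K built from anatomical features, and the usual ML-EM update is rewritten in the alpha coefficients). The system matrix, feature model, kernel width, and problem sizes below are all invented; a real K would typically be sparsified with k-nearest neighbours, and P would come from the scanner geometry.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_pix, n_det = 64, 96
    P = rng.uniform(0, 1, (n_det, n_pix))           # toy system matrix
    x_true = np.where(np.arange(n_pix) < 32, 4.0, 1.0)
    feat = x_true + rng.normal(0, 0.2, n_pix)       # "anatomical" feature per pixel
    y = rng.poisson(P @ x_true).astype(float)       # noisy projection data

    # Gaussian kernel on the anatomical features (dense here, kNN-sparse in practice)
    d2 = (feat[:, None] - feat[None, :]) ** 2
    K = np.exp(-d2 / (2 * 0.3 ** 2))
    K /= K.sum(axis=1, keepdims=True)               # row-normalise

    alpha = np.ones(n_pix)
    sens = K.T @ (P.T @ np.ones(n_det))             # sensitivity in alpha space
    for _ in range(200):                            # kernelised ML-EM iteration
        ratio = y / np.maximum(P @ (K @ alpha), 1e-12)
        alpha *= (K.T @ (P.T @ ratio)) / sens
    x_hat = K @ alpha                               # reconstructed image
    print("RMSE:", np.sqrt(np.mean((x_hat - x_true) ** 2)))
    ```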

  2. Anatomically-aided PET reconstruction using the kernel method.

    PubMed

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  3. Anatomically-aided PET reconstruction using the kernel method

    NASA Astrophysics Data System (ADS)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  4. OTG-snpcaller: An Optimized Pipeline Based on TMAP and GATK for SNP Calling from Ion Torrent Data

    PubMed Central

    Huang, Wenpan; Xi, Feng; Lin, Lin; Zhi, Qihuan; Zhang, Wenwei; Tang, Y. Tom; Geng, Chunyu; Lu, Zhiyuan; Xu, Xun

    2014-01-01

    Because the new Proton platform from Life Technologies produced markedly different data from those of the Illumina platform, the conventional Illumina data analysis pipeline could not be used directly. We developed an optimized SNP calling method using TMAP and GATK (OTG-snpcaller). This method combined our own optimized processes, Remove Duplicates According to AS Tag (RDAST) and Alignment Optimize Structure (AOS), together with TMAP and GATK, to call SNPs from Proton data. We sequenced four sets of exomes captured by Agilent SureSelect and NimbleGen SeqCap EZ Kit, using Life Technologies' Ion Proton sequencer. Then we applied OTG-snpcaller and compared our results with the results from Torrent Variant Caller. The results indicated that OTG-snpcaller can reduce both false positive and false negative rates. Moreover, we compared our results with Illumina results generated by GATK best practices, and we found that the results of these two platforms were comparable. The good performance in variant calling using GATK best practices can be primarily attributed to the high quality of the Illumina sequences. PMID:24824529

  5. OTG-snpcaller: an optimized pipeline based on TMAP and GATK for SNP calling from ion torrent data.

    PubMed

    Zhu, Pengyuan; He, Lingyu; Li, Yaqiao; Huang, Wenpan; Xi, Feng; Lin, Lin; Zhi, Qihuan; Zhang, Wenwei; Tang, Y Tom; Geng, Chunyu; Lu, Zhiyuan; Xu, Xun

    2014-01-01

    Because the new Proton platform from Life Technologies produced markedly different data from those of the Illumina platform, the conventional Illumina data analysis pipeline could not be used directly. We developed an optimized SNP calling method using TMAP and GATK (OTG-snpcaller). This method combined our own optimized processes, Remove Duplicates According to AS Tag (RDAST) and Alignment Optimize Structure (AOS), together with TMAP and GATK, to call SNPs from Proton data. We sequenced four sets of exomes captured by Agilent SureSelect and NimbleGen SeqCap EZ Kit, using Life Technologies' Ion Proton sequencer. Then we applied OTG-snpcaller and compared our results with the results from Torrent Variant Caller. The results indicated that OTG-snpcaller can reduce both false positive and false negative rates. Moreover, we compared our results with Illumina results generated by GATK best practices, and we found that the results of these two platforms were comparable. The good performance in variant calling using GATK best practices can be primarily attributed to the high quality of the Illumina sequences.
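
    A toy sketch of the duplicate-removal idea as we read it — within each group of reads mapped to the same position and strand, keep the one with the best alignment-score (AS) tag — is shown below using pysam. The file names are placeholders and the actual RDAST logic in OTG-snpcaller may differ.

      import pysam  # assumes a coordinate-sorted, indexed BAM

      bam = pysam.AlignmentFile("proton.bam", "rb")
      best = {}  # (chrom, start, strand) -> best-scoring read (toy: held in memory)
      for read in bam.fetch():
          if read.is_unmapped or not read.has_tag("AS"):
              continue
          key = (read.reference_id, read.reference_start, read.is_reverse)
          if key not in best or read.get_tag("AS") > best[key].get_tag("AS"):
              best[key] = read

      out = pysam.AlignmentFile("dedup.bam", "wb", template=bam)
      for read in best.values():
          out.write(read)
      out.close()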

  6. Systematic Assessment of Seven Solvent and Solid-Phase Extraction Methods for Metabolomics Analysis of Human Plasma by LC-MS

    NASA Astrophysics Data System (ADS)

    Sitnikov, Dmitri G.; Monnin, Cian S.; Vuckovic, Dajana

    2016-12-01

    The comparison of extraction methods for global metabolomics is usually executed in biofluids only and focuses on metabolite coverage and method repeatability. This limits our detailed understanding of extraction parameters such as recovery and matrix effects and prevents side-by-side comparison of different sample preparation strategies. To address this gap in knowledge, seven solvent-based and solid-phase extraction methods were systematically evaluated using standard analytes spiked into both buffer and human plasma. We compared recovery, coverage, repeatability, matrix effects, selectivity, and orthogonality of all methods tested for the non-lipid metabolome in combination with reversed-phase and mixed-mode liquid chromatography-mass spectrometry (LC-MS) analysis. Our results confirmed the wide selectivity and excellent precision of solvent precipitations, but revealed their high susceptibility to matrix effects. The seven methods showed high overlap and redundancy: using all of them increased metabolite coverage by 34-80% (depending on the LC-MS method employed) over the best single extraction protocol (methanol/ethanol precipitation), despite a sevenfold increase in MS analysis time and sample consumption. The most orthogonal methods to methanol-based precipitation were ion-exchange solid-phase extraction and liquid-liquid extraction using methyl tert-butyl ether. Our results help facilitate rational design and selection of sample preparation methods and internal standards for global metabolomics.

  7. An optimized computational method for determining the beta dose distribution using a multiple-element thermoluminescent dosimeter system.

    PubMed

    Shen, L; Levine, S H; Catchen, G L

    1987-07-01

    This paper describes an optimization method for determining the beta dose distribution in tissue, and it describes the associated testing and verification. The method uses electron transport theory and optimization techniques to analyze the responses of a three-element thermoluminescent dosimeter (TLD) system. Specifically, the method determines the effective beta energy distribution incident on the dosimeter system, so the system performs as a beta spectrometer. Electron transport theory provides the mathematical model for performing the optimization calculation. In this calculation, parameters are determined that produce calculated doses matching those measured for each of the chip/absorber components in the three-element TLD system. The resulting optimized parameters describe an effective incident beta distribution. This method can be used to determine the beta dose specifically at 7 mg·cm⁻² or at any depth of interest. The doses at 7 mg·cm⁻² in tissue determined by this method are compared to those experimentally determined using an extrapolation chamber. For a great variety of pure beta sources having different incident beta energy distributions, good agreement is found. The results are also compared to those produced by a commonly used empirical algorithm. Although the optimization method produces somewhat better results, its main advantage is that its performance is not sensitive to the specific method of calibration.
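
    The core numerical step — finding a non-negative effective incident spectrum whose predicted chip responses match the three TLD readings — can be sketched with a non-negative least-squares fit. The response matrix, readings, and dose factors below are made-up numbers, and the paper's transport-theory model is far more detailed.

      import numpy as np
      from scipy.optimize import nnls

      # Hypothetical responses of the 3 chip/absorber elements (rows) per unit
      # fluence in 3 incident beta energy bins (columns), from transport theory.
      R = np.array([[0.9, 0.5, 0.2],
                    [0.3, 0.7, 0.5],
                    [0.1, 0.3, 0.8]])
      d = np.array([1.10, 0.95, 0.70])      # measured TLD readings

      w, rnorm = nnls(R, d)                 # effective incident spectrum, w >= 0

      # Dose at 7 mg/cm^2 per unit fluence in each bin (hypothetical factors)
      dose_factors = np.array([0.4, 0.6, 0.9])
      print("dose at 7 mg/cm^2:", dose_factors @ w, "residual:", rnorm)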

  8. Lipid Adjustment for Chemical Exposures: Accounting for Concomitant Variables

    PubMed Central

    Li, Daniel; Longnecker, Matthew P.; Dunson, David B.

    2013-01-01

    Background: Some environmental chemical exposures are lipophilic and need to be adjusted by serum lipid levels before data analyses. There are currently various strategies that attempt to account for this problem, but all have their drawbacks. To address such concerns, we propose a new method that uses Box-Cox transformations and a simple Bayesian hierarchical model to adjust for lipophilic chemical exposures. Methods: We compared our Box-Cox method to existing methods. We ran simulation studies in which increasing levels of lipid-adjusted chemical exposure did and did not increase the odds of having a disease, and we looked at both single-exposure and multiple-exposure cases. We also analyzed an epidemiology dataset that examined the effects of various chemical exposures on the risk of birth defects. Results: Compared with existing methods, our Box-Cox method produced unbiased estimates, good coverage, similar power, and lower type-I error rates. This was the case in both single- and multiple-exposure simulation studies. Results from analysis of the birth-defect data differed from results using existing methods. Conclusion: Our Box-Cox method is a novel and intuitive way to account for the lipophilic nature of certain chemical exposures. It addresses some of the problems with existing methods, is easily extendable to multiple exposures, and can be used in any analyses that involve concomitant variables. PMID:24051893
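
    A minimal frequentist analogue of the idea — Box-Cox-transform the serum lipids and enter them as a covariate alongside the chemical exposure — is sketched below on simulated data. The authors' actual model is Bayesian and hierarchical, so this is illustrative only, and all numbers are fabricated for the simulation.

      import numpy as np
      from scipy import stats
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 500
      lipids = rng.lognormal(0.5, 0.4, n)          # serum lipids (positive, skewed)
      chem = rng.lognormal(0.0, 0.5, n) * lipids   # lipophilic exposure tracks lipids
      p = 1.0 / (1.0 + np.exp(-(-1.0 + 0.8 * np.log(chem / lipids))))
      disease = rng.binomial(1, p)                 # outcome driven by lipid-adjusted exposure

      lipids_bc, lam = stats.boxcox(lipids)        # ML estimate of the Box-Cox lambda
      X = sm.add_constant(np.column_stack([np.log(chem), lipids_bc]))
      fit = sm.Logit(disease, X).fit(disp=0)
      print(f"lambda = {lam:.2f}", fit.params)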

  9. Systematic Assessment of Seven Solvent and Solid-Phase Extraction Methods for Metabolomics Analysis of Human Plasma by LC-MS

    PubMed Central

    Sitnikov, Dmitri G.; Monnin, Cian S.; Vuckovic, Dajana

    2016-01-01

    The comparison of extraction methods for global metabolomics is usually executed in biofluids only and focuses on metabolite coverage and method repeatability. This limits our detailed understanding of extraction parameters such as recovery and matrix effects and prevents side-by-side comparison of different sample preparation strategies. To address this gap in knowledge, seven solvent-based and solid-phase extraction methods were systematically evaluated using standard analytes spiked into both buffer and human plasma. We compared recovery, coverage, repeatability, matrix effects, selectivity, and orthogonality of all methods tested for the non-lipid metabolome in combination with reversed-phase and mixed-mode liquid chromatography-mass spectrometry (LC-MS) analysis. Our results confirmed the wide selectivity and excellent precision of solvent precipitations, but revealed their high susceptibility to matrix effects. The seven methods showed high overlap and redundancy: using all of them increased metabolite coverage by 34–80% (depending on the LC-MS method employed) over the best single extraction protocol (methanol/ethanol precipitation), despite a sevenfold increase in MS analysis time and sample consumption. The most orthogonal methods to methanol-based precipitation were ion-exchange solid-phase extraction and liquid-liquid extraction using methyl tert-butyl ether. Our results help facilitate rational design and selection of sample preparation methods and internal standards for global metabolomics. PMID:28000704

  10. Constraint factor graph cut-based active contour method for automated cellular image segmentation in RNAi screening.

    PubMed

    Chen, C; Li, H; Zhou, X; Wong, S T C

    2008-05-01

    Image-based, high-throughput genome-wide RNA interference (RNAi) experiments are increasingly carried out to facilitate the understanding of gene functions in intricate biological processes. Automated screening of such experiments generates a large number of images with great variations in image quality, which makes manual analysis unreasonably time-consuming. Therefore, effective techniques for automatic image analysis are urgently needed, in which segmentation is one of the most important steps. This paper proposes a fully automatic method for cell segmentation in genome-wide RNAi screening images. The method consists of two steps: nuclei and cytoplasm segmentation. Nuclei are extracted and labelled to initialize cytoplasm segmentation. Since the quality of RNAi images is rather poor, a novel scale-adaptive steerable filter is designed to enhance the images in order to extract the long, thin protrusions of spiky cells. Then, the constraint-factor GCBAC method and morphological algorithms are combined into an integrated method to segment tightly clustered cells. Compared against the ground truth (manual labelling by experts on RNAi screening data), our method achieves higher accuracy than seeded watershed. Compared with active contour methods, our method consumes much less time. These positive results indicate that the proposed method can be applied in automatic image analysis of multi-channel image screening data.
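
    The two-step structure (segment and label nuclei, then use them to seed cytoplasm segmentation) can be sketched with standard scikit-image tools. Note that seeded watershed is the baseline the authors compare against, not their constraint-factor GCBAC method; the inputs are assumed to be grayscale nuclei and cytoplasm channel images.

      from skimage.filters import threshold_otsu
      from skimage.measure import label
      from skimage.segmentation import watershed

      def segment_cells(nuclei_img, cyto_img):
          # Step 1: extract and label nuclei to initialize cytoplasm segmentation
          markers = label(nuclei_img > threshold_otsu(nuclei_img))
          # Step 2: grow cell regions from the nuclei within the cytoplasm mask
          cyto_mask = cyto_img > threshold_otsu(cyto_img)
          cells = watershed(-cyto_img.astype(float), markers, mask=cyto_mask)
          return markers, cells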

  11. A study of solid wall models for weakly compressible SPH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valizadeh, Alireza, E-mail: alireza.valizadeh@monash.edu; Monaghan, Joseph J., E-mail: joe.monaghan@monash.edu

    2015-11-01

    This paper is concerned with a comparison of two methods of treating solid wall boundaries in the weakly compressible SPH method. They have been chosen because of their wide use in simulations. These methods are the boundary force particles of Monaghan and Kajtar [24] and the use of layers of fixed boundary particles. The latter was first introduced by Morris et al. [26] but has since been improved by Adami et al. [1], whose algorithm involves interpolating the pressure and velocity from the actual fluid to the boundary particles. For each method, we study the effect of the density diffusive terms proposed by Molteni and Colagrossi [19] and modified by Antuono et al. [3]. We test the methods by a series of simulations commencing with the time-dependent spin-down of fluid within a cylinder and the behaviour of fluid in a box subjected to constant acceleration at an angle to the walls of the box, and concluding with a dam break over a triangular obstacle. In the first two cases the results from the two methods can be compared to analytical solutions while, in the latter case, they can be compared with experiments and other methods. These results show that the method of Adami et al. together with density diffusion is in very satisfactory agreement with the experimental results and is, overall, the best of the methods discussed here.
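
    For orientation, the density diffusive term adds an extra contribution to the SPH continuity equation. In our notation (sign conventions and symbols vary across papers, so this is a schematic statement rather than the record's own formula) the Molteni-Colagrossi form reads:

      \frac{d\rho_i}{dt} = \sum_j m_j \, (\mathbf{v}_i - \mathbf{v}_j) \cdot \nabla_i W_{ij}
        \;+\; \delta \, h \, c_0 \sum_j \frac{m_j}{\rho_j} \, \psi_{ij} \cdot \nabla_i W_{ij},
      \qquad \psi_{ij} = 2\,(\rho_j - \rho_i)\, \frac{\mathbf{r}_j - \mathbf{r}_i}{|\mathbf{r}_j - \mathbf{r}_i|^2}

    where \delta is a tuning coefficient (values near 0.1 are common), h the smoothing length, and c_0 the reference sound speed; the modification of Antuono et al. corrects \psi_{ij} with renormalized density gradients near free surfaces.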

  12. Evaluating the Good Ontology Design Guideline (GoodOD) with the Ontology Quality Requirements and Evaluation Method and Metrics (OQuaRE)

    PubMed Central

    Duque-Ramos, Astrid; Boeker, Martin; Jansen, Ludger; Schulz, Stefan; Iniesta, Miguela; Fernández-Breis, Jesualdo Tomás

    2014-01-01

    Objective To (1) evaluate the GoodOD guideline for ontology development by applying the OQuaRE evaluation method and metrics to the ontology artefacts that were produced by students in a randomized controlled trial, and (2) informally compare the OQuaRE evaluation method with gold standard and competency questions based evaluation methods, respectively. Background In the last decades many methods for ontology construction and ontology evaluation have been proposed. However, none of them has become a standard and there is no empirical evidence of comparative evaluation of such methods. This paper brings together GoodOD and OQuaRE. GoodOD is a guideline for developing robust ontologies. It was previously evaluated in a randomized controlled trial employing metrics based on gold standard ontologies and competency questions as outcome parameters. OQuaRE is a method for ontology quality evaluation which adapts the SQuaRE standard for software product quality to ontologies and has been successfully used for evaluating the quality of ontologies. Methods In this paper, we evaluate the effect of training in ontology construction based on the GoodOD guideline within the OQuaRE quality evaluation framework and compare the results with those obtained for the previous studies based on the same data. Results Our results show a significant effect of the GoodOD training over developed ontologies by topics: (a) a highly significant effect was detected in three topics from the analysis of the ontologies of untrained and trained students; (b) both positive and negative training effects with respect to the gold standard were found for five topics. Conclusion The GoodOD guideline had a significant effect over the quality of the ontologies developed. Our results show that GoodOD ontologies can be effectively evaluated using OQuaRE and that OQuaRE is able to provide additional useful information about the quality of the GoodOD ontologies. PMID:25148262

  13. Repeatability, Reproducibility, Separative Power and Subjectivity of Different Fish Morphometric Analysis Methods

    PubMed Central

    Takács, Péter

    2016-01-01

    We compared the repeatability, reproducibility (intra- and inter-measurer similarity), separative power and subjectivity (measurer effect on results) of four morphometric methods frequently used in ichthyological research: the “traditional” caliper-based (TRA) and truss-network (TRU) distance methods and two geometric methods that compare landmark coordinates on the body (GMB) and scales (GMS). In each case, measurements were performed three times by three measurers on the same specimens of three common cyprinid species (roach Rutilus rutilus (Linnaeus, 1758), bleak Alburnus alburnus (Linnaeus, 1758) and Prussian carp Carassius gibelio (Bloch, 1782)) collected from three closely-situated sites in the Lake Balaton catchment (Hungary) in 2014. TRA measurements were made on conserved specimens using a digital caliper, while TRU, GMB and GMS measurements were undertaken on digital images of the bodies and scales. In most cases, intra-measurer repeatability was similar. While all four methods were able to differentiate the source populations, significant differences were observed in their repeatability, reproducibility and subjectivity. GMB displayed the highest overall repeatability and reproducibility and was least burdened by measurer effect. While GMS showed repeatability similar to GMB when fish scales had a characteristic shape, it showed significantly lower reproducibility (compared with its repeatability) for each species than the other methods. TRU showed repeatability similar to that of GMS. TRA was the least applicable method, as measurements were obtained from the fish itself, resulting in poor repeatability and reproducibility. Although all four methods showed some degree of subjectivity, TRA was the only method where population-level detachment was entirely overwritten by measurer effect. Based on these results, we recommend (a) avoiding aggregation of different measurers’ datasets when using the TRA and GMS methods; and (b) using image-based methods for morphometric surveys. Automation of the morphometric workflow would also reduce any measurer effect and eliminate measurement and data-input errors. PMID:27327896
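
    Repeatability of a landmark-based (GMB-style) method can be quantified, for example, as the mean Procrustes disparity between repeated digitizations of the same specimen. The sketch below uses SciPy's pairwise Procrustes superimposition on simulated re-digitizations; it is not the authors' analysis pipeline, and the landmark counts and noise level are invented.

      import numpy as np
      from scipy.spatial import procrustes

      def landmark_repeatability(trials):
          """Mean pairwise Procrustes disparity between repeated digitizations
          of the same specimen (lower = more repeatable). Each trial is an
          (n_landmarks, 2) array of image coordinates."""
          d = []
          for i in range(len(trials)):
              for j in range(i + 1, len(trials)):
                  _, _, disparity = procrustes(trials[i], trials[j])
                  d.append(disparity)
          return float(np.mean(d))

      # Three hypothetical re-digitizations of a 6-landmark body shape
      rng = np.random.default_rng(2)
      base = rng.uniform(0, 100, size=(6, 2))
      trials = [base + rng.normal(0, 0.5, size=base.shape) for _ in range(3)]
      print(landmark_repeatability(trials))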

  14. Quiescent period respiratory gating for PET/CT

    PubMed Central

    Liu, Chi; Alessio, Adam; Pierce, Larry; Thielemans, Kris; Wollenweber, Scott; Ganin, Alexander; Kinahan, Paul

    2010-01-01

    Purpose: To minimize respiratory motion artifacts, this work proposes quiescent period gating (QPG) methods that extract PET data from the end-expiration quiescent period and form a single PET frame with reduced motion and improved signal-to-noise properties. Methods: Two QPG methods are proposed and evaluated. Histogram-based quiescent period gating (H-QPG) extracts a fraction of PET data determined by a window of the respiratory displacement signal histogram. Cycle-based quiescent period gating (C-QPG) extracts data with a respiratory displacement signal below a specified threshold of the maximum amplitude of each individual respiratory cycle. Performances of both QPG methods were compared to ungated and five-bin phase-gated images across 21 FDG-PET/CT patient data sets containing 31 thorax and abdomen lesions as well as with computer simulations driven by 1295 different patient respiratory traces. Image quality was evaluated in terms of the lesion SUVmax and the fraction of counts included in each gate as a surrogate for image noise. Results: For all the gating methods, image noise artifactually increases SUVmax when the fraction of counts included in each gate is less than 50%. While simulation data show that H-QPG is superior to C-QPG, the H-QPG and C-QPG methods lead to similar quantification-noise tradeoffs in patient data. Compared to ungated images, both QPG methods yield significantly higher lesion SUVmax. Compared to five-bin phase gating, the QPG methods yield significantly larger fraction of counts with similar SUVmax improvement. Both QPG methods result in increased lesion SUVmax for patients whose lesions have longer quiescent periods. Conclusions: Compared to ungated and phase-gated images, the QPG methods lead to images with less motion blurring and an improved compromise between SUVmax and fraction of counts. The QPG methods for respiratory motion compensation could effectively improve tumor quantification with minimal noise increase. PMID:20964223
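
    A simplified numpy sketch of histogram-based gating is given below: it keeps the most-populated displacement bins (the quiescent, end-expiration amplitudes) until a target fraction of counts is accepted. The greedy bin selection and the 50% default are our assumptions; the paper defines the acceptance window on the displacement-signal histogram more carefully.

      import numpy as np

      def histogram_qpg(displacement, target_fraction=0.5, nbins=100):
          """Return a boolean mask selecting samples in the quiescent window."""
          counts, edges = np.histogram(displacement, bins=nbins)
          order = np.argsort(counts)[::-1]   # most-populated bins first
          keep = np.zeros(nbins, dtype=bool)
          total, acc = counts.sum(), 0
          for b in order:
              keep[b] = True
              acc += counts[b]
              if acc >= target_fraction * total:
                  break
          # a sample is accepted if its displacement falls in a kept bin
          bins = np.clip(np.digitize(displacement, edges) - 1, 0, nbins - 1)
          return keep[bins]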

  15. Validation of laboratory-scale recycling test method of paper PSA label products

    Treesearch

    Carl Houtman; Karen Scallon; Richard Oldack

    2008-01-01

    Starting with test methods and a specification developed by the U.S. Postal Service (USPS) Environmentally Benign Pressure Sensitive Adhesive Postage Stamp Program, a laboratory-scale test method and a specification were developed and validated for pressure-sensitive adhesive labels. By comparing results from this new test method and pilot-scale tests, which have been...

  16. Shot boundary detection and label propagation for spatio-temporal video segmentation

    NASA Astrophysics Data System (ADS)

    Piramanayagam, Sankaranaryanan; Saber, Eli; Cahill, Nathan D.; Messinger, David

    2015-02-01

    This paper proposes a two-stage algorithm for streaming video segmentation. In the first stage, shot boundaries are detected within a window of frames by comparing the dissimilarity between 2-D segmentations of each frame. In the second stage, the 2-D segments are propagated across the window of frames in both the spatial and temporal directions. The window is moved across the video to find all shot transitions and obtain spatio-temporal segments simultaneously. As opposed to techniques that operate on the entire video, the proposed approach consumes significantly less memory and enables segmentation of lengthy videos. We tested our segmentation-based shot detection method on the TRECVID 2007 video dataset and compared it with a block-based technique. Cut detection results on the TRECVID 2007 dataset indicate that our algorithm has results comparable to the best of the block-based methods. The streaming video segmentation routine also achieves promising results on a challenging video segmentation benchmark database.
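
    As a minimal stand-in for the paper's dissimilarity between per-frame 2-D segmentations, the sketch below flags a cut whenever the grey-level histograms of successive frames differ by more than a threshold. Frames are assumed to be 8-bit grayscale arrays and the threshold is arbitrary; the paper's segmentation-based measure is more discriminative.

      import numpy as np

      def frame_hist(frame, bins=32):
          h, _ = np.histogram(frame, bins=bins, range=(0, 255))
          return h / h.sum()

      def shot_boundaries(frames, threshold=0.4):
          """Flag a cut when successive frame histograms differ strongly
          (0.5 * L1 distance between normalized histograms, in [0, 1])."""
          cuts = []
          prev = frame_hist(frames[0])
          for i, f in enumerate(frames[1:], start=1):
              cur = frame_hist(f)
              if 0.5 * np.abs(cur - prev).sum() > threshold:
                  cuts.append(i)
              prev = cur
          return cuts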

  17. Comparison of two methods to determine fan performance curves using computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Onma, Patinya; Chantrasmi, Tonkid

    2018-01-01

    This work investigates a systematic numerical approach that employs Computational Fluid Dynamics (CFD) to obtain the performance curves of a backward-curved centrifugal fan. Generating the performance curves requires a number of three-dimensional simulations with varying system loads at a fixed rotational speed. Two methods were used and their results compared to experimental data. The first method incrementally changes the mass flow rate through the inlet boundary condition, while the second method utilizes a series of meshes representing the physical damper blade at various angles. The performance curves generated by both methods are compared with measurements from an experimental setup conforming to the AMCA fan performance testing standard.

  18. Computerized tomographic determination of spinal bone mineral content

    NASA Technical Reports Server (NTRS)

    Cann, C. E.; Genant, H. K.

    1980-01-01

    The aims of the study were three-fold: to determine the magnitude of vertebral cancellous mineral loss in normal subjects during bedrest, to compare this loss with calcium balance and mineral loss in peripheral bones, and to use the vertebral measurements as an evaluative criterion for the Cl2MDP treatment and compare it with other methods. The methods used are described and the results from 14 subjects are presented.

  19. Comparative study of radiometric and calorimetric methods for total hemispherical emissivity measurements

    NASA Astrophysics Data System (ADS)

    Monchau, Jean-Pierre; Hameury, Jacques; Ausset, Patrick; Hay, Bruno; Ibos, Laurent; Candau, Yves

    2018-05-01

    Accurate knowledge of infrared emissivity is important in applications such as surface temperature measurement by infrared thermography or thermal balances for building walls. A comparison of total hemispherical emissivity measurements was performed by two laboratories: the Laboratoire National de Métrologie et d'Essais (LNE) and the Centre d'Études et de Recherche en Thermique, Environnement et Systèmes (CERTES). Both laboratories performed emissivity measurements on four samples, chosen to cover a large range of emissivity values and angular reflectance behaviors. The samples were polished aluminum (highly specular, low emissivity), bulk PVC (slightly specular, high emissivity), sandblasted aluminum (diffuse surface, medium emissivity), and aluminum paint (slightly specular surface, medium emissivity). Results obtained using five measurement techniques were compared. LNE used a calorimetric method for direct total hemispherical emissivity measurement [1], an absolute reflectometric measurement method [2], and a relative reflectometric measurement method. CERTES used two total hemispherical directional reflectometric measurement methods [3, 4]. For the indirect techniques based on reflectance measurements, total hemispherical emissivity values were calculated from directional-hemispherical reflectance measurements, using spectral integration when required and a directional-to-hemispherical extrapolation. Results were compared taking into account measurement uncertainties; an added uncertainty was introduced to account for heterogeneity over the surfaces of the samples and between samples. All techniques gave large relative uncertainties for a low-emissivity, highly specular material (polished aluminum), and the results were quite scattered. All the indirect techniques based on reflectance measurements gave results within ±0.01 for a high-emissivity material. A commercial aluminum paint appears to be a good candidate for producing samples with a medium level of emissivity (about 0.4) and good uniformity of emissivity values (within ±0.015).
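
    The indirect route from reflectance to emissivity can be illustrated as follows: for an opaque sample, Kirchhoff's law gives spectral emissivity ε(λ) = 1 − ρ(λ) from the directional-hemispherical reflectance, and the total value is the Planck-weighted spectral average. The sketch below omits the directional-to-hemispherical extrapolation step the abstract mentions, and the reflectance spectrum is a made-up flat example.

      import numpy as np

      H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

      def planck(lam, T):
          """Blackbody spectral radiance, W / (sr * m^3); lam in meters."""
          return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

      def total_emissivity(lam, rho, T=300.0):
          """Planck-weighted total emissivity from spectral directional-
          hemispherical reflectance of an opaque sample (eps = 1 - rho)."""
          eps, B = 1.0 - rho, planck(lam, T)
          w = np.diff(lam)  # trapezoidal weights
          num = np.sum((eps[1:] * B[1:] + eps[:-1] * B[:-1]) * w) / 2.0
          den = np.sum((B[1:] + B[:-1]) * w) / 2.0
          return num / den

      # Hypothetical flat reflectance of 0.6 over 2-25 um: eps_total ~ 0.4
      lam = np.linspace(2e-6, 25e-6, 200)
      print(total_emissivity(lam, rho=np.full(lam.size, 0.6)))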

  20. Bayesian techniques for analyzing group differences in the Iowa Gambling Task: A case study of intuitive and deliberate decision-makers.

    PubMed

    Steingroever, Helen; Pachur, Thorsten; Šmíra, Martin; Lee, Michael D

    2018-06-01

    The Iowa Gambling Task (IGT) is one of the most popular experimental paradigms for comparing complex decision-making across groups. Most commonly, IGT behavior is analyzed using frequentist tests to compare performance across groups, and to compare inferred parameters of cognitive models developed for the IGT. Here, we present a Bayesian alternative based on Bayesian repeated-measures ANOVA for comparing performance, and a suite of three complementary model-based methods for assessing the cognitive processes underlying IGT performance. The three model-based methods involve Bayesian hierarchical parameter estimation, Bayes factor model comparison, and Bayesian latent-mixture modeling. We illustrate these Bayesian methods by applying them to test the extent to which differences in intuitive versus deliberate decision style are associated with differences in IGT performance. The results show that intuitive and deliberate decision-makers behave similarly on the IGT, and the modeling analyses consistently suggest that both groups of decision-makers rely on similar cognitive processes. Our results challenge the notion that individual differences in intuitive and deliberate decision styles have a broad impact on decision-making. They also highlight the advantages of Bayesian methods, especially their ability to quantify evidence in favor of the null hypothesis, and that they allow model-based analyses to incorporate hierarchical and latent-mixture structures.
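
    The abstract's point about quantifying evidence in favor of the null can be illustrated with a crude BIC approximation to the Bayes factor for a two-group mean comparison. The paper uses proper Bayesian repeated-measures ANOVA and hierarchical models; the group labels, effect sizes, and data below are simulated for illustration only.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      # Hypothetical net IGT scores for two decision-style groups
      intuitive  = rng.normal(10.0, 15.0, size=40)
      deliberate = rng.normal(11.5, 15.0, size=45)
      y = np.concatenate([intuitive, deliberate])
      g = np.concatenate([np.zeros_like(intuitive), np.ones_like(deliberate)])

      m0 = sm.OLS(y, np.ones_like(y)).fit()        # null: one common mean
      m1 = sm.OLS(y, sm.add_constant(g)).fit()     # alternative: group effect
      bf01 = np.exp((m1.bic - m0.bic) / 2.0)       # BIC approximation to BF_01
      print(f"BF01 = {bf01:.2f}  (values > 1 favor the null)")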
