Sample records for method performs comparably

  1. THE INFLUENCE OF PHYSICAL FACTORS ON COMPARATIVE PERFORMANCE OF SAMPLING METHODS IN LARGE RIVERS

    EPA Science Inventory

    In 1999, we compared five existing benthic macroinvertebrate sampling methods used in boatable rivers. Each sampling protocol was performed at each of 60 sites distributed among four rivers in the Ohio River drainage basin. Initial comparison of methods using key macroinvertebr...

  2. Comparison of two methods to determine fan performance curves using computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Onma, Patinya; Chantrasmi, Tonkid

    2018-01-01

    This work investigates a systematic numerical approach that employs Computational Fluid Dynamics (CFD) to obtain the performance curves of a backward-curved centrifugal fan. Generating the performance curves requires a number of three-dimensional simulations with varying system loads at a fixed rotational speed. Two methods were used and their results compared to experimental data. The first method incrementally changes the mass flow rate through the inlet boundary condition, while the second method utilizes a series of meshes representing the physical damper blade at various angles. The performance curves generated by both methods are compared with data from an experimental setup conforming to the AMCA fan performance testing standard.

  3. Performance of the Lot Quality Assurance Sampling Method Compared to Surveillance for Identifying Inadequately-performing Areas in Matlab, Bangladesh

    PubMed Central

    Hanifi, S.M.A.; Roy, Nikhil; Streatfield, P. Kim

    2007-01-01

    This paper compared the performance of the lot quality assurance sampling (LQAS) method in identifying inadequately-performing health work-areas with that of using health and demographic surveillance system (HDSS) data and examined the feasibility of applying the method by field-level programme supervisors. The study was carried out in Matlab, the field site of ICDDR,B, where a HDSS has been in place for over 30 years. The LQAS method was applied in 57 work-areas of community health workers in ICDDR,B-served areas in Matlab during July-September 2002. The performance of the LQAS method in identifying work-areas with adequate and inadequate coverage of various health services was compared with that of the HDSS. The health service-coverage indicators included coverage of DPT, measles, BCG vaccination, and contraceptive use. The difference between the proportion of work-areas identified as inadequately performing using the LQAS method with fewer than 30 respondents and that identified using the HDSS was not statistically significant. The consistency between the LQAS method and the HDSS in identifying work-areas was greater for adequately-performing areas than for inadequately-performing areas. It was also observed that the field managers could be trained to apply the LQAS method in monitoring their performance in reaching the target population. PMID:17615902

  4. Performance of the lot quality assurance sampling method compared to surveillance for identifying inadequately-performing areas in Matlab, Bangladesh.

    PubMed

    Bhuiya, Abbas; Hanifi, S M A; Roy, Nikhil; Streatfield, P Kim

    2007-03-01

    This paper compared the performance of the lot quality assurance sampling (LQAS) method in identifying inadequately-performing health work-areas with that of using health and demographic surveillance system (HDSS) data and examined the feasibility of applying the method by field-level programme supervisors. The study was carried out in Matlab, the field site of ICDDR,B, where a HDSS has been in place for over 30 years. The LQAS method was applied in 57 work-areas of community health workers in ICDDR,B-served areas in Matlab during July-September 2002. The performance of the LQAS method in identifying work-areas with adequate and inadequate coverage of various health services was compared with that of the HDSS. The health service-coverage indicators included coverage of DPT, measles, BCG vaccination, and contraceptive use. The difference between the proportion of work-areas identified as inadequately performing using the LQAS method with fewer than 30 respondents and that identified using the HDSS was not statistically significant. The consistency between the LQAS method and the HDSS in identifying work-areas was greater for adequately-performing areas than for inadequately-performing areas. It was also observed that the field managers could be trained to apply the LQAS method in monitoring their performance in reaching the target population.
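
    The decision rule at the heart of LQAS is simple enough to sketch. The lot size and decision threshold below (n = 19, d = 13) are hypothetical, chosen only to illustrate how a work-area is classified and how binomial tail probabilities give the rule's error risks; the study's actual lot parameters are not stated in the abstract.

      from scipy.stats import binom

      def lqas_classify(covered, n=19, d=13):
          # Flag a work-area as inadequately performing when fewer than
          # d of the n sampled respondents received the service
          # (n and d are hypothetical values).
          return "inadequate" if covered < d else "adequate"

      def operating_characteristics(n=19, d=13, p_good=0.80, p_bad=0.50):
          # Risk of wrongly flagging an area whose true coverage is p_good,
          # and probability of correctly flagging one whose coverage is p_bad.
          false_alarm = binom.cdf(d - 1, n, p_good)
          detection = binom.cdf(d - 1, n, p_bad)
          return false_alarm, detection

      print(lqas_classify(covered=11))        # -> inadequate
      print(operating_characteristics())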

  5. A comparative study of interface reconstruction methods for multi-material ALE simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kucharik, Milan; Garimella, Rao; Schofield, Samuel

    2009-01-01

    In this paper we compare the performance of different methods for reconstructing interfaces in multi-material compressible flow simulations. The methods compared are a material-order-dependent Volume-of-Fluid (VOF) method, a material-order-independent VOF method based on power diagram partitioning of cells and the Moment-of-Fluid method (MOF). We demonstrate that the MOF method provides the most accurate tracking of interfaces, followed by the VOF method with the right material ordering. The material-order-independent VOF method performs somewhat worse than the above two while the solutions with VOF using the wrong material order are considerably worse.

  6. Comparative evaluation of performance measures for shading correction in time-lapse fluorescence microscopy.

    PubMed

    Liu, L; Kan, A; Leckie, C; Hodgkin, P D

    2017-04-01

    Time-lapse fluorescence microscopy is a valuable technology in cell biology, but it suffers from the inherent problem of intensity inhomogeneity due to uneven illumination or camera nonlinearity, known as shading artefacts. This will lead to inaccurate estimates of single-cell features such as average and total intensity. Numerous shading correction methods have been proposed to remove this effect. In order to compare the performance of different methods, many quantitative performance measures have been developed. However, there is little discussion about which performance measure should be generally applied for evaluation on real data, where the ground truth is absent. In this paper, the state-of-the-art shading correction methods and performance evaluation methods are reviewed. We implement 10 popular shading correction methods on two artificial datasets and four real ones. In order to make an objective comparison between those methods, we employ a number of quantitative performance measures. Extensive validation demonstrates that the coefficient of joint variation (CJV) is the most applicable measure in time-lapse fluorescence images. Based on this measure, we have proposed a novel shading correction method that performs better compared to well-established methods for a range of real data tested.
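
    For context, the coefficient of joint variation favoured above is usually defined over two intensity classes (e.g., cell foreground versus background) as the sum of the class standard deviations divided by the absolute difference of the class means, so lower is better. A minimal sketch, assuming the class masks are already available:

      import numpy as np

      def cjv(image, mask_a, mask_b):
          # Coefficient of joint variation between two pixel classes:
          # (sd_a + sd_b) / |mean_a - mean_b|. Residual shading inflates
          # the within-class spread, so lower values indicate better
          # correction.
          a, b = image[mask_a], image[mask_b]
          return (a.std() + b.std()) / abs(a.mean() - b.mean())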

  7. Comparing Performance of Methods to Deal with Differential Attrition in Lottery Based Evaluations

    ERIC Educational Resources Information Center

    Zamarro, Gema; Anderson, Kaitlin; Steele, Jennifer; Miller, Trey

    2016-01-01

    The purpose of this study is to study the performance of different methods (inverse probability weighting and estimation of informative bounds) to control for differential attrition by comparing the results of different methods using two datasets: an original dataset from Portland Public Schools (PPS) subject to high rates of differential…

  8. Comparative study of performance of neutral axis tracking based damage detection

    NASA Astrophysics Data System (ADS)

    Soman, R.; Malinowski, P.; Ostachowicz, W.

    2015-07-01

    This paper presents a comparative study of a novel SHM technique for damage isolation. The performance of the Neutral Axis (NA) tracking based damage detection strategy is compared to other popularly used vibration-based damage detection methods, viz. ECOMAC, the Mode Shape Curvature method, and the Strain Flexibility Index method. The sensitivity of the novel method is compared under changing ambient temperature conditions and in the presence of measurement noise. Finite Element Analysis (FEA) of the DTU 10 MW Wind Turbine was conducted to compare the local damage identification capability of each method, and the results are presented. Under the conditions examined, the proposed method was found to be robust to ambient condition changes and measurement noise. Its damage identification is, under the investigated damage scenarios, either on par with or better than the methods reported in the literature.

  9. Comparative studies of copy number variation detection methods for next-generation sequencing technologies.

    PubMed

    Duan, Junbo; Zhang, Ji-Gang; Deng, Hong-Wen; Wang, Yu-Ping

    2013-01-01

    Copy number variation (CNV) has played an important role in studies of susceptibility or resistance to complex diseases. Traditional methods such as fluorescence in situ hybridization (FISH) and array comparative genomic hybridization (aCGH) suffer from low resolution of genomic regions. Following the emergence of next generation sequencing (NGS) technologies, CNV detection methods based on the short read data have recently been developed. However, due to the relatively young age of the procedures, their performance is not fully understood. To help investigators choose suitable methods to detect CNVs, comparative studies are needed. We compared six publicly available CNV detection methods: CNV-seq, FREEC, readDepth, CNVnator, SegSeq and event-wise testing (EWT). They are evaluated on both simulated and real data with different experiment settings. The receiver operating characteristic (ROC) curve is employed to demonstrate detection performance in terms of sensitivity and specificity; box plots are used to compare performance in breakpoint and copy number estimation; Venn diagrams show the consistency among the methods; and the F-score quantifies the overlapping quality of detected CNVs. The computational demands are also studied. The results of our work provide a comprehensive evaluation of the performance of the selected CNV detection methods, which will help biological investigators choose the best possible method.
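
    As a small illustration of the sensitivity/specificity and F-score bookkeeping used in such comparisons, the sketch below scores a caller's output against a simulated truth; representing both as boolean arrays over fixed genome bins is a simplifying assumption.

      import numpy as np

      def detection_scores(truth, calls):
          # truth, calls: boolean arrays marking which genome bins fall
          # inside a CNV. Tally the confusion matrix and derive the
          # metrics used to compare callers.
          tp = np.sum(truth & calls)
          fp = np.sum(~truth & calls)
          fn = np.sum(truth & ~calls)
          tn = np.sum(~truth & ~calls)
          sensitivity = tp / (tp + fn)
          specificity = tn / (tn + fp)
          precision = tp / (tp + fp)
          f_score = 2 * precision * sensitivity / (precision + sensitivity)
          return sensitivity, specificity, f_score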

  10. Effect of patient selection method on provider group performance estimates.

    PubMed

    Thorpe, Carolyn T; Flood, Grace E; Kraft, Sally A; Everett, Christine M; Smith, Maureen A

    2011-08-01

    Performance measurement at the provider group level is increasingly advocated, but different methods for selecting patients when calculating provider group performance have received little evaluation. We compared 2 currently used methods according to characteristics of the patients selected and impact on performance estimates. We analyzed Medicare claims data for fee-for-service beneficiaries with diabetes ever seen at an academic multispecialty physician group in 2003 to 2004. We examined sample size, sociodemographics, clinical characteristics, and receipt of recommended diabetes monitoring in 2004 for the groups of patients selected using 2 methods implemented in large-scale performance initiatives: the Plurality Provider Algorithm and the Diabetes Care Home method. We examined differences among discordantly assigned patients to determine evidence for differential selection regarding these measures. Fewer patients were selected under the Diabetes Care Home method (n=3558) than under the Plurality Provider Algorithm (n=4859). Compared with the Plurality Provider Algorithm, the Diabetes Care Home method preferentially selected patients who were female, not entitled because of disability, older, more likely to have hypertension, less likely to have kidney disease or peripheral vascular disease, and who had lower levels of predicted utilization. Diabetes performance was higher under the Diabetes Care Home method, with 67% versus 58% receiving ≥1 A1c test, 70% versus 65% receiving ≥1 low-density lipoprotein (LDL) test, and 38% versus 37% receiving an eye examination. The method used to select patients when calculating provider group performance may affect patient case mix and estimated performance levels, and warrants careful consideration when comparing performance estimates.
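
    The core idea of the Plurality Provider Algorithm mentioned above can be sketched in a few lines: each patient is attributed to the provider group accounting for the most of their visits. Real implementations add eligibility windows, visit-type restrictions, and tie-breaking rules that are omitted here.

      from collections import Counter

      def plurality_assignment(visits):
          # visits: iterable of (patient_id, provider_group) claim records.
          # Returns {patient_id: group with the plurality of visits}.
          by_patient = {}
          for patient, group in visits:
              by_patient.setdefault(patient, Counter())[group] += 1
          return {p: c.most_common(1)[0][0] for p, c in by_patient.items()}

      print(plurality_assignment([(1, "A"), (1, "A"), (1, "B"), (2, "B")]))
      # -> {1: 'A', 2: 'B'}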

  11. Analysis of a virtual memory model for maintaining database views

    NASA Technical Reports Server (NTRS)

    Kinsley, Kathryn C.; Hughes, Charles E.

    1992-01-01

    This paper presents an analytical model for predicting the performance of a new support strategy for database views. This strategy, called the virtual method, is compared with traditional methods for supporting views. The analytical model's predictions of improved performance by the virtual method are then validated by comparing these results with those achieved in an experimental implementation.

  12. Comparing Evolutionary Programs and Evolutionary Pattern Search Algorithms: A Drug Docking Application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, W.E.

    1999-02-10

    Evolutionary programs (EPs) and evolutionary pattern search algorithms (EPSAs) are two general classes of evolutionary methods for optimizing on continuous domains. The relative performance of these methods has been evaluated on standard global optimization test functions, and these results suggest that EPSAs converge more robustly to near-optimal solutions than EPs. In this paper we evaluate the relative performance of EPSAs and EPs on a real-world application: flexible ligand binding in the Autodock docking software. We compare the performance of these methods on a suite of docking test problems. Our results confirm that EPSAs and EPs have comparable performance, and they suggest that EPSAs may be more robust on larger, more complex problems.

  13. Methods for comparative evaluation of propulsion system designs for supersonic aircraft

    NASA Technical Reports Server (NTRS)

    Tyson, R. M.; Mairs, R. Y.; Halferty, F. D., Jr.; Moore, B. E.; Chaloff, D.; Knudsen, A. W.

    1976-01-01

    The propulsion system comparative evaluation study was conducted to define a rapid, approximate method for evaluating the effects of propulsion system changes for an advanced supersonic cruise airplane, and to verify the approximate method by comparing its mission performance results with those from a more detailed analysis. A table look-up computer program was developed to determine nacelle drag increments for a range of parametric nacelle shapes and sizes. Aircraft sensitivities to propulsion parameters were defined. Nacelle shapes, installed weights, and installed performance were determined for four study engines selected from the NASA supersonic cruise aircraft research (SCAR) engine studies program. Both the rapid evaluation method (using sensitivities) and traditional preliminary design methods were then used to assess the four engines. The rapid method was found to compare well with the more detailed analyses.

  14. High performance bonded neo magnets using high density compaction

    NASA Astrophysics Data System (ADS)

    Herchenroeder, J.; Miller, D.; Sheth, N. K.; Foo, M. C.; Nagarathnam, K.

    2011-04-01

    This paper presents a manufacturing method called Combustion Driven Compaction (CDC) for the manufacture of isotropic bonded NdFeB magnets (bonded Neo). Magnets produced by the CDC method have density up to 6.5 g/cm3, which is 7-10% higher than that of commercially available bonded Neo magnets of the same shape. The performance of an actual seat motor with a representative CDC ring magnet is presented and compared with the seat motor performance with both commercial isotropic bonded Neo and anisotropic NdFeB rings of the same geometry. The comparisons are made at both room and elevated temperatures. The airgap flux for the magnet produced by the proposed method is 6% higher than that of the commercial isotropic bonded Neo magnet. After exposure to high temperature, the motor performance with this material is comparable to that with an anisotropic NdFeB magnet, owing to the superior thermal aging stability of isotropic NdFeB powders.

  15. Reach and Cost-Effectiveness of the PrePex Device for Safe Male Circumcision in Uganda

    PubMed Central

    Duffy, Kevin; Galukande, Moses; Wooding, Nick; Dea, Monica; Coutinho, Alex

    2013-01-01

    Introduction Modelling, supported by the USAID Health Policy Initiative and UNAIDS, performed in 2011, indicated that Uganda would need to perform 4.2 million medical male circumcisions (MMCs) to reach 80% prevalence. Since 2010 Uganda has completed 380,000 circumcisions, and has set a national target of 1 million for 2013. Objective To evaluate the relative reach and cost-effectiveness of PrePex compared to the current surgical SMC method and to determine the effect that this might have in helping to achieve the Uganda national SMC targets. Methods A cross-sectional descriptive cost-analysis study conducted at International Hospital Kampala over ten weeks from August to October 2012. Data collected during the performance of 625 circumcisions using PrePex was compared to data previously collected from 10,000 circumcisions using a surgical circumcision method at the same site. Ethical approval was obtained. Results The moderate adverse events (AE) ratio when using the PrePex device was 2% and no severe adverse events were encountered, which is comparable to the surgical method, thus the AE rate has no effect on the reach or cost-effectiveness of PrePex. The unit cost to perform one circumcision using PrePex is $30.55, 35% ($7.90) higher than the current surgical method, but the PrePex method improves operator efficiency by 60%, meaning that a team can perform 24 completed circumcisions compared to 15 by the surgical method. The cost-effectiveness of PrePex, comparing the cost of performing circumcisions to the future cost savings of potentially averted HIV infections, is just 2% less than the current surgical method, at a device cost price of $20. Conclusion PrePex is a viable SMC tool for scale-up with unrivalled potential for superior reach, however national targets can only be met with effective demand creation and availability of trained human resource. PMID:23717402

  16. The importance of quality control in validating concentrations ...

    EPA Pesticide Factsheets

    A national-scale survey of 247 contaminants of emerging concern (CECs), including organic and inorganic chemical compounds, and microbial contaminants, was conducted in source and treated drinking water samples from 25 treatment plants across the United States. Multiple methods were used to determine these CECs, including six analytical methods to measure 174 pharmaceuticals, personal care products, and pesticides. A three-component quality assurance/quality control (QA/QC) program was designed for the subset of 174 CECs which allowed us to assess and compare performances of the methods used. The three components included: 1) a common field QA/QC protocol and sample design, 2) individual investigator-developed method-specific QA/QC protocols, and 3) a suite of 46 method comparison analytes that were determined in two or more analytical methods. Overall method performance for the 174 organic chemical CECs was assessed by comparing spiked recoveries in reagent, source, and treated water over a two-year period. In addition to the 247 CECs reported in the larger drinking water study, another 48 pharmaceutical compounds measured did not consistently meet predetermined quality standards. Methodologies that did not seem suitable for these analytes are overviewed. The need to exclude analytes based on method performance demonstrates the importance of additional QA/QC protocols. This paper compares the method performance of six analytical methods used to measure 174 emer

  17. A comparative study of the effect of triage training by role-playing and educational video on the knowledge and performance of emergency medical service staffs in Iran.

    PubMed

    Aghababaeian, Hamidreza; Sedaghat, Soheila; Tahery, Noorallah; Moghaddam, Ali Sadeghi; Maniei, Mohammad; Bahrami, Nosrat; Ahvazi, Ladan Araghi

    2013-12-01

    Educating emergency medical staffs in triage skills is an important aspect of disaster preparedness. The aim of the study was to compare the effect of role-playing and educational video presentation on the learning and performance of the emergency medical service staffs in Khozestan, Iran. A total of 144 emergency technicians were randomly classified into two groups. A researcher trained the first group using an educational video method and the second group with a role-playing method. Data were collected before, immediately after, and 15 days after training using a questionnaire covering the three domains of demographic information, triage knowledge, and triage performance. The data were analyzed using defined knowledge and performance parameters. There was no significant difference between the two training methods in immediate knowledge (P = .2), lasting knowledge (P = .05), and immediate performance (P = .35), but there was a statistical advantage for the role-playing method in lasting performance (P = .02). The two educational methods equally increase knowledge and performance, but the role-playing method may have a more desirable and lasting effect on performance.

  18. Ensemble of trees approaches to risk adjustment for evaluating a hospital's performance.

    PubMed

    Liu, Yang; Traskin, Mikhail; Lorch, Scott A; George, Edward I; Small, Dylan

    2015-03-01

    A commonly used method for evaluating a hospital's performance on an outcome is to compare the hospital's observed outcome rate to the hospital's expected outcome rate given its patient (case) mix and service. The process of calculating the hospital's expected outcome rate given its patient mix and service is called risk adjustment (Iezzoni 1997). Risk adjustment is critical for accurately evaluating and comparing hospitals' performances since we would not want to unfairly penalize a hospital just because it treats sicker patients. The key to risk adjustment is accurately estimating the probability of an outcome given patient characteristics. For cases with binary outcomes, the method that is commonly used in risk adjustment is logistic regression. In this paper, we consider ensemble of trees methods as alternatives for risk adjustment, including random forests and Bayesian additive regression trees (BART). Both random forests and BART are modern machine learning methods that have been shown recently to have excellent performance for prediction of outcomes in many settings. We apply these methods to carry out risk adjustment for the performance of neonatal intensive care units (NICU). We show that these ensemble of trees methods outperform logistic regression in predicting mortality among babies treated in NICU, and provide a superior method of risk adjustment compared to logistic regression.
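
    A compact sketch of the observed-to-expected comparison described above, with either a logistic model or a random forest supplying the case-mix-adjusted expected probabilities; cross-fitted (out-of-sample) prediction and the BART model, which the paper also evaluates, are omitted for brevity.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression

      def observed_to_expected(X, y, hospital_ids, model):
          # O/E ratio per hospital: observed outcome count divided by the
          # sum of risk-adjusted predicted probabilities for its patients.
          model.fit(X, y)
          p = model.predict_proba(X)[:, 1]
          return {h: y[hospital_ids == h].sum() / p[hospital_ids == h].sum()
                  for h in np.unique(hospital_ids)}

      # Either risk model can be plugged in:
      logit = LogisticRegression(max_iter=1000)
      forest = RandomForestClassifier(n_estimators=500)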

  19. Theoretical and experimental comparative analysis of beamforming methods for loudspeaker arrays under given performance constraints

    NASA Astrophysics Data System (ADS)

    Olivieri, Ferdinando; Fazi, Filippo Maria; Nelson, Philip A.; Shin, Mincheol; Fontana, Simone; Yue, Lang

    2016-07-01

    Methods for beamforming are available that provide the signals used to drive an array of sources for the implementation of systems for so-called personal audio. In this work, the performance of the delay-and-sum (DAS) method and of three widely used methods for optimal beamforming is compared by means of computer simulations and experiments in an anechoic environment, using a linear array of sources with given constraints on the quality of the reproduced field at the listener's position and a limit on the input energy to the array. Using the DAS method as a benchmark for performance, the frequency domain responses of the loudspeaker filters can be characterized in three regions. In the first region, at low frequencies, input signals designed with the optimal methods are identical and provide higher directivity performance than that of the DAS. In the second region, performance of the optimal methods is similar to that of the DAS method. The third region starts above the limit due to spatial aliasing. A method is presented to estimate the boundaries of these regions.
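
    For reference, the DAS benchmark drives every source with the same amplitude and a phase that time-aligns the radiated wavefronts at the steering angle. A free-field, far-field sketch for a linear array (the geometry and numbers are illustrative, not the paper's setup):

      import numpy as np

      def das_weights(n_src, spacing_m, steer_deg, freq_hz, c=343.0):
          # Delay-and-sum driving weights: uniform amplitudes with phases
          # that compensate the inter-source propagation delays toward
          # the steering direction.
          n = np.arange(n_src)
          delays = n * spacing_m * np.sin(np.deg2rad(steer_deg)) / c
          return np.exp(-1j * 2.0 * np.pi * freq_hz * delays) / n_src

      # Example: 8 sources at 10 cm pitch steered to 30 degrees at 2 kHz.
      w = das_weights(8, 0.10, 30.0, 2000.0)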

  20. Application of wavelet and Fourier transforms as powerful alternatives for derivative spectrophotometry in analysis of binary mixtures: A comparative study

    NASA Astrophysics Data System (ADS)

    Hassan, Said A.; Abdel-Gawad, Sherif A.

    2018-02-01

    Two signal processing methods, Continuous Wavelet Transform (CWT) and Discrete Fourier Transform (DFT), were introduced as alternatives to classical Derivative Spectrophotometry (DS) in the analysis of binary mixtures. To show the advantages of these methods, a comparative study was performed on a binary mixture of Naltrexone (NTX) and Bupropion (BUP). The methods were compared by analyzing laboratory-prepared mixtures of the two drugs. By comparing the performance of the three methods, it was shown that the CWT and DFT methods are more efficient and advantageous in the analysis of mixtures with overlapped spectra than DS. The three signal processing methods were adopted for the quantification of NTX and BUP in pure and tablet forms. The adopted methods were validated according to the ICH guideline, where accuracy, precision, and specificity were found to be within appropriate limits.
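
    CWT-based resolution of overlapped spectra typically transforms the absorbance spectrum at a chosen scale and quantifies each analyte at a wavelength where the other component's transformed signal crosses zero. A single-scale sketch with a Mexican-hat wavelet; the wavelet family and scale are assumptions, since the paper's exact processing parameters are not given in the abstract.

      import numpy as np

      def ricker(points, a):
          # Mexican-hat (Ricker) wavelet, written out to keep the sketch
          # self-contained.
          t = np.arange(points) - (points - 1) / 2.0
          return (1.0 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

      def cwt_single_scale(absorbance, scale=20):
          # Single-scale CWT of an absorbance spectrum by convolution.
          # Quantification would read this transformed signal at a
          # zero-crossing wavelength of the interfering component.
          wav = ricker(int(min(10 * scale, len(absorbance))), scale)
          return np.convolve(absorbance, wav, mode="same")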

  1. Referenceless MR thermometry-a comparison of five methods.

    PubMed

    Zou, Chao; Tie, Changjun; Pan, Min; Wan, Qian; Liang, Changhong; Liu, Xin; Chung, Yiu-Cho

    2017-01-07

    Proton resonance frequency shift (PRFS) MR thermometry is commonly used to measure temperature in thermotherapy. The method requires a baseline temperature map and is therefore motion sensitive. Several referenceless MR thermometry methods were proposed to address this problem but their performances have never been compared. This study compared the performance of five referenceless methods through simulation, heating of ex vivo tissues and in vivo imaging of the brain and liver of healthy volunteers. Mean, standard deviation, root mean square, 2/98 percentiles of error were used as performance metrics. Probability density functions (PDF) of the error distribution for these methods in the different tests were also compared. The results showed that the phase gradient method (PG) exhibited largest error in all scenarios. The original method (ORG) and the complex field estimation method (CFE) had similar performance in all experiments. The phase finite difference method (PFD) and the near harmonic method (NH) were better than other methods, especially in the lower signal-to-noise ratio (SNR) and fast changing field cases. Except for PG, the PDFs of each method were very similar among the different experiments. Since phase unwrapping in ORG and NH is computationally demanding and subject to image SNR, PFD and CFE would be good choices as they do not need phase unwrapping. The results here would facilitate the choice of appropriate referenceless methods in various MR thermometry applications.

  2. Referenceless MR thermometry—a comparison of five methods

    NASA Astrophysics Data System (ADS)

    Zou, Chao; Tie, Changjun; Pan, Min; Wan, Qian; Liang, Changhong; Liu, Xin; Chung, Yiu-Cho

    2017-01-01

    Proton resonance frequency shift (PRFS) MR thermometry is commonly used to measure temperature in thermotherapy. The method requires a baseline temperature map and is therefore motion sensitive. Several referenceless MR thermometry methods were proposed to address this problem but their performances have never been compared. This study compared the performance of five referenceless methods through simulation, heating of ex vivo tissues and in vivo imaging of the brain and liver of healthy volunteers. Mean, standard deviation, root mean square, 2/98 percentiles of error were used as performance metrics. Probability density functions (PDF) of the error distribution for these methods in the different tests were also compared. The results showed that the phase gradient method (PG) exhibited largest error in all scenarios. The original method (ORG) and the complex field estimation method (CFE) had similar performance in all experiments. The phase finite difference method (PFD) and the near harmonic method (NH) were better than other methods, especially in the lower signal-to-noise ratio (SNR) and fast changing field cases. Except for PG, the PDFs of each method were very similar among the different experiments. Since phase unwrapping in ORG and NH is computationally demanding and subject to image SNR, PFD and CFE would be good choices as they do not need phase unwrapping. The results here would facilitate the choice of appropriate referenceless methods in various MR thermometry applications.
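
    For orientation, PRFS thermometry maps a phase change to a temperature change via ΔT = Δφ/(γ·α·B0·TE), and referenceless variants replace the pre-heating baseline image with a background phase estimated from the unheated surround. Below is a minimal polynomial-background sketch in the spirit of the 'ORG'-style approach; the field strength, echo time, and polynomial order are placeholders, and the phase is assumed already unwrapped.

      import numpy as np

      GAMMA = 2 * np.pi * 42.58e6   # proton gyromagnetic ratio [rad/s/T]
      ALPHA = -0.01e-6              # PRF thermal coefficient [1/degC]

      def referenceless_dT(phase, border_mask, B0=3.0, TE=0.01, order=2):
          # Fit a 2-D polynomial to the phase in the unheated border
          # region, extrapolate it over the image as the baseline, and
          # convert the residual phase to a temperature change.
          ny, nx = phase.shape
          y, x = np.mgrid[0:ny, 0:nx]
          terms = [x**i * y**j for i in range(order + 1)
                   for j in range(order + 1 - i)]
          A = np.stack([t.ravel() for t in terms], axis=1).astype(float)
          m = border_mask.ravel()
          coef, *_ = np.linalg.lstsq(A[m], phase.ravel()[m], rcond=None)
          baseline = (A @ coef).reshape(ny, nx)
          return (phase - baseline) / (GAMMA * ALPHA * B0 * TE)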

  3. Comparing the Effect of Thinking Maps Training Package Developed by the Thinking Maps Method on the Reading Performance of Dyslexic Students

    ERIC Educational Resources Information Center

    Faramarzi, Salar; Moradi, Mohammadreza; Abedi, Ahmad

    2018-01-01

    The present study aimed to develop the thinking maps training package and compare its training effect with the thinking maps method on the reading performance of second and fifth grade of elementary school male dyslexic students. For this mixed method exploratory study, from among the above mentioned grades' students in Isfahan, 90 students who…

  4. Comparative performance of conventional OPC concrete and HPC designed by densified mixture design algorithm

    NASA Astrophysics Data System (ADS)

    Huynh, Trong-Phuoc; Hwang, Chao-Lung; Yang, Shu-Ti

    2017-12-01

    This experimental study evaluated the performance of normal ordinary Portland cement (OPC) concrete and high-performance concrete (HPC) designed by the conventional (ACI) method and the densified mixture design algorithm (DMDA) method, respectively. Engineering properties and durability performance of both the OPC and HPC samples were studied using tests of workability, compressive strength, water absorption, ultrasonic pulse velocity, and electrical surface resistivity. Test results show that the HPC exhibited good fresh properties and showed better performance in terms of strength and durability compared with the OPC.

  5. Comparing Indirect Effects in Different Groups in Single-Group and Multi-Group Structural Equation Models

    PubMed Central

    Ryu, Ehri; Cheong, Jeewon

    2017-01-01

    In this article, we evaluated the performance of statistical methods in single-group and multi-group analysis approaches for testing group difference in indirect effects and for testing simple indirect effects in each group. We also investigated whether the performance of the methods in the single-group approach was affected when the assumption of equal variance was not satisfied. The assumption was critical for the performance of the two methods in the single-group analysis: the method using a product term for testing the group difference in a single path coefficient, and the Wald test for testing the group difference in the indirect effect. Bootstrap confidence intervals in the single-group approach and all methods in the multi-group approach were not affected by the violation of the assumption. We compared the performance of the methods and provided recommendations. PMID:28553248
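
    One common form of the Wald test referred to above compares the groups' indirect effects a·b using a first-order delta-method (Sobel-type) variance. A minimal sketch, assuming independent groups; as the abstract notes, bootstrap confidence intervals were more robust than this test to violated assumptions.

      import numpy as np
      from scipy.stats import norm

      def wald_indirect_difference(a1, se_a1, b1, se_b1,
                                   a2, se_a2, b2, se_b2):
          # Indirect effect in each group is a*b; its delta-method
          # variance is b^2*se_a^2 + a^2*se_b^2. The group difference is
          # tested with a normal-theory z statistic.
          ind1, ind2 = a1 * b1, a2 * b2
          v1 = b1**2 * se_a1**2 + a1**2 * se_b1**2
          v2 = b2**2 * se_a2**2 + a2**2 * se_b2**2
          z = (ind1 - ind2) / np.sqrt(v1 + v2)
          return z, 2 * norm.sf(abs(z))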

  6. Forecasting electricity usage using univariate time series models

    NASA Astrophysics Data System (ADS)

    Hock-Eam, Lim; Chee-Yin, Yip

    2014-12-01

    Electricity is one of the important energy sources. A sufficient supply of electricity is vital to support a country's development and growth. Due to changing socio-economic characteristics, increasing competition, and deregulation of the electricity supply industry, electricity demand forecasting is even more important than before. It is imperative to evaluate and compare the predictive performance of various forecasting methods. This will provide further insights into the weaknesses and strengths of each method. In the literature, there is mixed evidence on the best method for forecasting electricity demand. This paper aims to compare the predictive performance of univariate time series models for forecasting electricity demand using monthly data on maximum electricity load in Malaysia from January 2003 to December 2013. Results reveal that the Box-Jenkins method produces the best out-of-sample predictive performance, while the Holt-Winters exponential smoothing method is a good forecasting method for in-sample predictive performance.
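
    A sketch of the kind of out-of-sample comparison described, pitting a Box-Jenkins (seasonal ARIMA) fit against Holt-Winters smoothing on monthly load data; statsmodels is assumed for the model fits, and the ARIMA order shown is a placeholder rather than the paper's identified model.

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA
      from statsmodels.tsa.holtwinters import ExponentialSmoothing

      def compare_out_of_sample(y, holdout=12):
          # Fit both models on a training window and compare holdout MAPE.
          train, test = y[:-holdout], y[-holdout:]
          arima = ARIMA(train, order=(1, 1, 1),
                        seasonal_order=(1, 1, 1, 12)).fit()
          hw = ExponentialSmoothing(train, trend="add", seasonal="add",
                                    seasonal_periods=12).fit()
          mape = lambda f: np.mean(np.abs((test - f) / test)) * 100
          return {"Box-Jenkins (ARIMA)": mape(arima.forecast(holdout)),
                  "Holt-Winters": mape(hw.forecast(holdout))}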

  7. Comparing DIF Methods for Data with Dual Dependency

    ERIC Educational Resources Information Center

    Jin, Ying; Kang, Minsoo

    2016-01-01

    Background: The current study compared four differential item functioning (DIF) methods to examine their performances in terms of accounting for dual dependency (i.e., person and item clustering effects) simultaneously by a simulation study, which is not sufficiently studied under the current DIF literature. The four methods compared are logistic…

  8. Influenza detection and prediction algorithms: comparative accuracy trial in Östergötland county, Sweden, 2008-2012.

    PubMed

    Spreco, A; Eriksson, O; Dahlström, Ö; Timpka, T

    2017-07-01

    Methods for the detection of influenza epidemics and prediction of their progress have seldom been comparatively evaluated using prospective designs. This study aimed to perform a prospective comparative trial of algorithms for the detection and prediction of increased local influenza activity. Data on clinical influenza diagnoses recorded by physicians and syndromic data from a telenursing service were used. Five detection and three prediction algorithms previously evaluated in public health settings were calibrated and then evaluated over 3 years. When applied to diagnostic data, only detection using the Serfling regression method and prediction using the non-adaptive log-linear regression method showed acceptable performance during winter influenza seasons. For the syndromic data, none of the detection algorithms displayed a satisfactory performance, while non-adaptive log-linear regression was the best performing prediction method. We conclude that the available algorithms for influenza detection and prediction display satisfactory performance when applied to local diagnostic data during winter influenza seasons, but do not display consistent performance when applied to local syndromic data. Further evaluations and research on combinations of methods of these types in public health information infrastructures for 'nowcasting' (integrated detection and prediction) of influenza activity are warranted.
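
    Of the algorithms named above, Serfling regression is the most classical: fit a linear trend plus annual harmonics to non-epidemic weeks and flag weeks that exceed the prediction band. A simplified sketch (the baseline-week selection and the exact band width vary by implementation):

      import numpy as np

      def serfling_epidemic_flags(counts, z=1.96, period=52.0):
          # Regress weekly counts on a trend plus sine/cosine seasonal
          # terms and flag values above prediction + z residual SDs.
          # In practice the regression is fit on non-epidemic weeks only;
          # that selection step is simplified away here.
          y = np.asarray(counts, dtype=float)
          t = np.arange(len(y), dtype=float)
          X = np.column_stack([np.ones_like(t), t,
                               np.sin(2 * np.pi * t / period),
                               np.cos(2 * np.pi * t / period)])
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          fitted = X @ beta
          sigma = (y - fitted).std()
          return y > fitted + z * sigma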

  9. Roka Listeria detection method using transcription mediated amplification to detect Listeria species in select foods and surfaces. Performance Tested Method(SM) 011201.

    PubMed

    Hua, Yang; Kaplan, Shannon; Reshatoff, Michael; Hu, Ernie; Zukowski, Alexis; Schweis, Franz; Gin, Cristal; Maroni, Brett; Becker, Michael; Wisniewski, Michele

    2012-01-01

    The Roka Listeria Detection Assay was compared to the reference culture methods for nine select foods and three select surfaces. The Roka method used Half-Fraser Broth for enrichment at 35 ± 2 °C for 24-28 h. Comparison of Roka's method to reference methods requires an unpaired approach. Each method had a total of 545 samples inoculated with a Listeria strain. Each food and surface was inoculated with a different strain of Listeria at two different levels per method. For the dairy products (Brie cheese, whole milk, and ice cream), our method was compared to AOAC Official Method(SM) 993.12. For the ready-to-eat meats (deli chicken, cured ham, chicken salad, and hot dogs) and environmental surfaces (sealed concrete, stainless steel, and plastic), the samples were compared to the U.S. Department of Agriculture/Food Safety and Inspection Service-Microbiology Laboratory Guidebook (USDA/FSIS-MLG) method MLG 8.07. Cold-smoked salmon and romaine lettuce were compared to the U.S. Food and Drug Administration/Bacteriological Analytical Manual, Chapter 10 (FDA/BAM) method. Roka's method had 358 positives out of 545 total inoculated samples, compared to 332 positives for the reference methods. Overall, the probability of detection analysis of the results showed better or equivalent performance compared to the reference methods.

  10. A Residual Mass Ballistic Testing Method to Compare Armor Materials or Components (Residual Mass Ballistic Testing Method)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benjamin Langhorst; Thomas M Lillo; Henry S Chu

    2014-05-01

    A statistics based ballistic test method is presented for use when comparing multiple groups of test articles of unknown relative ballistic perforation resistance. The method is intended to be more efficient than many traditional methods for research and development testing. To establish the validity of the method, it is employed in this study to compare test groups of known relative ballistic performance. Multiple groups of test articles were perforated using consistent projectiles and impact conditions. Test groups were made of rolled homogeneous armor (RHA) plates and differed in thickness. After perforation, each residual projectile was captured behind the target and its mass was measured. The residual masses measured for each test group were analyzed to provide ballistic performance rankings with associated confidence levels. When compared to traditional V50 methods, the residual mass (RM) method was found to require fewer test events and be more tolerant of variations in impact conditions.
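
    The statistical core of the RM method, comparing groups of residual masses rather than estimating a V50, can be illustrated with a rank-based two-sample test; the paper's exact ranking and confidence procedure is not spelled out in the abstract, so this is a generic stand-in.

      from scipy.stats import mannwhitneyu

      def rank_by_residual_mass(group_a, group_b):
          # Compare two groups' residual projectile masses; a higher
          # typical residual mass suggests the target offered less
          # perforation resistance. Returns the one-sided p-value that
          # group A leaves larger residual masses than group B.
          stat, p = mannwhitneyu(group_a, group_b, alternative="greater")
          return stat, p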

  11. A Comparative Study of Performance in the Conners' Continuous Performance Test between Brazilian and North American Children

    ERIC Educational Resources Information Center

    Miranda, Monica Carolina; Sinnes, Elaine Girao; Pompeia, Sabine; Bueno, Orlando Francisco Amodeo

    2008-01-01

    Objective: The present study investigated the performance of Brazilian children in the Continuous Performance Test, CPT-II, and compared results to those of the norms obtained in the United States. Method: The U.S. norms were compared to those of a Brazilian sample composed of 6- to 11-year-olds separated into 4 age-groups (half boys) that…

  12. Comparative Study of Hand-Sutured versus Circular Stapled Anastomosis for Gastrojejunostomy in Laparoscopy Assisted Distal Gastrectomy.

    PubMed

    Seo, Su Hyun; Kim, Ki Han; Kim, Min Chan; Choi, Hong Jo; Jung, Ghap Joong

    2012-06-01

    The mechanical stapler is regarded as a good alternative to the hand-sewing technique when used in gastric reconstruction. The circular stapling method has been widely applied in gastrectomy (open or laparoscopic) for gastric cancer. We illustrated and compared the hand-sutured method to the circular stapling method, for Billroth-II, in patients who underwent laparoscopy assisted distal gastrectomy for gastric cancer. Between April 2009 and May 2011, 60 patients who underwent laparoscopy assisted distal gastrectomy, with Billroth-II, were enrolled. Hand-sutured Billroth-II was performed in 40 patients (manual group) and circular stapler Billroth-II was performed in 20 patients (stapler group). Clinicopathological features and post-operative outcomes were evaluated and compared between the two groups. No significant differences were observed in clinicopathologic parameters and post-operative outcomes, except in the operation times. Operation times and anastomosis times were significantly shorter in the stapler group (P=0.004 and P<0.001). Compared to the hand-sutured method, the circular stapling method can be applied safely and more efficiently when performing Billroth-II anastomosis after laparoscopy assisted distal gastrectomy in patients with gastric cancer.

  13. Impact of abbreviated lecture with interactive mini-cases vs traditional lecture on student performance in the large classroom.

    PubMed

    Marshall, Leisa L; Nykamp, Diane L; Momary, Kathryn M

    2014-12-15

    To compare the impact of 2 different teaching and learning methods on student mastery of learning objectives in a pharmacotherapy module in the large classroom setting. Two teaching and learning methods were implemented and compared in a required pharmacotherapy module for 2 years. The first year, multiple interactive mini-cases with in-class individual assessment and an abbreviated lecture were used to teach osteoarthritis; a traditional lecture with 1 in-class case discussion was used to teach gout. In the second year, the same topics were used but the methods were flipped. Student performance on pre/post individual readiness assessment tests (iRATs), case questions, and subsequent examinations was compared each year by the teaching and learning method and then between years by topic for each method. Students also voluntarily completed a 20-item evaluation of the teaching and learning methods. Postpresentation iRATs were significantly higher than prepresentation iRATs for each topic each year with the interactive mini-cases; there was no significant difference in iRATs before and after traditional lecture. For osteoarthritis, postpresentation iRATs after interactive mini-cases in year 1 were significantly higher than postpresentation iRATs after traditional lecture in year 2; the difference in iRATs for gout per learning method was not significant. The difference between examination performance for osteoarthritis and gout was not significant when the teaching and learning methods were compared. On the student evaluations, 2 items were significant in both years when answers were compared by teaching and learning method. Each year, students ranked their class participation higher with interactive cases than with traditional lecture, but both years they reported enjoying the traditional lecture format more. Multiple interactive mini-cases with an abbreviated lecture improved immediate mastery of learning objectives compared to a traditional lecture format, regardless of therapeutic topic, but did not improve student performance on subsequent examinations.

  14. Performance Evaluation and Online Realization of Data-driven Normalization Methods Used in LC/MS based Untargeted Metabolomics Analysis.

    PubMed

    Li, Bo; Tang, Jing; Yang, Qingxia; Cui, Xuejiao; Li, Shuang; Chen, Sijie; Cao, Quanxing; Xue, Weiwei; Chen, Na; Zhu, Feng

    2016-12-13

    In untargeted metabolomics analysis, several factors (e.g., unwanted experimental & biological variations and technical errors) may hamper the identification of differential metabolic features, which requires the data-driven normalization approaches before feature selection. So far, ≥16 normalization methods have been widely applied for processing the LC/MS based metabolomics data. However, the performance and the sample size dependence of those methods have not yet been exhaustively compared and no online tool for comparatively and comprehensively evaluating the performance of all 16 normalization methods has been provided. In this study, a comprehensive comparison on these methods was conducted. As a result, 16 methods were categorized into three groups based on their normalization performances across various sample sizes. The VSN, the Log Transformation and the PQN were identified as methods of the best normalization performance, while the Contrast consistently underperformed across all sub-datasets of different benchmark data. Moreover, an interactive web tool comprehensively evaluating the performance of 16 methods specifically for normalizing LC/MS based metabolomics data was constructed and hosted at http://server.idrb.cqu.edu.cn/MetaPre/. In summary, this study could serve as a useful guidance to the selection of suitable normalization methods in analyzing the LC/MS based metabolomics data.

  15. Performance Evaluation and Online Realization of Data-driven Normalization Methods Used in LC/MS based Untargeted Metabolomics Analysis

    PubMed Central

    Li, Bo; Tang, Jing; Yang, Qingxia; Cui, Xuejiao; Li, Shuang; Chen, Sijie; Cao, Quanxing; Xue, Weiwei; Chen, Na; Zhu, Feng

    2016-01-01

    In untargeted metabolomics analysis, several factors (e.g., unwanted experimental & biological variations and technical errors) may hamper the identification of differential metabolic features, which requires the data-driven normalization approaches before feature selection. So far, ≥16 normalization methods have been widely applied for processing the LC/MS based metabolomics data. However, the performance and the sample size dependence of those methods have not yet been exhaustively compared and no online tool for comparatively and comprehensively evaluating the performance of all 16 normalization methods has been provided. In this study, a comprehensive comparison on these methods was conducted. As a result, 16 methods were categorized into three groups based on their normalization performances across various sample sizes. The VSN, the Log Transformation and the PQN were identified as methods of the best normalization performance, while the Contrast consistently underperformed across all sub-datasets of different benchmark data. Moreover, an interactive web tool comprehensively evaluating the performance of 16 methods specifically for normalizing LC/MS based metabolomics data was constructed and hosted at http://server.idrb.cqu.edu.cn/MetaPre/. In summary, this study could serve as a useful guidance to the selection of suitable normalization methods in analyzing the LC/MS based metabolomics data. PMID:27958387
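
    Two of the best performers named above are simple enough to sketch directly: probabilistic quotient normalization (PQN) scales each sample by the median quotient against a median reference spectrum, and the log transformation stabilizes variance. A minimal sketch for an intensity matrix with samples in rows (strictly positive intensities assumed):

      import numpy as np

      def pqn_normalize(X):
          # X: (n_samples, n_features) positive intensity matrix.
          # Divide each sample by the median of its feature-wise
          # quotients against the median reference spectrum.
          ref = np.median(X, axis=0)
          quotients = X / ref
          dilution = np.median(quotients, axis=1, keepdims=True)
          return X / dilution

      def log_transform(X, pseudo=1.0):
          # Simple log transformation with a pseudo-count to guard zeros.
          return np.log(X + pseudo)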

  16. Comparative Study of Hand-Sutured versus Circular Stapled Anastomosis for Gastrojejunostomy in Laparoscopy Assisted Distal Gastrectomy

    PubMed Central

    Seo, Su Hyun; Kim, Min Chan; Choi, Hong Jo; Jung, Ghap Joong

    2012-01-01

    Purpose Mechanical stapler is regarded as a good alternative to the hand sewing technique, when used in gastric reconstruction. The circular stapling method has been widely applied to gastrectomy (open orlaparoscopic), for gastric cancer. We illustrated and compared the hand-sutured method to the circular stapling method, for Billroth-II, in patients who underwent laparoscopy assisted distal gastrectomy for gastric cancer. Materials and Methods Between April 2009 and May 2011, 60 patients who underwent laparoscopy assisted distal gastrectomy, with Billroth-II, were enrolled. Hand-sutured Billroth-II was performed in 40 patients (manual group) and circular stapler Billroth-II was performed in 20 patients (stapler group). Clinicopathological features and post-operative outcomes were evaluated and compared between the two groups. Results Nosignificant differences were observed in clinicopathologic parameters and post-operative outcomes, except in the operation times. Operation times and anastomosis times were significantly shorter in the stapler group (P=0.004 and P<0.001). Conclusions Compared to the hand-sutured method, the circular stapling method can be applied safely and more efficiently, when performing Billroth-II anastomosis, after laparoscopy assisted distal gastrectomy in patients with gastric cancer. PMID:22792525

  17. PET Timing Performance Measurement Method Using NEMA NEC Phantom

    NASA Astrophysics Data System (ADS)

    Wang, Gin-Chung; Li, Xiaoli; Niu, Xiaofeng; Du, Huini; Balakrishnan, Karthik; Ye, Hongwei; Burr, Kent

    2016-06-01

    When comparing the performance of time-of-flight whole-body PET scanners, timing resolution is one important benchmark. Timing performance is heavily influenced by detector and electronics design. Even for the same scanner design, measured timing resolution is a function of many factors including the activity concentration, geometry and positioning of the radioactive source. Due to lack of measurement standards, the timing resolutions reported in the literature may not be directly comparable and may not describe the timing performance under clinically relevant conditions. In this work we introduce a method which makes use of the data acquired during the standard NEMA Noise-Equivalent-Count-Rate (NECR) measurements, and compare it to several other timing resolution measurement methods. The use of the NEMA NEC phantom, with well-defined dimensions and radioactivity distribution, is attractive because it has been widely accepted in the industry and allows for the characterization of timing resolution across a more relevant range of conditions.
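
    Whatever the source configuration, the timing-resolution estimate itself usually comes down to the FWHM of the coincidence time-difference distribution. A sketch of that step (a Gaussian shape is assumed; randoms subtraction and per-crystal corrections are omitted):

      import numpy as np
      from scipy.optimize import curve_fit

      def timing_resolution_fwhm(delta_t_ps):
          # Histogram the coincidence time differences, fit a Gaussian,
          # and report FWHM = 2*sqrt(2*ln 2)*sigma.
          counts, edges = np.histogram(delta_t_ps, bins=200)
          centers = 0.5 * (edges[:-1] + edges[1:])
          gauss = lambda t, A, mu, s: A * np.exp(-(t - mu)**2 / (2 * s**2))
          p0 = [counts.max(), centers[np.argmax(counts)], delta_t_ps.std()]
          (A, mu, s), _ = curve_fit(gauss, centers, counts, p0=p0)
          return 2.355 * abs(s)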

  18. Delineating Species with DNA Barcodes: A Case of Taxon Dependent Method Performance in Moths

    PubMed Central

    Kekkonen, Mari; Mutanen, Marko; Kaila, Lauri; Nieminen, Marko; Hebert, Paul D. N.

    2015-01-01

    The accelerating loss of biodiversity has created a need for more effective ways to discover species. Novel algorithmic approaches for analyzing sequence data combined with rapidly expanding DNA barcode libraries provide a potential solution. While several analytical methods are available for the delineation of operational taxonomic units (OTUs), few studies have compared their performance. This study compares the performance of one morphology-based and four DNA-based (BIN, parsimony networks, ABGD, GMYC) methods on two groups of gelechioid moths. It examines 92 species of Finnish Gelechiinae and 103 species of Australian Elachistinae which were delineated by traditional taxonomy. The results reveal a striking difference in performance between the two taxa with all four DNA-based methods. OTU counts in the Elachistinae showed a wider range and a relatively low (ca. 65%) OTU match with reference species while OTU counts were more congruent and performance was higher (ca. 90%) in the Gelechiinae. Performance rose when only monophyletic species were compared, but the taxon-dependence remained. None of the DNA-based methods produced a correct match with non-monophyletic species, but singletons were handled well. A simulated test of morphospecies-grouping performed very poorly in revealing taxon diversity in these small, dull-colored moths. Despite the strong performance of analyses based on DNA barcodes, species delineated using single-locus mtDNA data are best viewed as OTUs that require validation by subsequent integrative taxonomic work. PMID:25849083

  19. Using string invariants for prediction searching for optimal parameters

    NASA Astrophysics Data System (ADS)

    Bundzel, Marek; Kasanický, Tomáš; Pinčák, Richard

    2016-02-01

    We have developed a novel prediction method based on string invariants. The method does not require learning but a small set of parameters must be set to achieve optimal performance. We have implemented an evolutionary algorithm for the parametric optimization. We have tested the performance of the method on artificial and real world data and compared the performance to statistical methods and to a number of artificial intelligence methods. We have used data and the results of a prediction competition as a benchmark. The results show that the method performs well in single step prediction but the method's performance for multiple step prediction needs to be improved. The method works well for a wide range of parameters.

  20. The Performance of Methods to Test Upper-Level Mediation in the Presence of Nonnormal Data

    ERIC Educational Resources Information Center

    Pituch, Keenan A.; Stapleton, Laura M.

    2008-01-01

    A Monte Carlo study compared the statistical performance of standard and robust multilevel mediation analysis methods to test indirect effects for a cluster randomized experimental design under various departures from normality. The performance of these methods was examined for an upper-level mediation process, where the indirect effect is a fixed…

  1. Comparative analysis of the modified enclosed energy metric for self-focusing holograms from digital lensless holographic microscopy.

    PubMed

    Trujillo, Carlos; Garcia-Sucerquia, Jorge

    2015-06-01

    A comparative analysis of the performance of the modified enclosed energy (MEE) method for self-focusing holograms recorded with digital lensless holographic microscopy is presented. Although the MEE method has been published previously, no extended analysis of its performance has been reported. We have tested the MEE in terms of the minimum axial distance allowed between the reconstructed holograms when searching for the focal plane, and the elapsed time to obtain the focused image. These parameters have been compared with those of some of the methods already reported in the literature. The MEE achieves better results in terms of self-focusing quality, but at a higher computational cost. Despite its longer processing time, the method remains within a time frame that keeps it technologically attractive. Modeled and experimental holograms have been utilized in this work to perform the comparative study.
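
    A generic version of the self-focusing loop being benchmarked: numerically reconstruct the hologram over candidate distances and keep the one maximizing a focus metric. Gradient energy stands in for the MEE metric here, since the MEE definition is not reproduced in the abstract; propagation uses the standard angular spectrum method.

      import numpy as np

      def angular_spectrum(u0, z, wavelength, dx):
          # Scalar-diffraction propagation of field u0 over distance z.
          ny, nx = u0.shape
          fx = np.fft.fftfreq(nx, dx)
          fy = np.fft.fftfreq(ny, dx)
          FX, FY = np.meshgrid(fx, fy)
          arg = 1.0 - (wavelength * FX)**2 - (wavelength * FY)**2
          kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
          return np.fft.ifft2(np.fft.fft2(u0) * np.exp(1j * kz * z))

      def autofocus(u0, z_range, wavelength, dx, metric=None):
          # Reconstruct at each candidate distance and return the z with
          # the sharpest amplitude image. Gradient energy is a stand-in
          # focus metric, not the paper's MEE metric.
          if metric is None:
              metric = lambda a: np.sum(np.abs(np.gradient(a))**2)
          scores = [metric(np.abs(angular_spectrum(u0, z, wavelength, dx)))
                    for z in z_range]
          return z_range[int(np.argmax(scores))]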

  2. Comparative assessment of three standardized robotic surgery training methods.

    PubMed

    Hung, Andrew J; Jayaratna, Isuru S; Teruya, Kara; Desai, Mihir M; Gill, Inderbir S; Goh, Alvin C

    2013-10-01

    To evaluate three standardized robotic surgery training methods, inanimate, virtual reality and in vivo, for their construct validity. To explore the concept of cross-method validity, where the relative performance of each method is compared. Robotic surgical skills were prospectively assessed in 49 participating surgeons who were classified as follows: 'novice/trainee': urology residents, previous experience <30 cases (n = 38) and 'experts': faculty surgeons, previous experience ≥30 cases (n = 11). Three standardized, validated training methods were used: (i) structured inanimate tasks; (ii) virtual reality exercises on the da Vinci Skills Simulator (Intuitive Surgical, Sunnyvale, CA, USA); and (iii) a standardized robotic surgical task in a live porcine model with performance graded by the Global Evaluative Assessment of Robotic Skills (GEARS) tool. A Kruskal-Wallis test was used to evaluate performance differences between novices and experts (construct validity). Spearman's correlation coefficient (ρ) was used to measure the association of performance across inanimate, simulation and in vivo methods (cross-method validity). Novice and expert surgeons had previously performed a median (range) of 0 (0-20) and 300 (30-2000) robotic cases, respectively (P < 0.001). Construct validity: experts consistently outperformed residents with all three methods (P < 0.001). Cross-method validity: overall performance of inanimate tasks significantly correlated with virtual reality robotic performance (ρ = -0.7, P < 0.001) and in vivo robotic performance based on GEARS (ρ = -0.8, P < 0.0001). Virtual reality performance and in vivo tissue performance were also found to be strongly correlated (ρ = 0.6, P < 0.001). We propose the novel concept of cross-method validity, which may provide a method of evaluating the relative value of various forms of skills education and assessment. We externally confirmed the construct validity of each featured training tool.
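
    The cross-method validity figures above are plain Spearman rank correlations between surgeons' scores on the different training methods; a minimal illustration (the scores are made up, with the first list scored so that lower is better, which is what produces a negative ρ as in the study):

      from scipy.stats import spearmanr

      # Hypothetical per-surgeon scores on two training methods.
      inanimate_scores = [12, 8, 15, 6, 9, 14]    # e.g., errors: lower = better
      simulator_scores = [80, 92, 70, 95, 88, 75] # e.g., points: higher = better
      rho, p = spearmanr(inanimate_scores, simulator_scores)
      print(rho, p)   # strong negative rank correlation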

  3. Comparative Evaluation of Quantitative Test Methods for Gases on a Hard Surface

    DTIC Science & Technology

    2017-02-01

    Report ECBC-TR-1426 (Rastogi, Vipin). From the experimental design: each quantitative method was performed three times on three consecutive days. For the CD runs, three

  4. Comparison of Dry Medium Culture Plates for Mesophilic Aerobic Bacteria in Milk, Ice Cream, Ham, and Codfish Fillet Products

    PubMed Central

    Park, Junghyun; Kim, Myunghee

    2013-01-01

    This study was performed to compare the performance of the Sanita-Kun dry medium culture plate with those of a traditional culture medium and the Petrifilm dry medium culture plate for the enumeration of mesophilic aerobic bacteria in milk, ice cream, ham, and codfish fillet. Mesophilic aerobic bacteria were comparatively evaluated in milk, ice cream, ham, and codfish fillet using Sanita-Kun aerobic count (SAC), Petrifilm aerobic count (PAC), and traditional plate count agar (PCA) media. According to the results, all methods showed high correlations of 0.989~1.000, and no significant differences were observed in enumerating the mesophilic aerobic bacteria in the tested food products. The SAC method was easier to perform and allowed more efficient colony counting than the PCA and PAC methods. Therefore, we concluded that the SAC method offers an acceptable alternative to the PCA and PAC methods for counting the mesophilic aerobic bacteria in milk, ice cream, ham, and codfish fillet products. PMID:24551829

  5. Clinical efficacy of electronic apex locators: systematic review.

    PubMed

    Martins, Jorge N R; Marques, Duarte; Mata, António; Caramês, João

    2014-06-01

    Apical constriction has been proposed as the most appropriate apical limit for the endodontic working length. Although it is the most widely used, the radiographic method of working length determination has recognized limitations: it lacks precision because it relies on the average position of the apical constriction. Electronic apex locators have been presented as an alternative to odontometry performed by radiography. These devices detect the transition from the pulp to the periodontal tissue, which is anatomically very close to the apical constriction, and may therefore perform with improved accuracy. A systematic review was performed to compare the radiographic and electronic methods. Clinical studies that compared both methods were searched for in 7 electronic databases, a manual search was performed on the bibliographies of the articles retrieved from the electronic databases, and authors were contacted for references to further research not detected by the electronic and manual searches. Twenty-one articles were selected. The majority were comparative or evaluation studies, and very few clinical studies comparing both methods are available. Several methodological limitations are present in the collected articles and are debated in this review. Although the available scientific evidence base is limited and at considerable risk of bias, it is still possible to conclude that the apex locator reduces patient radiation exposure and that the electronic method may perform better in working length determination. At least one radiographic control should be performed to detect possible errors of the electronic devices. Copyright © 2014 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  6. Performance of the DFTB method in comparison to DFT and semiempirical methods for geometries and energies of C20-C86 fullerene isomers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Guishan; Irle, Stephan; Morokuma, Keiji

    2005-07-20

    The research described in this product was performed in part in the Environmental Molecular Sciences Laboratory, a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research and located at Pacific Northwest National Laboratory. The performance of both non-iterative (NCC) and self-consistent charge (SCC) versions of the density functional tight binding (DFTB) method, as well as the AM1 and PM3 methods, has been compared with the B3LYP method, a hybrid density functional theory (DFT) method, for equilibrium geometries and relative energies of various isomers of C20–C86 fullerenes. Both NCC- and SCC-DFTB methods compare very favorably with B3LYP both in geometries and isomer relative energies, while AM1 and PM3 do noticeably worse.

  7. Isotonic Regression Based-Method in Quantitative High-Throughput Screenings for Genotoxicity

    PubMed Central

    Fujii, Yosuke; Narita, Takeo; Tice, Raymond Richard; Takeda, Shunich

    2015-01-01

    Quantitative high-throughput screenings (qHTSs) for genotoxicity are conducted as part of comprehensive toxicology screening projects. The most widely used method is to compare the dose-response data of a wild-type and DNA repair gene knockout mutants, using model-fitting to the Hill equation (HE). However, this method performs poorly when the observed viability does not fit the equation well, as frequently happens in qHTS. More capable methods must be developed for qHTS, where large data variations are unavoidable. In this study, we applied an isotonic regression (IR) method and compared its performance with that of HE under multiple data conditions. When the dose-response data were suitable for drawing HE curves with upper and lower asymptotes and experimental random errors were small, HE was better than IR, but when random errors were large, there was no difference between HE and IR. However, when the drawn curves did not have two asymptotes, IR showed better performance (p < 0.05, exact paired Wilcoxon test) with higher specificity (65% in HE vs. 96% in IR). In summary, IR performed similarly to HE when dose-response data were optimal, whereas IR clearly performed better under suboptimal conditions. These findings indicate that IR would be useful in qHTS for comparing dose-response data. PMID:26673567
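
    The contrast described above can be sketched as follows. This is a minimal illustration, not the authors' code; the four-parameter Hill form, doses, noise level and parameter bounds are all assumptions.

        # Illustrative sketch: parametric Hill-equation fit versus isotonic
        # regression on a synthetic, monotone-decreasing dose-response curve.
        import numpy as np
        from scipy.optimize import curve_fit
        from sklearn.isotonic import IsotonicRegression

        def hill(x, bottom, top, ec50, n):
            """Four-parameter Hill equation (decreasing in x for n > 0)."""
            return bottom + (top - bottom) / (1.0 + (x / ec50) ** n)

        dose = np.logspace(-2, 2, 15)            # hypothetical doses
        rng = np.random.default_rng(1)
        viability = hill(dose, 0.1, 1.0, 5.0, 1.5) \
            + rng.normal(scale=0.08, size=dose.size)

        # Parametric fit: works well only when both asymptotes are present.
        params, _ = curve_fit(hill, dose, viability, p0=[0.0, 1.0, 1.0, 1.0],
                              bounds=([-0.5, 0.5, 1e-3, 0.1],
                                      [0.5, 1.5, 100.0, 5.0]))

        # Isotonic regression: only assumes a monotone (here decreasing)
        # response, so it needs no asymptotes at all.
        iso = IsotonicRegression(increasing=False)
        viability_iso = iso.fit_transform(dose, viability)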

  8. Determination of Total Selenium in Infant Formulas: Comparison of the Performance of FIA and MCFA Flow Systems

    PubMed Central

    Pistón, Mariela; Knochen, Moisés

    2012-01-01

    Two flow methods, based, respectively, on flow-injection analysis (FIA) and on multicommutated flow analysis (MCFA), were compared with regard to their use for the determination of total selenium in infant formulas by hydride-generation atomic absorption spectrometry. The method based on multicommutation provided lower detection and quantification limits (0.08 and 0.27 μg L−1, compared to 0.59 and 1.95 μg L−1, respectively), higher sampling frequency (160 versus 70 samples per hour), and reduced reagent consumption. Linearity, precision, and accuracy were similar for the two methods compared. It was concluded that, while both methods proved to be appropriate for the purpose, the MCFA-based method exhibited a better performance. PMID:22505923

  9. An improved partial least-squares regression method for Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Momenpour Tehran Monfared, Ali; Anis, Hanan

    2017-10-01

    It is known that the performance of partial least-squares (PLS) regression analysis can be improved using the backward variable selection method (BVSPLS). In this paper, we further improve BVSPLS based on a novel selection mechanism. The proposed method is based on sorting the weighted regression coefficients, and the importance of each variable in the sorted list is then evaluated using the root mean square error of prediction (RMSEP) criterion in each iteration step. Our improved BVSPLS (IBVSPLS) method has been applied to leukemia and heparin data sets and led to an improvement in the limit of detection of Raman biosensing ranging from 10% to 43% compared to PLS. Our IBVSPLS was also compared to the jack-knifing (simpler) and genetic algorithm (more complex) methods. Our method was consistently better than the jack-knifing method and showed either similar or better performance compared to the genetic algorithm.
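
    A much-simplified sketch of the backward-selection idea is given below: rank variables by absolute PLS coefficient and drop the weakest while cross-validated RMSEP keeps improving. The weighting scheme and stopping rule of the actual IBVSPLS are not reproduced, and the data are synthetic.

        # Minimal sketch of backward variable selection around PLS regression.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        def rmsep(X, y, n_components=2):
            """Cross-validated root mean square error of prediction."""
            pls = PLSRegression(n_components=n_components)
            y_hat = cross_val_predict(pls, X, y, cv=5).ravel()
            return np.sqrt(np.mean((y - y_hat) ** 2))

        def backward_select(X, y):
            keep = list(range(X.shape[1]))
            best = rmsep(X[:, keep], y)
            while len(keep) > 2:
                pls = PLSRegression(n_components=2).fit(X[:, keep], y)
                weakest = int(np.argmin(np.abs(pls.coef_).ravel()))
                trial = keep[:weakest] + keep[weakest + 1:]
                err = rmsep(X[:, trial], y)
                if err >= best:      # stop when dropping a variable hurts RMSEP
                    break
                keep, best = trial, err
            return keep, best

        rng = np.random.default_rng(7)
        X = rng.normal(size=(60, 30))
        y = X[:, :3] @ np.array([2.0, -1.0, 1.5]) + rng.normal(scale=0.5, size=60)
        selected, err = backward_select(X, y)
        print(selected[:10], round(err, 3))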

  10. Pulse Transit Time Measurement Using Seismocardiogram, Photoplethysmogram, and Acoustic Recordings: Evaluation and Comparison.

    PubMed

    Yang, Chenxi; Tavassolian, Negar

    2018-05-01

    This work proposes a novel method of pulse transit time (PTT) measurement. The proximal arterial location data are collected from seismocardiogram (SCG) recordings by placing a micro-electromechanical accelerometer on the chest wall. The distal arterial location data are recorded using an acoustic sensor placed inside the ear. The performance of distal location recordings is evaluated by comparing SCG-acoustic and SCG-photoplethysmogram (PPG) measurements. PPG and acoustic performances under motion noise are also compared. Experimental results suggest comparable performances for the acoustic-based and PPG-based devices. The feasibility of each PTT measurement method is validated for blood pressure evaluations and its limitations are analyzed.

  11. Website-based PNG image steganography using the modified Vigenere Cipher, least significant bit, and dictionary based compression methods

    NASA Astrophysics Data System (ADS)

    Rojali, Salman, Afan Galih; George

    2017-08-01

    Along with the development of information technology, various adverse actions that are difficult to avoid are emerging. One such action is data theft. Therefore, this study discusses cryptography and steganography techniques that aim to overcome these problems. This study uses the modified Vigenere Cipher, Least Significant Bit and Dictionary Based Compression methods. To determine the performance of the study, the Peak Signal to Noise Ratio (PSNR) method is used as an objective measure and the Mean Opinion Score (MOS) method as a subjective measure; the performance of this study is also compared to other methods such as Spread Spectrum and Pixel Value Differencing. After comparison, it can be concluded that this study provides better performance than the other methods (Spread Spectrum and Pixel Value Differencing), with a range of MSE values (0.0191622-0.05275) and PSNR (60.909 to 65.306) for a hidden file size of 18 kb, and a MOS value range (4.214 to 4.722), i.e. image quality that approaches very good.
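
    Two of the measurable ingredients above, LSB embedding and PSNR, can be sketched as follows. The Vigenere encryption and dictionary compression stages are omitted, and the cover image and payload are synthetic.

        # Illustrative sketch of least-significant-bit (LSB) embedding and the
        # PSNR quality metric; not the paper's full pipeline.
        import numpy as np

        def embed_lsb(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
            """Hide a bit stream in the least significant bit of each pixel."""
            stego = cover.copy().ravel()
            stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits
            return stego.reshape(cover.shape)

        def psnr(cover: np.ndarray, stego: np.ndarray) -> float:
            mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
            return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

        rng = np.random.default_rng(2)
        cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
        payload = rng.integers(0, 2, size=1000, dtype=np.uint8)
        stego = embed_lsb(cover, payload)
        print(f"PSNR = {psnr(cover, stego):.1f} dB")  # LSB keeps PSNR high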

  12. Comparative Performance of Engines Using a Carburetor, Manifold Injection, and Cylinder Injection

    NASA Technical Reports Server (NTRS)

    Schey, Oscar W; Clark, J Denny

    1939-01-01

    The comparative performance was determined of engines using three methods of mixing the fuel and the air: the use of a carburetor, manifold injection, and cylinder injection. The tests were made of a single-cylinder engine with a Wright 1820-G air-cooled cylinder. Each method of mixing the fuel and the air was investigated over a range of fuel-air ratios from 0.10 to the limit of stable operation and at engine speeds of 1,500 and 1,900 r.p.m. The comparative performance with a fuel-air ratio of 0.08 was investigated for speeds from 1,300 to 1,900 r.p.m. The results show that the power obtained with each method closely followed the volumetric efficiency; the power was therefore the highest with cylinder injection because this method had less manifold restriction. The values of minimum specific fuel consumption obtained with each method of mixing of fuel and air were the same. For the same engine and cooling conditions, the cylinder temperatures are the same regardless of the method used for mixing the fuel and the air.

  13. 75 FR 14170 - Medical Device Epidemiology Network: Developing Partnership Between the Center for Devices and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-24

    ... methods for medical device comparative analyses, best practices and best design and analysis methods. II... the performance of medical devices (including comparative effectiveness studies). The centers...

  14. New Performance Metrics for Quantitative Polymerase Chain Reaction-Based Microbial Source Tracking Methods

    EPA Science Inventory

    Binary sensitivity and specificity metrics are not adequate to describe the performance of quantitative microbial source tracking methods because the estimates depend on the amount of material tested and limit of detection. We introduce a new framework to compare the performance ...

  15. Using the surface panel method to predict the steady performance of ducted propellers

    NASA Astrophysics Data System (ADS)

    Cai, Hao-Peng; Su, Yu-Min; Li, Xin; Shen, Hai-Long

    2009-12-01

    A new numerical method was developed for predicting the steady hydrodynamic performance of ducted propellers. A potential based surface panel method was applied both to the duct and the propeller, and the interaction between them was solved by an induced velocity potential iterative method. Compared with the induced velocity iterative method, the method presented can save programming and calculating time. Numerical results for a JD simplified ducted propeller series showed that the method presented is effective for predicting the steady hydrodynamic performance of ducted propellers.

  16. The contribution of Raman spectroscopy to the analytical quality control of cytotoxic drugs in a hospital environment: eliminating the exposure risks for staff members and their work environment.

    PubMed

    Bourget, Philippe; Amin, Alexandre; Vidal, Fabrice; Merlette, Christophe; Troude, Pénélope; Baillet-Guffroy, Arlette

    2014-08-15

    The purpose of the study was to perform a comparative analysis of the technical performance, respective costs and environmental effect of two invasive analytical methods (HPLC and UV/visible-FTIR) as compared to a new non-invasive analytical technique (Raman spectroscopy). Three pharmacotherapeutic models were used to compare the analytical performances of the three analytical techniques. Statistical inter-method correlation analysis was performed using non-parametric rank correlation tests. The study's economic component combined calculations relative to the depreciation of the equipment and the estimated cost of an AQC unit of work. In all cases, the analytical validation parameters of the three techniques were satisfactory, and strong correlations between the two spectroscopic techniques vs. HPLC were found. In addition, Raman spectroscopy was found to be superior to the other techniques on numerous key criteria, including complete safety for operators and their occupational environment, a non-invasive procedure, no need for consumables, and a low operating cost. Finally, Raman spectroscopy appears superior for technical, economic and environmental objectives, as compared with the other, invasive analytical methods. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Comparing the performance of FA, DFA and DMA using different synthetic long-range correlated time series

    PubMed Central

    Shao, Ying-Hui; Gu, Gao-Feng; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Sornette, Didier

    2012-01-01

    Notwithstanding the significant efforts to develop estimators of long-range correlations (LRC) and to compare their performance, no clear consensus exists on what is the best method and under which conditions. In addition, synthetic tests suggest that the performance of LRC estimators varies when using different generators of LRC time series. Here, we compare the performances of four estimators [Fluctuation Analysis (FA), Detrended Fluctuation Analysis (DFA), Backward Detrending Moving Average (BDMA), and Centred Detrending Moving Average (CDMA)]. We use three different generators [Fractional Gaussian Noises, and two ways of generating Fractional Brownian Motions]. We find that CDMA has the best performance and DFA is only slightly worse in some situations, while FA performs the worst. In addition, CDMA and DFA are less sensitive to the scaling range than FA. Hence, CDMA and DFA remain “The Methods of Choice” in determining the Hurst index of time series. PMID:23150785
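
    Of the estimators compared above, DFA is the easiest to sketch. The following minimal implementation (linear detrending, arbitrary window sizes, white-noise test signal) is illustrative only and ignores the refinements studied in the paper.

        # Rough sketch of detrended fluctuation analysis (DFA).
        import numpy as np

        def dfa(x, scales):
            y = np.cumsum(x - np.mean(x))          # integrated profile
            fluct = []
            for s in scales:
                n_win = len(y) // s
                f2 = []
                for i in range(n_win):
                    seg = y[i * s:(i + 1) * s]
                    t = np.arange(s)
                    trend = np.polyval(np.polyfit(t, seg, 1), t)  # detrend
                    f2.append(np.mean((seg - trend) ** 2))
                fluct.append(np.sqrt(np.mean(f2)))
            # Slope of log F(s) vs log s estimates the scaling exponent.
            return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

        rng = np.random.default_rng(3)
        white_noise = rng.normal(size=4096)
        print(dfa(white_noise, scales=[16, 32, 64, 128, 256]))  # ~0.5 expected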

  18. A Novel Unsupervised Adaptive Learning Method for Long-Term Electromyography (EMG) Pattern Recognition

    PubMed Central

    Huang, Qi; Yang, Dapeng; Jiang, Li; Zhang, Huajie; Liu, Hong; Kotani, Kiyoshi

    2017-01-01

    In the long term, the performance of pattern recognition-based myoelectric control methods degrades because of a variety of interfering factors. This paper proposes an adaptive learning method with low computational cost to mitigate this effect in unsupervised adaptive learning scenarios. We present a particle adaptive classifier (PAC), constructed from a particle adaptive learning strategy and a universal incremental least square support vector classifier (LS-SVC). We compared PAC performance with those of an incremental support vector classifier (ISVC) and a non-adapting SVC (NSVC) in a long-term pattern recognition task in both unsupervised and supervised adaptive learning scenarios. Retraining time cost and recognition accuracy were compared by validating the classification performance on both simulated and realistic long-term EMG data. The classification results on realistic long-term EMG data showed that PAC significantly decreased the performance degradation in unsupervised adaptive learning scenarios compared with NSVC (9.03% ± 2.23%, p < 0.05) and ISVC (13.38% ± 2.62%, p = 0.001), and reduced the retraining time cost compared with ISVC (2 ms per updating cycle vs. 50 ms per updating cycle). PMID:28608824

  19. A Novel Unsupervised Adaptive Learning Method for Long-Term Electromyography (EMG) Pattern Recognition.

    PubMed

    Huang, Qi; Yang, Dapeng; Jiang, Li; Zhang, Huajie; Liu, Hong; Kotani, Kiyoshi

    2017-06-13

    In the long term, the performance of pattern recognition-based myoelectric control methods degrades because of a variety of interfering factors. This paper proposes an adaptive learning method with low computational cost to mitigate this effect in unsupervised adaptive learning scenarios. We present a particle adaptive classifier (PAC), constructed from a particle adaptive learning strategy and a universal incremental least square support vector classifier (LS-SVC). We compared PAC performance with those of an incremental support vector classifier (ISVC) and a non-adapting SVC (NSVC) in a long-term pattern recognition task in both unsupervised and supervised adaptive learning scenarios. Retraining time cost and recognition accuracy were compared by validating the classification performance on both simulated and realistic long-term EMG data. The classification results on realistic long-term EMG data showed that PAC significantly decreased the performance degradation in unsupervised adaptive learning scenarios compared with NSVC (9.03% ± 2.23%, p < 0.05) and ISVC (13.38% ± 2.62%, p = 0.001), and reduced the retraining time cost compared with ISVC (2 ms per updating cycle vs. 50 ms per updating cycle).

  20. Improving the performances of autofocus based on adaptive retina-like sampling model

    NASA Astrophysics Data System (ADS)

    Hao, Qun; Xiao, Yuqing; Cao, Jie; Cheng, Yang; Sun, Ce

    2018-03-01

    An adaptive retina-like sampling model (ARSM) is proposed to balance autofocusing accuracy and efficiency. Based on the model, we carry out comparative experiments between the proposed method and the traditional method in terms of accuracy, full width at half maximum (FWHM) and time consumption. Results show that the performance of our method is better than that of the traditional method. Meanwhile, typical autofocus functions, including sum-modified-Laplacian (SML), Laplacian (LAP), mid-frequency DCT (MDCT) and absolute Tenengrad (ATEN), are compared through comparative experiments. The smallest FWHM is obtained with LAP, which is therefore more suitable for evaluating accuracy than the other autofocus functions, while MDCT is the most suitable for evaluating real-time capability.
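
    One of the autofocus functions named above, the sum-modified-Laplacian, might be implemented as sketched below; the retina-like sampling model itself is not shown, and the step size and test images are assumptions.

        # Hypothetical sketch of the sum-modified-Laplacian (SML) focus measure.
        import numpy as np

        def sml(image: np.ndarray, step: int = 1) -> float:
            img = image.astype(float)
            dx = np.abs(2 * img[:, step:-step]
                        - img[:, 2 * step:] - img[:, :-2 * step])
            dy = np.abs(2 * img[step:-step, :]
                        - img[2 * step:, :] - img[:-2 * step, :])
            # Crop to a common region and sum the modified Laplacian responses.
            return float(dx[step:-step, :].sum() + dy[:, step:-step].sum())

        # A sharper (higher-contrast) image yields a larger focus value.
        rng = np.random.default_rng(4)
        blurred = rng.normal(size=(100, 100)) * 0.1
        sharp = rng.normal(size=(100, 100))
        assert sml(sharp) > sml(blurred)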

  1. Real-time performance assessment and adaptive control for a water chiller unit in an HVAC system

    NASA Astrophysics Data System (ADS)

    Bai, Jianbo; Li, Yang; Chen, Jianhao

    2018-02-01

    The paper proposes an adaptive control method for a water chiller unit in an HVAC system. Based on minimum variance evaluation, the adaptive control method was used to achieve better control of the water chiller unit. To verify its performance, the proposed method was compared with a conventional PID controller; the simulation results showed that the adaptive control method had control performance superior to that of the conventional PID controller.
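
    For reference, the baseline controller in such a comparison is a discrete PID loop. The sketch below uses an invented first-order plant and made-up gains, so it illustrates the controller structure rather than the paper's actual setup.

        # Minimal discrete PID controller sketch with a toy "chiller" plant.
        class PID:
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_error = 0.0

            def update(self, setpoint, measurement):
                error = setpoint - measurement
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                return (self.kp * error + self.ki * self.integral
                        + self.kd * derivative)

        pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=1.0)
        temp = 12.0                     # chilled-water temperature, deg C
        for _ in range(50):
            u = pid.update(setpoint=7.0, measurement=temp)
            # Crude first-order dynamics: relaxes to 12 C, cooled by u < 0.
            temp += (-0.1 * (temp - 12.0) + 0.05 * u) * pid.dt
        print(round(temp, 2))           # approaches the 7 C setpoint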

  2. Evaluation of Course-Specific Self-Efficacy Assessment Methods.

    ERIC Educational Resources Information Center

    Bong, Mimi

    A study was conducted to compare three methods of assessing course-level self-efficacy beliefs within a multitrait multimethod (MTMM) framework. The methods involved: (1) successfully performing a number of domain-related tasks; (2) obtaining specific letter grades in the course; and (3) successfully performing generic academic tasks in the…

  3. Bayesian techniques for analyzing group differences in the Iowa Gambling Task: A case study of intuitive and deliberate decision-makers.

    PubMed

    Steingroever, Helen; Pachur, Thorsten; Šmíra, Martin; Lee, Michael D

    2018-06-01

    The Iowa Gambling Task (IGT) is one of the most popular experimental paradigms for comparing complex decision-making across groups. Most commonly, IGT behavior is analyzed using frequentist tests to compare performance across groups, and to compare inferred parameters of cognitive models developed for the IGT. Here, we present a Bayesian alternative based on Bayesian repeated-measures ANOVA for comparing performance, and a suite of three complementary model-based methods for assessing the cognitive processes underlying IGT performance. The three model-based methods involve Bayesian hierarchical parameter estimation, Bayes factor model comparison, and Bayesian latent-mixture modeling. We illustrate these Bayesian methods by applying them to test the extent to which differences in intuitive versus deliberate decision style are associated with differences in IGT performance. The results show that intuitive and deliberate decision-makers behave similarly on the IGT, and the modeling analyses consistently suggest that both groups of decision-makers rely on similar cognitive processes. Our results challenge the notion that individual differences in intuitive and deliberate decision styles have a broad impact on decision-making. They also highlight the advantages of Bayesian methods, especially their ability to quantify evidence in favor of the null hypothesis, and that they allow model-based analyses to incorporate hierarchical and latent-mixture structures.

  4. A simple high performance liquid chromatography method for analyzing paraquat in soil solution samples.

    PubMed

    Ouyang, Ying; Mansell, Robert S; Nkedi-Kizza, Peter

    2004-01-01

    A high performance liquid chromatography (HPLC) method with UV detection was developed to analyze paraquat (1,1'-dimethyl-4,4'-dipyridinium dichloride) herbicide content in soil solution samples. The analytical method was compared with the liquid scintillation counting (LSC) method using 14C-paraquat. Agreement between the two methods was reasonable. However, the detection limit for paraquat analysis was 0.5 mg L(-1) by the HPLC method and 0.05 mg L(-1) by the LSC method. The LSC method was, therefore, 10 times more sensitive than the HPLC method for solution concentrations less than 1 mg L(-1). In spite of the higher detection limit, the UV-based (nonradioactive) HPLC method provides an inexpensive and environmentally safe means for determining paraquat concentration in soil solution compared with the 14C-LSC method.

  5. Improving gross anatomy learning using reciprocal peer teaching.

    PubMed

    Manyama, Mange; Stafford, Renae; Mazyala, Erick; Lukanima, Anthony; Magele, Ndulu; Kidenya, Benson R; Kimwaga, Emmanuel; Msuya, Sifael; Kauki, Julius

    2016-03-22

    The use of cadavers in human anatomy teaching requires an adequate number of anatomy instructors who can provide close supervision of the students. Most medical schools face a shortage of trained individuals to teach anatomy, so innovative techniques are needed to impart adequate and relevant anatomical knowledge and skills. This study was conducted to evaluate the traditional teaching method and the reciprocal peer teaching (RPT) method during anatomy dissection. Debriefing surveys were administered to 227 first-year medical students regarding the merits, demerits and impact of both the RPT and traditional teaching experiences on students' preparedness prior to dissection, professionalism and communication skills. Of these, 159 (70 %) completed the survey on the traditional method and 148 (65.2 %) completed the survey on the RPT method. An observation tool for anatomy faculty was used to assess collaboration, professionalism and teaching skills among students. Students' examination scores before the introduction of RPT were compared with examination scores after its introduction. Our results show that the mean performance of students on objective examinations was significantly higher after the introduction of RPT than before [63.7 ± 11.4 versus 58.6 ± 10, mean difference 5.1; 95 % CI = 4.0-6.3; p-value < 0.0001]. Students with low performance prior to RPT benefited more in terms of examination performance than those with higher performance [mean difference 7.6; p-value < 0.0001]. Regarding students' opinions on the traditional method versus RPT, 83 % of students agreed or strongly agreed that they were more likely to read the dissection manual before the RPT dissection session, compared to 35 % for the traditional method. Over 85 % of respondents reported that RPT improved their confidence and ability to present information to peers and faculty, compared to 38 % for the traditional method. The majority of faculty reported that the learning environment of the dissection groups was very active during RPT sessions and that professionalism was observed by most students during discussions. The introduction of RPT in our anatomy dissection laboratory was generally beneficial to both students and faculty. Both objective (student performance) and subjective data indicate that RPT improved students' performance and had a positive impact on the learning experience. Our future plan is to continue the RPT practice and to evaluate the RPT protocol continually.

  6. A procedural method for the efficient implementation of full-custom VLSI designs

    NASA Technical Reports Server (NTRS)

    Belk, P.; Hickey, N.

    1987-01-01

    An embedded language system for the layout of very large scale integration (VLSI) circuits is examined. It is shown that, through judicious use of this system, a large variety of circuits can be designed with circuit density and performance comparable to traditional full-custom design methods, but with design costs more comparable to semi-custom design methods. The high performance of this methodology is attributable to the flexibility of procedural descriptions of VLSI layouts and to a number of automatic and semi-automatic tools within the system.

  7. Comparative performance of two quantitative safety signalling methods: implications for use in a pharmacovigilance department.

    PubMed

    Almenoff, June S; LaCroix, Karol K; Yuen, Nancy A; Fram, David; DuMouchel, William

    2006-01-01

    There is increasing interest in using disproportionality-based signal detection methods to support postmarketing safety surveillance activities. Two commonly used methods, empirical Bayes multi-item gamma Poisson shrinker (MGPS) and proportional reporting ratio (PRR), perform differently with respect to the number and types of signals detected. The goal of this study was to compare and analyse the performance characteristics of these two methods, to understand why they differ and to consider the practical implications of these differences for a large, industry-based pharmacovigilance department. We compared the numbers and types of signals of disproportionate reporting (SDRs) obtained with MGPS and PRR using two postmarketing safety databases and a simulated database. We recorded signal counts and performed a qualitative comparison of the drug-event combinations signalled by the two methods as well as a sensitivity analysis to better understand how the thresholds commonly used for these methods impact their performance. PRR detected more SDRs than MGPS. We observed that MGPS is less subject to confounding by demographic factors because it employs stratification and is more stable than PRR when report counts are low. Simulation experiments performed using published empirical thresholds demonstrated that PRR detected false-positive signals at a rate of 1.1%, while MGPS did not detect any statistical false positives. In an attempt to separate the effect of choice of signal threshold from more fundamental methodological differences, we performed a series of experiments in which we modified the conventional threshold values for each method so that each method detected the same number of SDRs for the example drugs studied. This analysis, which provided quantitative examples of the relationship between the published thresholds for the two methods, demonstrates that the signalling criterion published for PRR has a higher signalling frequency than that published for MGPS. The performance differences between the PRR and MGPS methods are related to (i) greater confounding by demographic factors with PRR; (ii) a higher tendency of PRR to detect false-positive signals when the number of reports is small; and (iii) the conventional thresholds that have been adapted for each method. PRR tends to be more 'sensitive' and less 'specific' than MGPS. A high-specificity disproportionality method, when used in conjunction with medical triage and investigation of critical medical events, may provide an efficient and robust approach to applying quantitative methods in routine postmarketing pharmacovigilance.
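
    The PRR itself is simple to state; the sketch below computes it from a 2x2 table of report counts. The threshold quoted in the comment (PRR >= 2 with chi-square >= 4 and at least 3 reports) is one published convention, not necessarily the authors' exact configuration, and the counts are invented.

        # Sketch of the proportional reporting ratio (PRR) from report counts.
        def prr(a, b, c, d):
            """a: reports of drug + event; b: same drug, other events;
            c: other drugs, same event; d: other drugs, other events."""
            return (a / (a + b)) / (c / (c + d))

        # One common signal convention: PRR >= 2, chi-square >= 4, N >= 3.
        print(round(prr(a=12, b=488, c=200, d=99300), 2))  # ~11.9, a signal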

  8. Beyond the hype: deep neural networks outperform established methods using a ChEMBL bioactivity benchmark set.

    PubMed

    Lenselink, Eelke B; Ten Dijke, Niels; Bongers, Brandon; Papadatos, George; van Vlijmen, Herman W T; Kowalczyk, Wojtek; IJzerman, Adriaan P; van Westen, Gerard J P

    2017-08-14

    The increase of publicly available bioactivity data in recent years has fueled and catalyzed research in chemogenomics, data mining, and modeling approaches. As a direct result, over the past few years a multitude of different methods have been reported and evaluated, such as target fishing, nearest neighbor similarity-based methods, and Quantitative Structure Activity Relationship (QSAR)-based protocols. However, such studies are typically conducted on different datasets, using different validation strategies, and different metrics. In this study, different methods were compared using one single standardized dataset obtained from ChEMBL, which is made available to the public, using standardized metrics (BEDROC and Matthews Correlation Coefficient). Specifically, the performance of Naïve Bayes, Random Forests, Support Vector Machines, Logistic Regression, and Deep Neural Networks was assessed using QSAR and proteochemometric (PCM) methods. All methods were validated using both a random split validation and a temporal validation, with the latter being a more realistic benchmark of expected prospective execution. Deep Neural Networks were the top-performing classifiers, highlighting their added value over more conventional methods. Moreover, the best method ('DNN_PCM') performed significantly better, at almost one standard deviation above the mean performance. Furthermore, multi-task and PCM implementations were shown to improve performance over single-task Deep Neural Networks. Conversely, target prediction performed almost two standard deviations below the mean performance. Random Forests, Support Vector Machines, and Logistic Regression performed around the mean performance. Finally, using an ensemble of DNNs, alongside additional tuning, enhanced the relative performance by another 27% (compared with the unoptimized 'DNN_PCM'). Here, a standardized set for testing and evaluating different machine learning algorithms in the context of multi-task learning is offered by providing the data and the protocols.
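
    One of the standardized metrics above, the Matthews Correlation Coefficient, is available off the shelf; the labels below are placeholders.

        # Sketch: MCC on placeholder binary predictions.
        from sklearn.metrics import matthews_corrcoef

        y_true = [1, 1, 0, 0, 1, 0, 1, 0]
        y_pred = [1, 0, 0, 0, 1, 0, 1, 1]
        print(matthews_corrcoef(y_true, y_pred))  # 1.0 perfect, 0.0 chance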

  9. Successful Developmental Math Students in Traditional Format and Online Delivery: A Comparative Study

    ERIC Educational Resources Information Center

    Thomas, Jeremy Lloyd

    2016-01-01

    The purpose of this study was to compare student performance in online and traditional classroom based developmental math courses at Texas community colleges. This study specifically examined: (a) student performance in both delivery methods, (b) students who successfully completed the developmental math course, and (c) student performance in the…

  10. [Comparative clinical analysis of cesarean section technique by Misgav Ladach method and Pfannenstiel method].

    PubMed

    Popiela, A; Pańszczyk, M; Korzeniewski, J; Baranowski, W

    2000-04-01

    Clinical and biochemical parameters were analysed in 55 patients who underwent a caesarean section performed using the Misgav Ladach method and compared with those of a reference group of 41 patients who underwent caesarean section using the Pfannenstiel method. Shortened operation time, shortened hospitalisation time and less postoperative morbidity were observed in the Misgav Ladach group. This method appears to have advantages in comparison to the Pfannenstiel method.

  11. Ascertainment bias from imputation methods evaluation in wheat.

    PubMed

    Brandariz, Sofía P; González Reymúndez, Agustín; Lado, Bettina; Malosetti, Marcos; Garcia, Antonio Augusto Franco; Quincke, Martín; von Zitzewitz, Jarislav; Castro, Marina; Matus, Iván; Del Pozo, Alejandro; Castro, Ariel J; Gutiérrez, Lucía

    2016-10-04

    Whole-genome genotyping techniques like genotyping-by-sequencing (GBS) are being used for genetic studies such as genome-wide association studies (GWAS) and genome-wide selection (GS), and different strategies for imputation have been developed for them. Nevertheless, imputation error may lead to poor performance (i.e. lower power or a higher false-positive rate) in analyses such as GWAS, where complete data are not required and each marker is tested one at a time. The aim of this study was to compare the performance of GWAS analysis for quantitative trait loci (QTL) of major and minor effect using different imputation methods when no reference panel is available, in a wheat GBS panel. In this study, we compared the power and false-positive rate of dissecting quantitative traits for imputed and not-imputed marker score matrices in: (1) a complete molecular marker barley panel array, and (2) a GBS wheat panel with missing data. We found that there is an ascertainment bias in comparisons of imputation methods: simulating over a complete matrix and creating missing data at random showed that the imputation methods have poorer performance. Furthermore, we found that when QTL were simulated with imputed data, the imputation methods performed better than the not-imputed ones, whereas when QTL were simulated with not-imputed data, the not-imputed method and one of the imputation methods performed better for dissecting quantitative traits. Moreover, larger differences between imputation methods were detected for QTL of major effect than for QTL of minor effect. We also compared the different marker score matrices for GWAS analysis in a real wheat phenotype dataset and found minimal differences, indicating that imputation did not improve GWAS performance when a reference panel was not available. In summary, poorer performance was found in GWAS analysis when an imputed marker score matrix was used and no reference panel was available in a wheat GBS panel.

  12. Comparison of parameter-adapted segmentation methods for fluorescence micrographs.

    PubMed

    Held, Christian; Palmisano, Ralf; Häberle, Lothar; Hensel, Michael; Wittenberg, Thomas

    2011-11-01

    Interpreting images from fluorescence microscopy is often a time-consuming task with poor reproducibility. Various image processing routines that can help investigators evaluate the images are therefore useful. The critical aspect for a reliable automatic image analysis system is a robust segmentation algorithm that can perform accurate segmentation for different cell types. In this study, several image segmentation methods were therefore compared and evaluated in order to identify the most appropriate segmentation schemes, ones that are usable with little new parameterization and that work robustly with different types of fluorescence-stained cells for various biological and biomedical tasks. The study investigated, compared, and enhanced four different methods for segmentation of cultured epithelial cells. The maximum-intensity linking (MIL) method, an improved MIL, a watershed method, and an improved watershed method based on morphological reconstruction were used. Three manually annotated datasets consisting of 261, 817, and 1,333 HeLa or L929 cells were used to compare the different algorithms. The comparisons and evaluations showed that the segmentation performance of methods based on the watershed transform was significantly superior to that of the MIL method. The results also indicate that using morphological opening by reconstruction can improve the segmentation of cells stained with a marker that exhibits a dotted surface pattern. Copyright © 2011 International Society for Advancement of Cytometry.
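
    A generic marker-based watershed pipeline with morphological preprocessing, similar in spirit to (but not identical with) the improved watershed method above, might look like the sketch below; all parameter values and the synthetic test image are assumptions.

        # Generic sketch: marker-based watershed segmentation of cell-like blobs.
        import numpy as np
        from scipy import ndimage as ndi
        from skimage.filters import threshold_otsu
        from skimage.morphology import binary_opening, disk
        from skimage.segmentation import watershed
        from skimage.feature import peak_local_max

        def segment_cells(image: np.ndarray) -> np.ndarray:
            # Foreground mask; opening suppresses small bright specks.
            mask = binary_opening(image > threshold_otsu(image), disk(3))
            distance = ndi.distance_transform_edt(mask)
            # One marker per local maximum of the distance map (cell centers).
            peaks = peak_local_max(distance, min_distance=10,
                                   labels=mask.astype(int))
            markers = np.zeros_like(image, dtype=int)
            markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
            return watershed(-distance, markers, mask=mask)

        rng = np.random.default_rng(8)
        img = np.zeros((80, 80))
        img[20:40, 20:40] = 1.0          # two synthetic "cells"
        img[50:70, 45:65] = 1.0
        img += rng.normal(scale=0.05, size=img.shape)
        print(segment_cells(img).max())  # number of labeled regions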

  13. Emergent surgical airway: comparison of the three-step method and conventional cricothyroidotomy utilizing high-fidelity simulation.

    PubMed

    Quick, Jacob A; MacIntyre, Allan D; Barnes, Stephen L

    2014-02-01

    Surgical airway creation has a high potential for disaster. Conventional methods can be cumbersome and require special instruments. A simple method utilizing three steps and readily available equipment exists, but has yet to be adequately tested. Our objective was to compare conventional cricothyroidotomy with the three-step method utilizing high-fidelity simulation. Utilizing a high-fidelity simulator, 12 experienced flight nurses and paramedics performed both methods after a didactic lecture, simulator briefing, and demonstration of each technique. Six participants performed the three-step method first, and the remaining 6 performed the conventional method first. Each participant was filmed and timed. We analyzed videos with respect to the number of hand repositions, number of airway instrumentations, and technical complications. Times to successful completion were measured from incision to balloon inflation. The three-step method was completed faster (52.1 s vs. 87.3 s; p = 0.007) as compared with conventional surgical cricothyroidotomy. The two methods did not differ statistically regarding number of hand movements (3.75 vs. 5.25; p = 0.12) or instrumentations of the airway (1.08 vs. 1.33; p = 0.07). The three-step method resulted in 100% successful airway placement on the first attempt, compared with 75% of the conventional method (p = 0.11). Technical complications occurred more with the conventional method (33% vs. 0%; p = 0.05). The three-step method, using an elastic bougie with an endotracheal tube, was shown to require fewer total hand movements, took less time to complete, resulted in more successful airway placement, and had fewer complications compared with traditional cricothyroidotomy. Published by Elsevier Inc.

  14. Evaluation of Sub Query Performance in SQL Server

    NASA Astrophysics Data System (ADS)

    Oktavia, Tanty; Sujarwo, Surya

    2014-03-01

    The paper explores several sub query methods used in a query and their impact on query performance. The study uses an experimental approach to evaluate the performance of each sub query method combined with an indexing strategy. The sub query methods consist of IN, EXISTS, a relational operator, and a relational operator combined with the TOP operator. The experiments show that using a relational operator combined with an indexing strategy in a sub query yields better performance than the same method without an indexing strategy, as well as the other methods. In summary, for applications that emphasize the performance of retrieving data from a database, it is better to use a relational operator combined with an indexing strategy. This study was done on Microsoft SQL Server 2012.
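
    The comparison can be reproduced in miniature with SQLite standing in for SQL Server 2012; the schema, row counts, queries and index below are invented for illustration, and absolute timings will differ by engine.

        # Toy timing comparison of sub query styles, with and without an index.
        import sqlite3, time

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE customers(id INTEGER PRIMARY KEY, active INTEGER);
            CREATE TABLE orders(id INTEGER PRIMARY KEY, customer_id INTEGER);
        """)
        conn.executemany("INSERT INTO customers VALUES (?, ?)",
                         [(i, i % 2) for i in range(20000)])
        conn.executemany("INSERT INTO orders VALUES (?, ?)",
                         [(i, i % 20000) for i in range(100000)])

        queries = {
            "IN":     "SELECT COUNT(*) FROM orders WHERE customer_id IN "
                      "(SELECT id FROM customers WHERE active = 1)",
            "EXISTS": "SELECT COUNT(*) FROM orders o WHERE EXISTS "
                      "(SELECT 1 FROM customers c "
                      "WHERE c.id = o.customer_id AND c.active = 1)",
            "JOIN":   "SELECT COUNT(*) FROM orders o JOIN customers c "
                      "ON c.id = o.customer_id WHERE c.active = 1",
        }

        def time_queries():
            for label, sql in queries.items():
                t0 = time.perf_counter()
                conn.execute(sql).fetchone()
                print(label, f"{time.perf_counter() - t0:.4f}s")

        time_queries()                                   # without the index
        conn.execute("CREATE INDEX idx_active ON customers(active)")
        time_queries()                                   # with the index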

  15. The performance of diphoton primary vertex reconstruction methods in H → γγ+Met channel of ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Tomiwa, K. G.

    2017-09-01

    The search for new physics in the H → γγ+Met channel relies on how well the missing transverse energy (Met) is reconstructed. The Met algorithm used by the ATLAS experiment in turn uses input variables such as photons and jets, which depend on the reconstruction of the primary vertex. This document presents the performance of di-photon vertex reconstruction algorithms (the hardest vertex method and the Neural Network method). Comparing the performance of these algorithms on the nominal Standard Model sample and the Beyond Standard Model sample, we see that the Neural Network method of primary vertex selection performed better overall than the hardest vertex method.

  16. Interactive Learning in the Classroom: Is Student Response Method Related to Performance?

    ERIC Educational Resources Information Center

    Elicker, Joelle D.; McConnell, Nicole L.

    2011-01-01

    This study examined three methods of responding to in-class multiple-choice concept questions in an Introduction to Psychology course. Specifically, this study compared exam performance and student reactions using three methods of responding to concept questions: (a) a technology-based network system, (b) hand-held flashcards, and (c) hand…

  17. A Comparison of Two Scoring Methods for an Automated Speech Scoring System

    ERIC Educational Resources Information Center

    Xi, Xiaoming; Higgins, Derrick; Zechner, Klaus; Williamson, David

    2012-01-01

    This paper compares two alternative scoring methods--multiple regression and classification trees--for an automated speech scoring system used in a practice environment. The two methods were evaluated on two criteria: construct representation and empirical performance in predicting human scores. The empirical performance of the two scoring models…

  18. A comparison of heuristic and model-based clustering methods for dietary pattern analysis.

    PubMed

    Greve, Benjamin; Pigeot, Iris; Huybrechts, Inge; Pala, Valeria; Börnhorst, Claudia

    2016-02-01

    Cluster analysis is widely applied to identify dietary patterns. A new method based on Gaussian mixture models (GMM) seems to be more flexible compared with the commonly applied k-means and Ward's method. In the present paper, these clustering approaches are compared to find the most appropriate one for clustering dietary data. The clustering methods were applied to simulated data sets with different cluster structures to compare their performance knowing the true cluster membership of observations. Furthermore, the three methods were applied to FFQ data assessed in 1791 children participating in the IDEFICS (Identification and Prevention of Dietary- and Lifestyle-Induced Health Effects in Children and Infants) Study to explore their performance in practice. The GMM outperformed the other methods in the simulation study in 72 % up to 100 % of cases, depending on the simulated cluster structure. Comparing the computationally less complex k-means and Ward's methods, the performance of k-means was better in 64-100 % of cases. Applied to real data, all methods identified three similar dietary patterns which may be roughly characterized as a 'non-processed' cluster with a high consumption of fruits, vegetables and wholemeal bread, a 'balanced' cluster with only slight preferences of single foods and a 'junk food' cluster. The simulation study suggests that clustering via GMM should be preferred due to its higher flexibility regarding cluster volume, shape and orientation. The k-means seems to be a good alternative, being easier to use while giving similar results when applied to real data.
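
    The simulation-style comparison described above can be sketched with scikit-learn. The elongated, overlapping clusters below are invented to show the kind of structure where the GMM's flexible covariances help; recovery is scored against the known labels.

        # Sketch: k-means vs. Gaussian mixture model on known cluster structure.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.mixture import GaussianMixture
        from sklearn.metrics import adjusted_rand_score

        rng = np.random.default_rng(5)
        # Two elongated, overlapping clusters with unequal shape/orientation.
        a = rng.multivariate_normal([0, 0], [[4.0, 1.8], [1.8, 1.0]], size=300)
        b = rng.multivariate_normal([4, 1], [[1.0, -0.6], [-0.6, 2.0]], size=300)
        X = np.vstack([a, b])
        truth = np.repeat([0, 1], 300)

        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        gmm = GaussianMixture(n_components=2, covariance_type="full",
                              random_state=0).fit(X).predict(X)

        print("k-means ARI:", round(adjusted_rand_score(truth, km), 2))
        print("GMM ARI:    ", round(adjusted_rand_score(truth, gmm), 2))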

  19. [Comparative analysis of modification of Misgav-Ladach and Pfannenstiel methods for cesarean section in the material of Fetal-Maternal Clinical Department PMMH-RI between 1994-1999].

    PubMed

    Pawłowicz, P; Wilczyński, J; Stachowiak, G

    2000-04-01

    Comparative analysis of our own modification of the Misgav-Ladach (mML) and Pfannenstiel methods for caesarean section in the material of the Fetal-Maternal Medicine Clinical Department PMMH-RI between 1994-99. The study group consisted of 242 patients, in all of whom we performed caesarean section using the Misgav-Ladach method. In the control group of 285 women, we performed caesarean section using the Pfannenstiel method. Several parameters were taken into account to analyse the clinical postoperative course in both groups. Statistical analysis revealed that most of the clinical postoperative course parameters had significantly better values in the study group, in which caesarean section was performed using the Misgav-Ladach method. The benefits of the Misgav-Ladach method, with less pain post-operatively and quicker recovery, are all a by-product of doing the least harm during surgery and removing every unnecessary step. This method is appealing for its simplicity, ease of execution and its time-saving advantage.

  20. Determination of Urine Albumin by New Simple High-Performance Liquid Chromatography Method.

    PubMed

    Klapkova, Eva; Fortova, Magdalena; Prusa, Richard; Moravcova, Libuse; Kotaska, Karel

    2016-11-01

    A simple high-performance liquid chromatography (HPLC) method was developed for the determination of albumin in patients' urine samples without coeluting proteins, and it was compared with the immunoturbidimetric determination of albumin. Urine albumin is an important biomarker in diabetic patients, but part of it is immuno-nonreactive. Albumin was determined by HPLC with UV detection at 280 nm on a Zorbax 300SB-C3 column. Immunoturbidimetric analysis was performed using a commercial kit on the automatic biochemistry analyzer COBAS INTEGRA ® 400, Roche Diagnostics GmbH, Mannheim, Germany. The HPLC method was fully validated. No significant interference with other proteins (transferrin, α-1-acid glycoprotein, α-1-antichymotrypsin, antitrypsin, hemopexin) was found. The results from 301 urine samples were compared with the immunochemical determination, and we found a statistically significant difference between the methods (P = 0.0001, Mann-Whitney test). A new simple HPLC method was thus developed for the determination of urine albumin without coeluting proteins. Our data indicate that the HPLC method is highly specific and more sensitive than immunoturbidimetry. © 2016 Wiley Periodicals, Inc.

  1. Recursive feature selection with significant variables of support vectors.

    PubMed

    Tsai, Chen-An; Huang, Chien-Hsun; Chang, Ching-Wei; Chen, Chun-Houh

    2012-01-01

    The development of DNA microarrays lets researchers screen thousands of genes simultaneously and also helps determine high- and low-expression level genes in normal and disease tissues. Selecting relevant genes for cancer classification is an important issue. Most gene selection methods use univariate ranking criteria and arbitrarily choose a threshold for selecting genes. However, the parameter setting may not be compatible with the selected classification algorithm. In this paper, we propose a new gene selection method (SVM-t) based on the use of t-statistics embedded in a support vector machine. We compared its performance to those of two similar SVM-based methods: SVM recursive feature elimination (SVMRFE) and recursive support vector machine (RSVM). The three methods were compared based on extensive simulation experiments and analyses of two published microarray datasets. In the simulation experiments, we found that the proposed method is more robust in selecting informative genes than SVMRFE and RSVM, and is capable of attaining good classification performance when the variations of informative and noninformative genes differ. In the analysis of the two microarray datasets, the proposed method yields better performance in identifying fewer genes with good prediction accuracy, compared to SVMRFE and RSVM.
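
    A loose, conceptual sketch of pairing per-gene t-statistics with an SVM follows. Note that the real SVM-t embeds the t-statistics in the recursive SVM procedure itself, and that for an unbiased error estimate the selection step should be nested inside cross-validation; the data here are synthetic.

        # Conceptual sketch: rank genes by |t|, keep the top ones, classify.
        import numpy as np
        from scipy.stats import ttest_ind
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(6)
        n_per_class, n_genes, n_informative = 30, 500, 10
        X = rng.normal(size=(2 * n_per_class, n_genes))
        X[n_per_class:, :n_informative] += 1.5   # shift the informative genes
        y = np.repeat([0, 1], n_per_class)

        t_stat, _ = ttest_ind(X[y == 0], X[y == 1], axis=0)
        top = np.argsort(-np.abs(t_stat))[:20]   # keep the 20 top-ranked genes

        # Caveat: selecting before CV inflates accuracy; nest it in practice.
        score = cross_val_score(SVC(kernel="linear"), X[:, top], y, cv=5).mean()
        print(f"CV accuracy on selected genes: {score:.2f}")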

  2. Speaker Linking and Applications using Non-Parametric Hashing Methods

    DTIC Science & Technology

    2016-09-08

    ... a clustering method based on hashing: canopy clustering. We apply this method to a large corpus of speaker recordings, demonstrate performance tradeoffs, and compare to other hashing methods. Index Terms: speaker recognition, clustering, hashing, locality sensitive hashing. From the introduction: We assume ... speaker in our corpus. Second, given a QBE method, how can we perform speaker clustering: each cluster should correspond to a single speaker, and a cluster should ...

  3. A comparative study of ChIP-seq sequencing library preparation methods.

    PubMed

    Sundaram, Arvind Y M; Hughes, Timothy; Biondi, Shea; Bolduc, Nathalie; Bowman, Sarah K; Camilli, Andrew; Chew, Yap C; Couture, Catherine; Farmer, Andrew; Jerome, John P; Lazinski, David W; McUsic, Andrew; Peng, Xu; Shazand, Kamran; Xu, Feng; Lyle, Robert; Gilfillan, Gregor D

    2016-10-21

    ChIP-seq is the primary technique used to investigate genome-wide protein-DNA interactions. As part of this procedure, immunoprecipitated DNA must undergo "library preparation" to enable subsequent high-throughput sequencing. To facilitate the analysis of biopsy samples and rare cell populations, there has been a recent proliferation of methods allowing sequencing library preparation from low-input DNA amounts. However, little information exists on the relative merits, performance, comparability and biases inherent to these procedures. Notably, recently developed single-cell ChIP procedures employing microfluidics must also employ library preparation reagents to allow downstream sequencing. In this study, seven methods designed for low-input DNA/ChIP-seq sample preparation (Accel-NGS® 2S, Bowman-method, HTML-PCR, SeqPlex™, DNA SMART™, TELP and ThruPLEX®) were performed on five replicates of 1 ng and 0.1 ng input H3K4me3 ChIP material, and compared to a "gold standard" reference PCR-free dataset. The performance of each method was examined for the prevalence of unmappable reads, amplification-derived duplicate reads, reproducibility, and for the sensitivity and specificity of peak calling. We identified consistent high performance in a subset of the tested reagents, which should aid researchers in choosing the most appropriate reagents for their studies. Furthermore, we expect this work to drive future advances by identifying and encouraging use of the most promising methods and reagents. The results may also aid judgements on how comparable are existing datasets that have been prepared with different sample library preparation reagents.

  4. Comprehensive comparative analysis of 5'-end RNA-sequencing methods.

    PubMed

    Adiconis, Xian; Haber, Adam L; Simmons, Sean K; Levy Moonshine, Ami; Ji, Zhe; Busby, Michele A; Shi, Xi; Jacques, Justin; Lancaster, Madeline A; Pan, Jen Q; Regev, Aviv; Levin, Joshua Z

    2018-06-04

    Specialized RNA-seq methods are required to identify the 5' ends of transcripts, which are critical for studies of gene regulation, but these methods have not been systematically benchmarked. We directly compared six such methods, including the performance of five methods on a single human cellular RNA sample and a new spike-in RNA assay that helps circumvent challenges resulting from uncertainties in annotation and RNA processing. We found that the 'cap analysis of gene expression' (CAGE) method performed best for mRNA and that most of its unannotated peaks were supported by evidence from other genomic methods. We applied CAGE to eight brain-related samples and determined sample-specific transcription start site (TSS) usage, as well as a transcriptome-wide shift in TSS usage between fetal and adult brain.

  5. An analytical method of estimating turbine performance

    NASA Technical Reports Server (NTRS)

    Kochendorfer, Fred D; Nettles, J Cary

    1949-01-01

    A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and the friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine, and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. Exact agreement between analytical and experimental performance is contingent upon proper selection of a blading-loss parameter.

  6. Comparing 3D foot scanning with conventional measurement methods.

    PubMed

    Lee, Yu-Chi; Lin, Gloria; Wang, Mao-Jiun J

    2014-01-01

    Foot dimension information on different user groups is important for footwear design and clinical applications. Foot dimension data collected using different measurement methods present accuracy problems. This study compared the precision and accuracy of the 3D foot scanning method with those of conventional foot dimension measurement methods, including the digital caliper, ink footprint and digital footprint. Six commonly used foot dimensions, i.e. foot length, ball of foot length, outside ball of foot length, foot breadth diagonal, foot breadth horizontal and heel breadth, were measured from 130 males and females using the four foot measurement methods. Two-way ANOVA was performed to evaluate the sex and method effects on the measured foot dimensions. In addition, mean absolute difference values and intra-class correlation coefficients (ICCs) were used for precision and accuracy evaluation. The results were also compared with the ISO 20685 criteria. The participant's sex and the measurement method were found (p < 0.05) to exert significant effects on the six measured foot dimensions. The precision of the 3D scanning measurement method, with mean absolute difference values between 0.73 and 1.50 mm, was the best among the four measurement methods. The 3D scanning measurements also showed better accuracy than the other methods (mean absolute difference 0.6 to 4.3 mm), except for measuring outside ball of foot length and foot breadth horizontal. The ICCs for all six foot dimension measurements among the four measurement methods were within the 0.61 to 0.98 range. Overall, the 3D foot scanner is recommended for collecting foot anthropometric data because it has relatively higher precision, accuracy and robustness. This finding suggests that when comparing foot anthropometric data among different references, it is important to consider the differences caused by the different measurement methods.

  7. A simple method for plasma total vitamin C analysis suitable for routine clinical laboratory use.

    PubMed

    Robitaille, Line; Hoffer, L John

    2016-04-21

    In-hospital hypovitaminosis C is highly prevalent but almost completely unrecognized. Medical awareness of this potentially important disorder is hindered by the inability of most hospital laboratories to determine plasma vitamin C concentrations. The availability of a simple, reliable method for analyzing plasma vitamin C could increase opportunities for routine plasma vitamin C analysis in clinical medicine. Plasma vitamin C can be analyzed by high performance liquid chromatography (HPLC) with electrochemical (EC) or ultraviolet (UV) light detection. We modified existing UV-HPLC methods for plasma total vitamin C analysis (the sum of ascorbic and dehydroascorbic acid) to develop a simple, constant-low-pH sample reduction procedure followed by isocratic reverse-phase HPLC separation using a purely aqueous low-pH non-buffered mobile phase. Although EC-HPLC is widely recommended over UV-HPLC for plasma total vitamin C analysis, the two methods have never been directly compared. We formally compared the simplified UV-HPLC method with EC-HPLC in 80 consecutive clinical samples. The simplified UV-HPLC method was less expensive, easier to set up, required fewer reagents and no pH adjustments, and demonstrated greater sample stability than many existing methods for plasma vitamin C analysis. When compared with the gold-standard EC-HPLC method in 80 consecutive clinical samples exhibiting a wide range of plasma vitamin C concentrations, it performed equivalently. The easy set up, simplicity and sensitivity of the plasma vitamin C analysis method described here could make it practical in a normally equipped hospital laboratory. Unlike any prior UV-HPLC method for plasma total vitamin C analysis, it was rigorously compared with the gold-standard EC-HPLC method and performed equivalently. Adoption of this method could increase the availability of plasma vitamin C analysis in clinical medicine.

  8. Fuzzy decision-making framework for treatment selection based on the combined QUALIFLEX-TODIM method

    NASA Astrophysics Data System (ADS)

    Ji, Pu; Zhang, Hong-yu; Wang, Jian-qiang

    2017-10-01

    Treatment selection is a multi-criteria decision-making problem of significant concern in the medical field. In this study, a fuzzy decision-making framework is established for treatment selection. The framework mitigates information loss by introducing single-valued trapezoidal neutrosophic numbers to denote evaluation information. Treatment selection involves multiple criteria, the number of which considerably exceeds the number of alternatives. In consideration of this characteristic, the framework utilises the idea of the qualitative flexible multiple criteria (QUALIFLEX) method. Furthermore, it accounts for the risk-averse behaviour of a decision maker by employing a concordance index based on the TODIM (an acronym in Portuguese for interactive and multi-criteria decision-making) method. A sensitivity analysis is performed to illustrate the robustness of the framework. Finally, a comparative analysis is conducted to compare the framework with several extant methods. Results indicate the advantages of the framework and its better performance compared with the extant methods.

  9. Cross-linked polyvinyl alcohol films as alkaline battery separators

    NASA Technical Reports Server (NTRS)

    Sheibley, D. W.; Manzo, M. A.; Gonzalez-Sanabria, O. D.

    1983-01-01

    Cross-linking methods have been investigated to determine their effect on the performance of polyvinyl alcohol (PVA) films as alkaline battery separators. The following types of cross-linked PVA films are discussed: (1) PVA-dialdehyde blends post-treated with an acid or acid periodate solution (two-step method) and (2) PVA-dialdehyde blends cross-linked during film formation (drying) by using a reagent with both aldehyde and acid functionality (one-step method). Laboratory samples of each cross-linked type of film were prepared and evaluated in standard separator screening tests. Then pilot-plant batches of films were prepared and compared to measure differences due to the cross-linking method. The pilot-plant materials were then tested in nickel oxide-zinc cells to compare the two methods with respect to performance characteristics and cycle life. Cell test results are compared with those from tests with Celgard.

  10. Cross-linked polyvinyl alcohol films as alkaline battery separators

    NASA Technical Reports Server (NTRS)

    Sheibley, D. W.; Manzo, M. A.; Gonzalez-Sanabria, O. D.

    1982-01-01

    Cross-linking methods were investigated to determine their effect on the performance of polyvinyl alcohol (PVA) films as alkaline battery separators. The following types of cross-linked PVA films are discussed: (1) PVA-dialdehyde blends post-treated with an acid or acid periodate solution (two-step method) and (2) PVA-dialdehyde blends cross-linked during film formation (drying) by using a reagent with both aldehyde and acid functionality (one-step method). Laboratory samples of each cross-linked type of film were prepared and evaluated in standard separator screening tests. The pilot-plant batches of films were prepared and compared to measure differences due to the cross-linking method. The pilot-plant materials were then tested in nickel oxide - zinc cells to compare the two methods with respect to performance characteristics and cycle life. Cell test results are compared with those from tests with Celgard.

  11. Towards scar-free surgery: An analysis of the increasing complexity from laparoscopic surgery to NOTES

    PubMed Central

    Chellali, Amine; Schwaitzberg, Steven D.; Jones, Daniel B.; Romanelli, John; Miller, Amie; Rattner, David; Roberts, Kurt E.; Cao, Caroline G.L.

    2014-01-01

    Background NOTES is an emerging technique for performing surgical procedures, such as cholecystectomy. Debate about its real benefit over the traditional laparoscopic technique is ongoing. There have been several clinical studies comparing NOTES to conventional laparoscopic surgery. However, no work has been done to compare these techniques from a Human Factors perspective. This study presents a systematic analysis describing and comparing different existing NOTES methods to laparoscopic cholecystectomy. Methods Videos of endoscopic/laparoscopic views from fifteen live cholecystectomies were analyzed to conduct a detailed task analysis of the NOTES technique. A hierarchical task analysis of laparoscopic cholecystectomy and several hybrid transvaginal NOTES cholecystectomies was performed and validated by expert surgeons. To identify similarities and differences between these techniques, their hierarchical decomposition trees were compared. Finally, a timeline analysis was conducted to compare the steps and substeps. Results At least three variations of the NOTES technique were used for cholecystectomy. Differences between the observed techniques were found at the substep level of hierarchy and in the instruments being used. The timeline analysis showed an increase in the time needed to perform some surgical steps and substeps in NOTES compared to laparoscopic cholecystectomy. Conclusion As pure NOTES is extremely difficult given the current state of development in instrumentation design, most surgeons utilize different hybrid methods, i.e., combinations of endoscopic and laparoscopic instruments/optics. Our hierarchical task analysis identified three different hybrid methods for performing cholecystectomy, with significant variability amongst them. The varying degrees to which laparoscopic instruments are utilized to assist in NOTES methods appear to introduce different technical issues and additional tasks, leading to an increase in surgical time. The NOTES continuum of invasiveness is proposed here as a classification scheme for these methods, which was used to construct a clear roadmap for training and technology development. PMID:24902811

  12. Biases and power for groups comparison on subjective health measurements.

    PubMed

    Hamel, Jean-François; Hardouin, Jean-Benoit; Le Neel, Tanguy; Kubis, Gildas; Roquelaure, Yves; Sébille, Véronique

    2012-01-01

    Subjective health measurements are increasingly used in clinical research, particularly for patient groups comparisons. Two main types of analytical strategies can be used for such data: so-called classical test theory (CTT), relying on observed scores, and models coming from Item Response Theory (IRT), relying on a response model relating the items responses to a latent parameter, often called the latent trait. Whether IRT or CTT would be the most appropriate method to compare two independent groups of patients on a patient reported outcomes measurement remains unknown and was investigated using simulations. For CTT-based analyses, groups comparison was performed using a t-test on the scores. For IRT-based analyses, several methods were compared, according to whether the Rasch model was considered with random effects or with fixed effects, and whether the group effect was included as a covariate or not. Individual latent trait values were estimated using either a deterministic method or stochastic approaches. Latent traits were then compared with a t-test. Finally, a two-step method was performed to compare the latent trait distributions, and a Wald test was performed to test the group effect in the Rasch model including group covariates. The only unbiased IRT-based method was the Wald test on the group covariate, performed on the random effects Rasch model. This model displayed the highest observed power, which was similar to the power using the score t-test. These results need to be extended to the case frequently encountered in practice where data are missing and possibly informative.
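
    A minimal sketch of the CTT arm of such a simulation, under assumed item difficulties and a 0.5 latent-trait shift between groups: dichotomous responses are generated from a Rasch model and the groups are compared with a t-test on the sum scores. The IRT-based Wald test would additionally require fitting a random effects Rasch model, which is omitted here.

        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(0)
        n, items = 100, 10
        beta = np.linspace(-1.5, 1.5, items)      # assumed item difficulties
        theta_a = rng.normal(0.0, 1.0, n)         # group A latent traits
        theta_b = rng.normal(0.5, 1.0, n)         # group B, shifted by 0.5

        def rasch_scores(theta):
            # P(correct) under the Rasch model, then simulated sum scores
            p = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
            return (rng.random((theta.size, items)) < p).sum(axis=1)

        t_stat, p_value = ttest_ind(rasch_scores(theta_a), rasch_scores(theta_b))
        print(t_stat, p_value)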

  13. The Effect of Instructional Method on Cardiopulmonary Resuscitation Skill Performance: A Comparison Between Instructor-Led Basic Life Support and Computer-Based Basic Life Support With Voice-Activated Manikin.

    PubMed

    Wilson-Sands, Cathy; Brahn, Pamela; Graves, Kristal

    2015-01-01

    Validating participants' ability to correctly perform cardiopulmonary resuscitation (CPR) skills during basic life support courses can be a challenge for nursing professional development specialists. This study compares two methods of basic life support training, instructor-led and computer-based learning with voice-activated manikins, to identify if one method is more effective for performance of CPR skills. The findings suggest that a computer-based learning course with voice-activated manikins is a more effective method of training for improved CPR performance.

  14. The Effect of Instruction Method and Relearning on Dutch Spelling Performance of Third- through Fifth-Graders

    ERIC Educational Resources Information Center

    Bouwmeester, Samantha; Verkoeijen, Peter P. J. L.

    2011-01-01

    In this study, we compared two instruction methods on spelling performance: a rewriting instruction in which children repeatedly rewrote words and an ambiguous property instruction in which children deliberately practiced on a difficult word aspect. Moreover, we examined whether the testing effect applies to spelling performance. One hundred…

  15. Comparing the Effect of Thinking Maps Training Package Developed by the Thinking Maps Method on the Reading Performance of Dyslexic Students.

    PubMed

    Faramarzi, Salar; Moradi, Mohammadreza; Abedi, Ahmad

    2018-06-01

    The present study aimed to develop a thinking maps training package and compare its training effect with that of the thinking maps method on the reading performance of male dyslexic students in the second and fifth grades of elementary school. For this mixed-method exploratory study, 90 students from these grades in Isfahan who met the inclusion criteria were selected by multistage sampling and randomly assigned to six experimental and control groups. The data were collected using a reading and dyslexia test and the Wechsler Intelligence Scale for Children-fourth edition. The results of covariance analysis indicated a significant difference between the reading performance of the experimental (thinking maps training package and thinking maps method) groups and the control groups ([Formula: see text]). Moreover, there were significant differences between the thinking maps training package group and the thinking maps method group in some of the subtests ([Formula: see text]). It can be concluded that the thinking maps training package and the thinking maps method exert a positive influence on the reading performance of dyslexic students; therefore, thinking maps can be used as an effective training and treatment method.

  16. A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography.

    PubMed

    Prakash, Jaya; Yalavarthy, Phaneendra K

    2013-03-01

    The aim was to develop a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. The same is effectively deployed via an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the performance of the proposed LSQR-type and MRM-based methods, in terms of reconstructed image quality, is similar, and superior to that of the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method overcomes the inherent limitation of the computationally expensive MRM-based way of automatically finding the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
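
    As a rough illustration of the idea (not the authors' algorithm), the sketch below pairs damped LSQR with a simplex (Nelder-Mead) search over the damping parameter on a synthetic linear inverse problem; the held-out-residual objective is an assumption standing in for the paper's criterion.

        import numpy as np
        from scipy.sparse.linalg import lsqr
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        A = rng.standard_normal((200, 80))
        x_true = np.zeros(80)
        x_true[10:20] = 1.0
        b = A @ x_true + 0.05 * rng.standard_normal(200)

        # hold out rows to score each candidate damping value
        A_fit, b_fit = A[:150], b[:150]
        A_val, b_val = A[150:], b[150:]

        def heldout_misfit(log_lam):
            lam = 10.0 ** log_lam[0]
            x = lsqr(A_fit, b_fit, damp=lam)[0]   # damped LSQR solve
            return np.linalg.norm(A_val @ x - b_val)

        # simplex search over log10(damping), in the spirit of Nelder-Mead
        res = minimize(heldout_misfit, x0=[-2.0], method="Nelder-Mead")
        print("selected damping:", 10.0 ** res.x[0])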

  17. Evaluation of methods for detection of fluorescence labeled subcellular objects in microscope images.

    PubMed

    Ruusuvuori, Pekka; Aijö, Tarmo; Chowdhury, Sharif; Garmendia-Torres, Cecilia; Selinummi, Jyrki; Birbaumer, Mirko; Dudley, Aimée M; Pelkmans, Lucas; Yli-Harja, Olli

    2010-05-13

    Several algorithms have been proposed for detecting fluorescently labeled subcellular objects in microscope images. Many of these algorithms have been designed for specific tasks and validated with limited image data. But despite the potential of using extensive comparisons between algorithms to provide useful information to guide method selection and thus more accurate results, relatively few studies have been performed. To better understand algorithm performance under different conditions, we have carried out a comparative study including eleven spot detection or segmentation algorithms from various application fields. We used microscope images from well plate experiments with a human osteosarcoma cell line and frames from image stacks of yeast cells in different focal planes. These experimentally derived images permit a comparison of method performance in realistic situations where the number of objects varies within an image set. We also used simulated microscope images in order to compare the methods and validate them against a ground truth reference result. Our study finds major differences in the performance of different algorithms, in terms of both object counts and segmentation accuracies. These results suggest that the selection of detection algorithms for image-based screens should be done carefully and take into account different conditions, such as the possibility of acquiring empty images or images with very few spots. Our inclusion of methods that have not been used before in this context broadens the set of available detection methods and compares them against the current state-of-the-art methods for subcellular particle detection.

  18. A comparative study of multivariable robustness analysis methods as applied to integrated flight and propulsion control

    NASA Technical Reports Server (NTRS)

    Schierman, John D.; Lovell, T. A.; Schmidt, David K.

    1993-01-01

    Three multivariable robustness analysis methods are compared and contrasted. The focus of the analysis is on system stability and performance robustness to uncertainty in the coupling dynamics between two interacting subsystems. Of particular interest are interacting airframe and engine subsystems, and an example airframe/engine vehicle configuration is utilized in the demonstration of these approaches. The singular value (SV) and structured singular value (SSV) analysis methods are compared to a method especially well suited for analysis of robustness to uncertainties in subsystem interactions. This approach is referred to here as the interacting subsystem (IS) analysis method. This method has been used previously to analyze airframe/engine systems, emphasizing the study of stability robustness. However, performance robustness is also investigated here, and a new measure of allowable uncertainty for acceptable performance robustness is introduced. The IS methodology does not require plant uncertainty models to measure the robustness of the system, and is shown to yield valuable information regarding the effects of subsystem interactions. In contrast, the SV and SSV methods allow for the evaluation of the robustness of the system to particular models of uncertainty, and do not directly indicate how the airframe (engine) subsystem interacts with the engine (airframe) subsystem.

  19. Comparing the Methodologies in ASTM G198: Is There an Easy Way Out?

    Treesearch

    Samuel L. Zelinka

    2013-01-01

    ASTM G198, Standard test method for determining the relative corrosion performance of driven fasteners in contact with treated wood, was accepted by consensus and published in 2011. The method has two different exposure conditions for determining fastener corrosion performance in treated wood. The first method places the wood and embedded...

  20. Lipidomic analysis of biological samples: Comparison of liquid chromatography, supercritical fluid chromatography and direct infusion mass spectrometry methods.

    PubMed

    Lísa, Miroslav; Cífková, Eva; Khalikova, Maria; Ovčačíková, Magdaléna; Holčapek, Michal

    2017-11-24

    Lipidomic analysis of biological samples in clinical research represents a challenging task for analytical methods, given the large number of samples and their extreme complexity. In this work, we compare direct infusion (DI) and chromatography-mass spectrometry (MS) lipidomic approaches, represented by three analytical methods, in terms of comprehensiveness, sample throughput, and validation results for the lipidomic analysis of biological samples represented by tumor tissue, surrounding normal tissue, plasma, and erythrocytes of kidney cancer patients. The methods are compared in one laboratory using an identical analytical protocol to ensure comparable conditions. An ultrahigh-performance liquid chromatography/MS (UHPLC/MS) method in hydrophilic interaction liquid chromatography mode and a DI-MS method are used for this comparison, as the most widely used methods for lipidomic analysis, together with an ultrahigh-performance supercritical fluid chromatography/MS (UHPSFC/MS) method showing promising results in metabolomic analyses. The nontargeted analysis of pooled samples is performed using all tested methods, and 610 lipid species within 23 lipid classes are identified. The DI method provides the most comprehensive results, owing to the identification of some polar lipid classes that are not identified by the UHPLC and UHPSFC methods. On the other hand, the UHPSFC method provides excellent sensitivity for less polar lipid classes and the highest sample throughput, within a 10 min method time. The sample consumption of the DI method is 125 times higher than that of the other methods, while only 40 μL of organic solvent is used for one sample analysis, compared to 3.5 mL and 4.9 mL in the case of the UHPLC and UHPSFC methods, respectively. The methods are validated for the quantitative lipidomic analysis of plasma samples with one internal standard for each lipid class. The results show the applicability of all tested methods for the lipidomic analysis of biological samples, depending on the analysis requirements. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Comparative study on the measurement of learning outcomes after powerpoint presentation and problem based learning with discussion in family medicine amongst fifth year medical students.

    PubMed

    Khobragade, Sujata; Abas, Adinegara Lutfi; Khobragade, Yadneshwar Sudam

    2016-01-01

    Learning outcomes after traditional teaching methods were compared with problem-based learning (PBL) among fifth year medical students. Six students each participated in the traditional teaching and PBL methods, respectively. The traditional teaching method involved a PowerPoint (PPT) presentation, and PBL involved study of a case scenario followed by discussion. Both methods were effective in improving the performance of students. Post-teaching, we did not find significant differences in learning outcomes between the two teaching methods. The aims were (1) to find out which method of learning is more effective, traditional or PBL, and (2) to assess the level of knowledge and understanding of anemia/zoonotic diseases as against diabetes/hypertension. All students posted from February 3, 2014, to March 14, 2014, participated in this study. Six students were asked to prepare and present a lecture (PPT), and in the subsequent week another six students were asked to present PBL. The two groups presented different topics. Since this was a pre- and post-test design, the same students served as their own controls. To maintain uniformity and to avoid bias due to cultural diversity, language, etc., the same questions were administered. After verbal consent was taken, all 34 students were given a pretest on anemia and zoonotic diseases. A lecture (PPT) on the same topics was then given by six students, followed by a posttest questionnaire. The subsequent week, a pretest was conducted on hypertension and diabetes; a case scenario presentation and discussion (PBL) was then conducted by a different six students, followed by a posttest. The two methods were compared. Analysis was done manually, and the standard error of the means and Student's t-test were used to determine statistical significance. We found statistically significant improvement in the performance of students after the PPT presentation as well as after PBL. Both methods are equally effective. However, pretest results of students on anemia and zoonotic diseases (Group A) were poor compared to pretest results on hypertension and diabetes (Group B). Participation in the presentations did not influence the presenters' performance, as each presenter covered only a small part of the topic, and there were no differences in their marks compared to other students. We did not find significant differences in outcomes between PBL and traditional teaching. Performance was poor on anemia and zoonotic diseases, which need remedial teaching. Assessment may influence retention ability and performance.

  2. A Comparative Analysis of DBSCAN, K-Means, and Quadratic Variation Algorithms for Automatic Identification of Swallows from Swallowing Accelerometry Signals

    PubMed Central

    Dudik, Joshua M.; Kurosu, Atsuko; Coyle, James L

    2015-01-01

    Background Cervical auscultation with high resolution sensors is currently under consideration as a method of automatically screening for specific swallowing abnormalities. To be clinically useful without human involvement, any devices based on cervical auscultation should be able to detect specified swallowing events in an automatic manner. Methods In this paper, we comparatively analyze the density-based spatial clustering of applications with noise algorithm (DBSCAN), a k-means based algorithm, and an algorithm based on quadratic variation as methods of differentiating periods of swallowing activity from periods of time without swallows. These algorithms utilized swallowing vibration data exclusively and compared the results to a gold standard measure of swallowing duration. Data were collected from 23 subjects who were actively suffering from swallowing difficulties. Results Comparing the performance of the DBSCAN algorithm with a proven segmentation algorithm that utilizes k-means clustering demonstrated that the DBSCAN algorithm had a higher sensitivity and correctly segmented more swallows. Comparing its performance with a threshold-based algorithm that utilized the quadratic variation of the signal showed that the DBSCAN algorithm offered no direct increase in performance. However, it offered several other benefits, including a faster run time and more consistent performance between patients. All algorithms showed noticeable differentiation from the endpoints provided by a videofluoroscopy examination as well as reduced sensitivity. Conclusions In summary, we showed that the DBSCAN algorithm is a viable method for detecting the occurrence of a swallowing event using cervical auscultation signals, but significant work must be done to improve its performance before it can be implemented in an unsupervised manner. PMID:25658505
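
    A minimal sketch of the DBSCAN step on a toy stand-in for a swallowing-vibration trace (all data hypothetical): high-energy samples are clustered along the time axis, and each dense cluster is treated as one candidate swallow.

        import numpy as np
        from sklearn.cluster import DBSCAN

        rng = np.random.default_rng(1)
        t = np.arange(2000) / 100.0                  # 100 Hz, 20 s
        signal = 0.05 * rng.standard_normal(t.size)  # baseline noise
        signal[300:450] += 1.0                       # two synthetic "swallows"
        signal[1200:1380] += 1.0

        # cluster time stamps of high-energy samples; each dense cluster
        # of active samples is one candidate swallow, label -1 is noise
        active = np.abs(signal) > 0.5
        labels = DBSCAN(eps=0.1, min_samples=5).fit_predict(t[active].reshape(-1, 1))
        print("swallows detected:", len(set(labels) - {-1}))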

  3. Comparing direct and iterative equation solvers in a large structural analysis software system

    NASA Technical Reports Server (NTRS)

    Poole, E. L.

    1991-01-01

    Two direct Choleski equation solvers and two iterative preconditioned conjugate gradient (PCG) equation solvers used in a large structural analysis software system are described. The two direct solvers are implementations of the Choleski method for variable-band matrix storage and sparse matrix storage. The two iterative PCG solvers include the Jacobi conjugate gradient method and an incomplete Choleski conjugate gradient method. The performance of the direct and iterative solvers is compared by solving several representative structural analysis problems. Some key factors affecting the performance of the iterative solvers relative to the direct solvers are identified.
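
    The contrast can be sketched with modern sparse solvers (an illustrative stand-in, not the original software): a direct sparse factorization versus a Jacobi-preconditioned conjugate gradient solve on a small symmetric positive definite test matrix.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import cg, splu, LinearOperator

        n = 1000
        # 1D Laplacian as a small SPD stand-in for a stiffness matrix
        K = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
        f = np.ones(n)

        u_direct = splu(K).solve(f)                  # direct factorization

        d_inv = 1.0 / K.diagonal()                   # Jacobi preconditioner
        M = LinearOperator((n, n), matvec=lambda r: d_inv * r)
        u_pcg, info = cg(K, f, M=M, rtol=1e-10)      # rtol is named tol in older SciPy
        print(np.max(np.abs(u_direct - u_pcg)), info)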

  4. COMPARISON OF NONLINEAR DYNAMICS OPTIMIZATION METHODS FOR APS-U

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Y.; Borland, Michael

    Many different objectives and genetic algorithms have been proposed for storage ring nonlinear dynamics performance optimization. These optimization objectives include nonlinear chromaticities and driving/detuning terms, on-momentum and off-momentum dynamic acceptance, chromatic detuning, local momentum acceptance, variation of transverse invariant, Touschek lifetime, etc. In this paper, the effectiveness of several different optimization methods and objectives are compared for the nonlinear beam dynamics optimization of the Advanced Photon Source upgrade (APS-U) lattice. The optimized solutions from these different methods are preliminarily compared in terms of the dynamic acceptance, local momentum acceptance, chromatic detuning, and other performance measures.

  5. The performance evaluation model of mining project founded on the weight optimization entropy value method

    NASA Astrophysics Data System (ADS)

    Mao, Chao; Chen, Shou

    2017-01-01

    Because the traditional entropy value method still has low evaluation accuracy when evaluating the performance of mining projects, a performance evaluation model for mining projects founded on an improved entropy value method is proposed. First, a new weight assignment model is established, founded on compatibility matrix analysis of the analytic hierarchy process (AHP) and the entropy value method: once the compatibility matrix analysis achieves the consistency requirement, any difference between the subjective and objective weights is handled by moderately adjusting the proportion of each, and on this basis a fuzzy evaluation matrix is constructed for performance evaluation. Simulation experiments show that, compared with the traditional entropy value and compatibility matrix analysis methods, the proposed performance evaluation model of mining projects based on the improved entropy value method has higher assessment accuracy.
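
    For reference, the objective (entropy-derived) half of such a weighting scheme is straightforward; the sketch below computes entropy weights from a decision matrix (hypothetical scores, larger-is-better criteria assumed), leaving the AHP-based subjective weights and the adjustment step aside.

        import numpy as np

        def entropy_weights(D):
            # D: m alternatives x n criteria, positive, larger-is-better
            m, _ = D.shape
            P = D / D.sum(axis=0)                     # column-wise proportions
            k = 1.0 / np.log(m)
            with np.errstate(divide="ignore", invalid="ignore"):
                plogp = np.where(P > 0, P * np.log(P), 0.0)
            e = -k * plogp.sum(axis=0)                # entropy per criterion
            d = 1.0 - e                               # degree of diversification
            return d / d.sum()

        # hypothetical: 4 mining projects scored on 3 criteria
        D = np.array([[0.7, 120.0, 3.0],
                      [0.9,  80.0, 4.0],
                      [0.6, 150.0, 2.0],
                      [0.8, 100.0, 5.0]])
        print(entropy_weights(D))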

  6. Comparison of genetic algorithms with conjugate gradient methods

    NASA Technical Reports Server (NTRS)

    Bosworth, J. L.; Foo, N. Y.; Zeigler, B. P.

    1972-01-01

    Genetic algorithms for mathematical function optimization are modeled on search strategies employed in natural adaptation. Comparisons of genetic algorithms with conjugate gradient methods, which were made on an IBM 1800 digital computer, show that genetic algorithms display superior performance over gradient methods for functions which are poorly behaved mathematically, for multimodal functions, and for functions obscured by additive random noise. Genetic methods offer performance comparable to gradient methods for many of the standard functions.
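
    A minimal genetic-algorithm sketch on a multimodal test function of the kind where gradient methods struggle (the function and parameters are illustrative, not from the report):

        import numpy as np

        rng = np.random.default_rng(2)

        def rastrigin(x):
            # multimodal benchmark; many local minima trap gradient methods
            return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

        pop = rng.uniform(-5.12, 5.12, size=(50, 2))
        for _ in range(200):
            fit = rastrigin(pop)
            parents = pop[np.argsort(fit)[:25]]       # truncation selection
            mutants = parents[rng.integers(0, 25, 25)] + 0.1 * rng.standard_normal((25, 2))
            pop = np.vstack([parents, mutants])       # elitism + mutated offspring
        print(pop[np.argmin(rastrigin(pop))])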

  7. Comparing health system performance assessment and management approaches in the Netherlands and Ontario, Canada

    PubMed Central

    Tawfik-Shukor, Ali R; Klazinga, Niek S; Arah, Onyebuchi A

    2007-01-01

    Background Given the proliferation and the growing complexity of performance measurement initiatives in many health systems, the Netherlands and Ontario, Canada expressed interest in cross-national comparisons in an effort to promote knowledge transfer and best practice. To support this cross-national learning, a study was undertaken to compare health system performance approaches in The Netherlands with Ontario, Canada. Methods We explored the performance assessment framework and system of each constituency, the embeddedness of performance data in management and policy processes, and the interrelationships between the frameworks. Methods used included analysing governmental strategic planning and policy documents, literature and internet searches, comparative descriptive tables, and schematics. Data collection and analysis took place in Ontario and The Netherlands. A workshop to validate and discuss the findings was conducted in Toronto, adding important insights to the study. Results Both Ontario and The Netherlands conceive health system performance within supportive frameworks. However they differ in their assessment approaches. Ontario's Scorecard links performance measurement with strategy, aimed at health system integration. The Dutch Health Care Performance Report (Zorgbalans) does not explicitly link performance with strategy, and focuses on the technical quality of healthcare by measuring dimensions of quality, access, and cost against healthcare needs. A backbone 'five diamond' framework maps both frameworks and articulates the interrelations and overlap between their goals, themes, dimensions and indicators. The workshop yielded more contextual insights and further validated the comparative values of each constituency's performance assessment system. Conclusion To compare the health system performance approaches between The Netherlands and Ontario, Canada, several important conceptual and contextual issues must be addressed, before even attempting any future content comparisons and benchmarking. Such issues would lend relevant interpretational credibility to international comparative assessments of the two health systems. PMID:17319947

  8. Measuring the performance of livability programs.

    DOT National Transportation Integrated Search

    2013-07-01

    This report analyzes the performance measurement processes adopted by five large livability programs throughout the United States. It compares and contrasts these programs by examining existing research in performance measurement methods. The ...

  9. Autocorrelated process control: Geometric Brownian Motion approach versus Box-Jenkins approach

    NASA Astrophysics Data System (ADS)

    Salleh, R. M.; Zawawi, N. I.; Gan, Z. F.; Nor, M. E.

    2018-04-01

    The existence of autocorrelation will have a significant effect on the performance and accuracy of process control if the problem is not handled carefully. When dealing with an autocorrelated process, the Box-Jenkins method is often preferred because of its popularity. However, the computation involved in the Box-Jenkins method is complicated and challenging, which makes it time-consuming. Therefore, an alternative method known as Geometric Brownian Motion (GBM) is introduced to monitor the autocorrelated process. A real case study of furnace temperature data is conducted to compare the performance of the Box-Jenkins and GBM methods in monitoring an autocorrelated process. Both methods give the same results in terms of model accuracy and process control monitoring. Yet GBM is superior to the Box-Jenkins method owing to its simplicity and practicality, with a shorter computational time.
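
    A minimal sketch of the GBM side of such a comparison, on hypothetical furnace-temperature-like data: drift and volatility are estimated from log-returns, and one-step-ahead GBM forecast intervals serve as control limits.

        import numpy as np

        rng = np.random.default_rng(3)
        # hypothetical positive, autocorrelated process readings
        x = 800.0 * np.exp(np.cumsum(0.0001 + 0.002 * rng.standard_normal(500)))

        # GBM: d ln X = (mu - sigma^2 / 2) dt + sigma dW, unit time steps
        r = np.diff(np.log(x))
        sigma = r.std(ddof=1)
        mu = r.mean() + 0.5 * sigma**2

        # one-step-ahead forecast and 3-sigma control limits
        pred = x[:-1] * np.exp(mu - 0.5 * sigma**2)
        ucl, lcl = pred * np.exp(3 * sigma), pred * np.exp(-3 * sigma)
        print("out-of-control points:", np.sum((x[1:] > ucl) | (x[1:] < lcl)))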

  10. Comparing the performance of biomedical clustering methods.

    PubMed

    Wiwie, Christian; Baumbach, Jan; Röttger, Richard

    2015-11-01

    Identifying groups of similar objects is a popular first step in biomedical data analysis, but it is error-prone and impossible to perform manually. Many computational methods have been developed to tackle this problem. Here we assessed 13 well-known methods using 24 data sets ranging from gene expression to protein domains. Performance was judged on the basis of 13 common cluster validity indices. We developed a clustering analysis platform, ClustEval (http://clusteval.mpi-inf.mpg.de), to promote streamlined evaluation, comparison and reproducibility of clustering results in the future. This allowed us to objectively evaluate the performance of all tools on all data sets with up to 1,000 different parameter sets each, resulting in a total of more than 4 million calculated cluster validity indices. We observed that there was no universal best performer, but on the basis of this wide-ranging comparison we were able to develop a short guideline for biomedical clustering tasks. ClustEval allows biomedical researchers to pick the appropriate tool for their data type and allows method developers to compare their tool to the state of the art.
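
    In the same spirit, a toy comparison of two clustering tools on one data set under one validity index can be sketched as follows (scikit-learn and the silhouette index here are illustrative choices, not the ClustEval configuration):

        from sklearn.cluster import KMeans, AgglomerativeClustering
        from sklearn.datasets import make_blobs
        from sklearn.metrics import silhouette_score

        X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
        methods = {
            "k-means": KMeans(n_clusters=4, n_init=10, random_state=0),
            "ward": AgglomerativeClustering(n_clusters=4),
        }
        for name, model in methods.items():
            labels = model.fit_predict(X)             # cluster, then validate
            print(name, silhouette_score(X, labels))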

  11. Midstory hardwood species respond differently to chainsaw girdle method and herbicide treatment

    Treesearch

    Ronald A. Rathfon; Michael R. Saunders

    2013-01-01

    Foresters in the Central Hardwoods Region commonly fell or girdle interfering trees and apply herbicide to the cut surface when performing intermediate silvicultural treatments. The objective of this study was to compare the use of single and double chainsaw girdle methods in combination with a herbicide treatment and, within the double girdle method, compare herbicide...

  12. Comparison study of two procedures for the determination of emamectin benzoate in medicated fish feed.

    PubMed

    Farer, Leslie J; Hayes, John M

    2005-01-01

    A new method has been developed for the determination of emamectin benzoate in fish feed. The method uses a wet extraction, cleanup by solid-phase extraction, and quantitation and separation by liquid chromatography (LC). In this paper, we compare the performance of this method with that of a previously reported LC assay for the determination of emamectin benzoate in fish feed. Although similar to the previous method, the new procedure uses a different sample pretreatment, wet extraction, and quantitation method. The performance of the new method was compared with that of the previously reported method by analyses of 22 medicated feed samples from various commercial sources. A comparison of the results presented here reveals slightly lower assay values obtained with the new method. Although a paired sample t-test indicates the difference in results is significant, this difference is within the method precision of either procedure.

  13. Testing and Validation of Computational Methods for Mass Spectrometry.

    PubMed

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-04

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets ( http://compms.org/RefData ) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.

  14. Development of the Likelihood of Low Glucose (LLG) algorithm for evaluating risk of hypoglycemia: a new approach for using continuous glucose data to guide therapeutic decision making.

    PubMed

    Dunn, Timothy C; Hayter, Gary A; Doniger, Ken J; Wolpert, Howard A

    2014-07-01

    The objective was to develop an analysis methodology for generating diabetes therapy decision guidance using continuous glucose (CG) data. The novel Likelihood of Low Glucose (LLG) methodology, which exploits the relationship between glucose median, glucose variability, and hypoglycemia risk, is mathematically based and can be implemented in computer software. Using JDRF Continuous Glucose Monitoring Clinical Trial data, CG values for all participants were divided into 4-week periods starting at the first available sensor reading. The safety and sensitivity performance regarding hypoglycemia guidance "stoplights" were compared between the LLG method and one based on 10th percentile (P10) values. Examining 13 932 hypoglycemia guidance outputs, the safety performance of the LLG method ranged from 0.5% to 5.4% incorrect "green" indicators, compared with 0.9% to 6.0% for a P10 value of 110 mg/dL. Guidance with lower P10 values yielded higher rates of incorrect indicators, such as 11.7% to 38% at 80 mg/dL. When evaluated only for periods of higher glucose (median above 155 mg/dL), the safety performance of the LLG method was superior to the P10 method. Sensitivity performance of correct "red" indicators of the LLG method had an in-sample rate of 88.3% and an out-of-sample rate of 59.6%, comparable with the P10 method up to about 80 mg/dL. To aid in therapeutic decision making, we developed an algorithm-supported report that graphically highlights low glucose risk and increased variability. When tested with clinical data, the proposed method demonstrated equivalent or superior safety and sensitivity performance. © 2014 Diabetes Technology Society.

  15. Quantitative Evaluation of Automated Skull-Stripping Methods Applied to Contemporary and Legacy Images: Effects of Diagnosis, Bias Correction, and Slice Location

    PubMed Central

    Fennema-Notestine, Christine; Ozyurt, I. Burak; Clark, Camellia P.; Morris, Shaunna; Bischoff-Grethe, Amanda; Bondi, Mark W.; Jernigan, Terry L.; Fischl, Bruce; Segonne, Florent; Shattuck, David W.; Leahy, Richard M.; Rex, David E.; Toga, Arthur W.; Zou, Kelly H.; BIRN, Morphometry; Brown, Gregory G.

    2008-01-01

    Performance of automated methods to isolate brain from nonbrain tissues in magnetic resonance (MR) structural images may be influenced by MR signal inhomogeneities, type of MR image set, regional anatomy, and age and diagnosis of subjects studied. The present study compared the performance of four methods: Brain Extraction Tool (BET; Smith [2002]: Hum Brain Mapp 17:143–155); 3dIntracranial (Ward [1999] Milwaukee: Biophysics Research Institute, Medical College of Wisconsin; in AFNI); a Hybrid Watershed algorithm (HWA, Segonne et al. [2004] Neuroimage 22:1060–1075; in FreeSurfer); and Brain Surface Extractor (BSE, Sandor and Leahy [1997] IEEE Trans Med Imag 16:41–54; Shattuck et al. [2001] Neuroimage 13:856 – 876) to manually stripped images. The methods were applied to uncorrected and bias-corrected datasets; Legacy and Contemporary T1-weighted image sets; and four diagnostic groups (depressed, Alzheimer’s, young and elderly control). To provide a criterion for outcome assessment, two experts manually stripped six sagittal sections for each dataset in locations where brain and nonbrain tissue are difficult to distinguish. Methods were compared on Jaccard similarity coefficients, Hausdorff distances, and an Expectation-Maximization algorithm. Methods tended to perform better on contemporary datasets; bias correction did not significantly improve method performance. Mesial sections were most difficult for all methods. Although AD image sets were most difficult to strip, HWA and BSE were more robust across diagnostic groups compared with 3dIntracranial and BET. With respect to specificity, BSE tended to perform best across all groups, whereas HWA was more sensitive than other methods. The results of this study may direct users towards a method appropriate to their T1-weighted datasets and improve the efficiency of processing for large, multisite neuroimaging studies. PMID:15986433

  16. Timing performance comparison of digital methods in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Aykac, Mehmet; Hong, Inki; Cho, Sanghee

    2010-11-01

    Accurate timing information is essential in positron emission tomography (PET). Recent improvements in high-speed electronics have made digital methods more attractive for finding alternative ways to create a time mark for an event. Two new digital methods (the mean PMT pulse model, MPPM, and the median filtered zero crossing method, MFZCM) were introduced in this work and compared to traditional methods such as digital leading edge (LE) and digital constant fraction discrimination (CFD). In addition, the performance of all four digital methods was compared to analog-based LE and CFD. The time resolution values for MPPM and MFZCM were measured to be below 300 ps at 1.6 GS/s and above, similar to the analog-based coincidence timing results. In addition, the two digital methods were insensitive to changes in the threshold setting, which might give some improvement in system dead time.
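
    For orientation, a digital constant-fraction discriminator of the traditional kind compared against here can be sketched as follows (the toy pulse and parameters are assumptions):

        import numpy as np

        def digital_cfd_time(pulse, dt, frac=0.3, delay=4):
            # CFD signal: delayed pulse minus an attenuated prompt copy
            shaped = np.zeros_like(pulse)
            shaped[delay:] = pulse[:-delay]
            shaped -= frac * pulse
            # rising zero crossing, refined by linear interpolation
            i = np.where((shaped[:-1] < 0) & (shaped[1:] >= 0))[0][0]
            return (i - shaped[i] / (shaped[i + 1] - shaped[i])) * dt

        t = np.arange(50.0)
        pulse = np.exp(-((t - 10.0) ** 2) / 8.0)      # toy PMT-like pulse
        print(digital_cfd_time(pulse, dt=0.625))      # ns per sample at 1.6 GS/s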

  17. Comparative analysis of machine learning methods in ligand-based virtual screening of large compound libraries.

    PubMed

    Ma, Xiao H; Jia, Jia; Zhu, Feng; Xue, Ying; Li, Ze R; Chen, Yu Z

    2009-05-01

    Machine learning methods have been explored as ligand-based virtual screening tools for facilitating drug lead discovery. These methods predict compounds of specific pharmacodynamic, pharmacokinetic or toxicological properties based on their structure-derived structural and physicochemical properties. Increasing attention has been directed at these methods because of their capability in predicting compounds of diverse structures and complex structure-activity relationships without requiring the knowledge of target 3D structure. This article reviews current progresses in using machine learning methods for virtual screening of pharmacodynamically active compounds from large compound libraries, and analyzes and compares the reported performances of machine learning tools with those of structure-based and other ligand-based (such as pharmacophore and clustering) virtual screening methods. The feasibility to improve the performance of machine learning methods in screening large libraries is discussed.

  18. δ-Similar Elimination to Enhance Search Performance of Multiobjective Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Aguirre, Hernán; Sato, Masahiko; Tanaka, Kiyoshi

    In this paper, we propose δ-similar elimination to improve the search performance of multiobjective evolutionary algorithms in combinatorial optimization problems. This method eliminates similar individuals in objective space to fairly distribute selection among the different regions of the instantaneous Pareto front. We investigate four elimination methods, analyzing their effects using NSGA-II. In addition, we compare the search performance of NSGA-II enhanced by our method and NSGA-II enhanced by controlled elitism.

  19. A comparative study on methods of improving SCR for ship detection in SAR image

    NASA Astrophysics Data System (ADS)

    Lang, Haitao; Shi, Hongji; Tao, Yunhong; Ma, Li

    2017-10-01

    Knowledge about ship positions plays a critical role in a wide range of maritime applications. To improve the performance of ship detectors in SAR images, an effective strategy is to improve the signal-to-clutter ratio (SCR) before conducting detection. In this paper, we present a comparative study of methods for improving the SCR, including power-law scaling (PLS), the max-mean and max-median filters (MMF1 and MMF2), a wavelet transform method (TWT), the traditional SPAN detector, the reflection symmetric metric (RSM), and the scattering mechanism metric (SMM). The SCR improvement achieved on SAR images and the ship detection performance with a cell-averaging CFAR (CA-CFAR) detector are evaluated for the different methods on two real SAR data sets.

  20. Behind the Final Grade in Hybrid v. Traditional Courses: Comparing Student Performance by Assessment Type, Core Competency, and Course Objective

    ERIC Educational Resources Information Center

    Bain, Lisa Z.

    2012-01-01

    There are many different delivery methods used by institutions of higher education. These include traditional, hybrid, and online course offerings. The comparisons of these typically use final grade as the measure of student performance. This research study looks behind the final grade and compares student performance by assessment type, core…

  1. Comparison of measured efficiencies of nine turbine designs with efficiencies predicted by two empirical methods

    NASA Technical Reports Server (NTRS)

    English, Robert E; Cavicchi, Richard H

    1951-01-01

    Empirical methods of Ainley and of Kochendorfer and Nettles were used to predict the performance of nine turbine designs. Measured and predicted performances were compared. Appropriate values of the blade-loss parameter were determined for the method of Kochendorfer and Nettles. The measured design-point efficiencies were lower than predicted by as much as 0.09 (Ainley) and 0.07 (Kochendorfer and Nettles). For the method of Kochendorfer and Nettles, appropriate values of the blade-loss parameter ranged from 0.63 to 0.87, and the off-design performance was accurately predicted.

  2. A comparison of fitness-case sampling methods for genetic programming

    NASA Astrophysics Data System (ADS)

    Martínez, Yuliana; Naredo, Enrique; Trujillo, Leonardo; Legrand, Pierrick; López, Uriel

    2017-11-01

    Genetic programming (GP) is an evolutionary computation paradigm for automatic program induction. GP has produced impressive results but it still needs to overcome some practical limitations, particularly its high computational cost, overfitting and excessive code growth. Recently, many researchers have proposed fitness-case sampling methods to overcome some of these problems, with mixed results in several limited tests. This paper presents an extensive comparative study of four fitness-case sampling methods, namely: Interleaved Sampling, Random Interleaved Sampling, Lexicase Selection and Keep-Worst Interleaved Sampling. The algorithms are compared on 11 symbolic regression problems and 11 supervised classification problems, using 10 synthetic benchmarks and 12 real-world data-sets. They are evaluated based on test performance, overfitting and average program size, comparing them with a standard GP search. Comparisons are carried out using non-parametric multigroup tests and post hoc pairwise statistical tests. The experimental results suggest that fitness-case sampling methods are particularly useful for difficult real-world symbolic regression problems, improving performance, reducing overfitting and limiting code growth. On the other hand, it seems that fitness-case sampling cannot improve upon GP performance when considering supervised binary classification.
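
    Of the four methods, Lexicase Selection is the easiest to state compactly; a minimal sketch (with random errors standing in for program performance on fitness cases) follows:

        import numpy as np

        def lexicase_select(errors, rng):
            # errors: (pop_size, n_cases); pick a parent by filtering the
            # population through fitness cases in random order
            candidates = np.arange(errors.shape[0])
            for case in rng.permutation(errors.shape[1]):
                col = errors[candidates, case]
                candidates = candidates[col == col.min()]
                if candidates.size == 1:
                    break
            return rng.choice(candidates)

        rng = np.random.default_rng(4)
        errors = rng.random((20, 30))                 # 20 programs, 30 cases
        print(lexicase_select(errors, rng))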

  3. A simple performance calculation method for LH2/LOX engines with different power cycles

    NASA Technical Reports Server (NTRS)

    Schmucker, R. H.

    1973-01-01

    A simple method for the calculation of the specific impulse of an engine with a gas generator cycle is presented. The solution is obtained by a power balance between turbine and pump. Approximate equations for the performance of the combustion products of LH2/LOX are derived. Performance results are compared with solutions of different engine types.

  4. Use of the landmark method to address immortal person-time bias in comparative effectiveness research: a simulation study.

    PubMed

    Mi, Xiaojuan; Hammill, Bradley G; Curtis, Lesley H; Lai, Edward Chia-Cheng; Setoguchi, Soko

    2016-11-20

    Observational comparative effectiveness and safety studies are often subject to immortal person-time, a period of follow-up during which outcomes cannot occur because of the treatment definition. Common approaches, like excluding immortal time from the analysis or naïvely including immortal time in the analysis, are known to result in biased estimates of treatment effect. Other approaches, such as the Mantel-Byar and landmark methods, have been proposed to handle immortal time. Little is known about the performance of the landmark method in different scenarios. We conducted extensive Monte Carlo simulations to assess the performance of the landmark method compared with other methods in settings that reflect realistic scenarios. We considered four landmark times for the landmark method. We found that the Mantel-Byar method provided unbiased estimates in all scenarios, whereas the exclusion and naïve methods resulted in substantial bias when the hazard of the event was constant or decreased over time. The landmark method performed well in correcting immortal person-time bias in all scenarios when the treatment effect was small, and provided unbiased estimates when there was no treatment effect. The bias associated with the landmark method tended to be small when the treatment rate was higher in the early follow-up period than it was later. These findings were confirmed in a case study of chronic obstructive pulmonary disease. Copyright © 2016 John Wiley & Sons, Ltd.
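
    A minimal sketch of a landmark analysis on simulated data (the cohort, the landmark time of 2, and the lifelines dependency are all assumptions for illustration): treatment status is fixed at the landmark, only subjects still at risk are retained, and follow-up restarts at the landmark.

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(5)
        n = 500
        rx_time = np.where(rng.random(n) < 0.5, rng.exponential(3.0, n), np.inf)
        event_time = rng.exponential(10.0, n)
        df = pd.DataFrame({"event_time": event_time,
                           "event": (event_time < 30.0).astype(int),
                           "rx_time": rx_time})          # inf = never treated

        landmark = 2.0
        lm = df[df["event_time"] > landmark].copy()      # survivors at landmark
        lm["treated"] = (lm["rx_time"] <= landmark).astype(int)
        lm["t_from_lm"] = lm["event_time"] - landmark    # clock restarts here

        cph = CoxPHFitter()
        cph.fit(lm[["t_from_lm", "event", "treated"]],
                duration_col="t_from_lm", event_col="event")
        print(cph.summary[["coef", "p"]])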

  5. Interactive laboratory classes enhance neurophysiological knowledge in Thai medical students.

    PubMed

    Wongjarupong, Nicha; Niyomnaitham, Danai; Vilaisaktipakorn, Pitchamol; Suksiriworaboot, Tanawin; Qureshi, Shaun Peter; Bongsebandhu-Phubhakdi, Saknan

    2018-03-01

    Interactive laboratory class (ILC) is a two-way communication teaching method that encourages students to correlate laboratory findings with materials from lectures. In Thai medical education, active learning methods are uncommon. This paper aims to establish 1) whether ILCs effectively promote physiology learning; 2) whether their effectiveness extends to both previously academically high-performing and low-performing students; and 3) the acceptability of ILCs to Thai medical students as a novel learning method. Two hundred seventy-eight second-year medical students were recruited to this study. We conducted three ILC sessions, each following the corresponding lectures. We carried out multiple-choice pre- and post-ILC assessments of knowledge and compared them using repeated-measures ANOVA and unpaired t-tests. Subgroup analysis was performed to compare high-performance (HighP) and low-performance (LowP) students. After the ILCs, participants self-rated their knowledge and satisfaction. Post-ILC test scores increased significantly compared with pre-ILC test scores in all three sessions. Mean scores of each post-ILC test increased significantly from the pre-ILC test in both the LowP and HighP groups. More students self-reported a "very high" or "high" level of knowledge after ILCs. Most students agreed that ILCs provided more discussion opportunities, motivated their learning, and made lessons more enjoyable. As an adjunct to lectures, ILCs can enhance knowledge in medical students, regardless of previous academic performance. Students perceived ILCs as useful and acceptable. This study supports active learning methods in physiology education, regardless of cultural context.

  6. Performance evaluation of the zero-multipole summation method in modern molecular dynamics software.

    PubMed

    Sakuraba, Shun; Fukuda, Ikuo

    2018-05-04

    The zero-multipole summation method (ZMM) is a cutoff-based method for calculating electrostatic interactions in molecular dynamics simulations, utilizing an electrostatic neutralization principle as a physical basis. Since the accuracies of the ZMM have been shown to be sufficient in previous studies, it is highly desirable to clarify its practical performance. In this paper, the performance of the ZMM is compared with that of the smooth particle mesh Ewald method (SPME), where both methods are implemented in the molecular dynamics software package GROMACS. Extensive performance comparisons against a highly optimized, parameter-tuned SPME implementation are performed for various-sized water systems and two protein-water systems. We analyze in detail the dependence of the performance on the potential parameters and the number of CPU cores. Even though the ZMM uses a larger cutoff distance than the SPME does, the performance of the ZMM is comparable to or better than that of the SPME. This is because the ZMM does not require a time-consuming electrostatic convolution and because the ZMM gains short neighbor-list distances due to the smooth damping feature of the pairwise potential function near the cutoff length. We found, in particular, that the ZMM with quadrupole or octupole cancellation and no damping factor is an excellent candidate for the fast calculation of electrostatic interactions. © 2018 Wiley Periodicals, Inc.

  7. Evapotranspiration Calculations for an Alpine Marsh Meadow Site in Three-river Headwater Region

    NASA Astrophysics Data System (ADS)

    Zhou, B.; Xiao, H.

    2016-12-01

    Daily radiation and meteorological data were collected at an alpine marsh meadow site in the Three-river Headwater Region (THR). These data were used to assess radiation models, determined after comparing the performance of the Zuo model with the model recommended by FAO56 P-M. Four methods, the FAO56 P-M, Priestley-Taylor, Hargreaves, and Makkink methods, were applied to determine daily reference evapotranspiration (ETr) for the growing season, and empirical models for estimating daily actual evapotranspiration (ETa) were built relating ETr derived from the four methods to evapotranspiration derived from the Bowen ratio method for alpine marsh meadow in this region. Comparing the performance of the four empirical models by RMSE, MAE and AI showed that all of the models can provide good estimates of daily ETa for alpine marsh meadow in this region, and that the FAO56 P-M and Makkink empirical models performed better than the Priestley-Taylor and Hargreaves models.
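
    Of the four ETr methods, Hargreaves is the simplest to reproduce; a sketch of the standard formulation follows (the inputs are hypothetical, and the study's local calibration is not reflected):

        import numpy as np

        def hargreaves_et0(tmax, tmin, ra):
            # tmax/tmin in deg C; ra = extraterrestrial radiation, MJ m-2 day-1
            # 0.408 converts MJ m-2 day-1 to mm day-1 of evaporated water
            tmean = (tmax + tmin) / 2.0
            return 0.0023 * 0.408 * ra * (tmean + 17.8) * np.sqrt(tmax - tmin)

        print(hargreaves_et0(tmax=18.0, tmin=2.0, ra=30.0))  # mm/day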

  8. Boosting instance prototypes to detect local dermoscopic features.

    PubMed

    Situ, Ning; Yuan, Xiaojing; Zouridakis, George

    2010-01-01

    Local dermoscopic features are useful in many dermoscopic criteria for skin cancer detection. We address the problem of detecting local dermoscopic features from epiluminescence (ELM) microscopy skin lesion images. We formulate the recognition of local dermoscopic features as a multi-instance learning (MIL) problem. We employ the method of diverse density (DD) and evidence confidence (EC) function to convert MIL to a single-instance learning (SIL) problem. We apply Adaboost to improve the classification performance with support vector machines (SVMs) as the base classifier. We also propose to boost the selection of instance prototypes through changing the data weights in the DD function. We validate the methods on detecting ten local dermoscopic features from a dataset with 360 images. We compare the performance of the MIL approach, its boosting version, and a baseline method without using MIL. Our results show that boosting can provide performance improvement compared to the other two methods.

  9. Matching weights to simultaneously compare three treatment groups: Comparison to three-way matching

    PubMed Central

    Yoshida, Kazuki; Hernández-Díaz, Sonia; Solomon, Daniel H.; Jackson, John W.; Gagne, Joshua J.; Glynn, Robert J.; Franklin, Jessica M.

    2017-01-01

    BACKGROUND Propensity score matching is a commonly used tool. However, its use in settings with more than two treatment groups has been less frequent. We examined the performance of a recently developed propensity score weighting method in the three treatment group setting. METHODS The matching weight method is an extension of inverse probability of treatment weighting (IPTW) that reweights both exposed and unexposed groups to emulate a propensity score matched population. Matching weights can generalize to multiple treatment groups. The performance of matching weights in the three-group setting was compared via simulation to three-way 1:1:1 propensity score matching and IPTW. We also applied these methods to an empirical example that compared the safety of three analgesics. RESULTS Matching weights had similar bias, but better mean squared error (MSE) compared to three-way matching in all scenarios. The benefits were more pronounced in scenarios with a rare outcome, unequally sized treatment groups, or poor covariate overlap. IPTW’s performance was highly dependent on covariate overlap. In the empirical example, matching weights achieved the best balance for 24 out of 35 covariates. Hazard ratios were numerically similar to matching. However, the confidence intervals were narrower for matching weights. CONCLUSIONS Matching weights demonstrated improved performance over three-way matching in terms of MSE, particularly in simulation scenarios where finding matched subjects was difficult. Given its natural extension to settings with even more than three groups, we recommend matching weights for comparing outcomes across multiple treatment groups, particularly in settings with rare outcomes or unequal exposure distributions. PMID:28151746
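
    A sketch of three-group matching weights on simulated data (the covariates, assignment model, and sample size are all hypothetical): generalized propensity scores come from a multinomial logistic model, and each subject's weight is the smallest score across arms divided by the score of the arm actually received.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(6)
        n = 3000
        X = rng.standard_normal((n, 2))
        logits = np.stack([X @ np.array([0.5, 0.0]),
                           X @ np.array([0.0, 0.5]),
                           np.zeros(n)], axis=1)
        probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
        z = np.array([rng.choice(3, p=p) for p in probs])   # 3-arm assignment

        # generalized propensity scores, then matching weights
        gps = LogisticRegression(max_iter=1000).fit(X, z).predict_proba(X)
        mw = gps.min(axis=1) / gps[np.arange(n), z]
        # mw reweights all three arms to emulate a 1:1:1 matched population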

  10. Comparative evaluation of fluorescent in situ hybridization and Giemsa microscopy with quantitative real-time PCR technique in detecting malaria parasites in a holoendemic region of Kenya.

    PubMed

    Osoga, Joseph; Waitumbi, John; Guyah, Bernard; Sande, James; Arima, Cornel; Ayaya, Michael; Moseti, Caroline; Morang'a, Collins; Wahome, Martin; Achilla, Rachel; Awinda, George; Nyakoe, Nancy; Wanja, Elizabeth

    2017-07-24

    Early and accurate diagnosis of malaria is important in treatment as well as in the clinical evaluation of drugs and vaccines. Evaluation of Giemsa-stained smears remains the gold standard for malaria diagnosis, although diagnostic errors and potentially biased estimates of protective efficacy have been reported in practice. Plasmodium genus fluorescent in situ hybridization (P-Genus FISH) is a microscopy-based method that uses fluorescently labelled oligonucleotide probes targeted to pathogen-specific ribosomal RNA fragments to detect malaria parasites in whole blood. This study sought to evaluate the diagnostic performance of P-Genus FISH alongside Giemsa microscopy compared to quantitative reverse transcription polymerase chain reaction (qRT-PCR) in a clinical setting. Five hundred study participants were recruited prospectively and screened for Plasmodium parasites by P-Genus FISH assay and Giemsa microscopy. The microscopic methods were performed by two trained personnel and were blinded; if the results were discordant, a third reading was performed as a tie breaker. The diagnostic performance of both methods was evaluated against qRT-PCR as a more sensitive method. The number of Plasmodium-positive cases was 26.8% by P-Genus FISH, 33.2% by Giemsa microscopy, and 51.2% by qRT-PCR. The three methods had 46.8% concordant results, with 61 positive cases and 173 negative cases. Compared to qRT-PCR, the sensitivity and specificity of the P-Genus FISH assay were 29.3 and 75.8%, respectively, while microscopy had 58.2 and 93.0%, respectively. Microscopy had higher positive and negative predictive values (89.8 and 68.0%, respectively) compared to P-Genus FISH (56.0 and 50.5%). Overall, microscopy had a good measure of agreement (76%, k = 0.51) compared to P-Genus FISH (52%, k = 0.05). The diagnostic performance of P-Genus FISH was shown to be inferior to Giemsa microscopy in the clinical samples. This hinders the possible application of the method in the field despite its many advantages, especially for the diagnosis of low-parasite-density infections. The P-Genus assay has great potential, but application of the method in a clinical setting would rely on extensive training of microscopists and continuous proficiency testing.
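
    The reported operating characteristics follow from the usual 2x2 contingency formulas; a small helper (with made-up counts, not the study's raw data) is sketched below:

        def diagnostic_metrics(tp, fp, fn, tn):
            # sensitivity, specificity, predictive values, and Cohen's kappa
            n = tp + fp + fn + tn
            sens = tp / (tp + fn)
            spec = tn / (tn + fp)
            ppv = tp / (tp + fp)
            npv = tn / (tn + fn)
            po = (tp + tn) / n                       # observed agreement
            pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
            return sens, spec, ppv, npv, (po - pe) / (1 - pe)

        print(diagnostic_metrics(tp=120, fp=30, fn=90, tn=260))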

  11. Biases and Power for Groups Comparison on Subjective Health Measurements

    PubMed Central

    Hamel, Jean-François; Hardouin, Jean-Benoit; Le Neel, Tanguy; Kubis, Gildas; Roquelaure, Yves; Sébille, Véronique

    2012-01-01

    Subjective health measurements are increasingly used in clinical research, particularly for comparisons of patient groups. Two main types of analytical strategies can be used for such data: so-called classical test theory (CTT), which relies on observed scores, and models from Item Response Theory (IRT), which rely on a response model relating item responses to a latent parameter, often called the latent trait. Whether IRT or CTT would be the most appropriate method to compare two independent groups of patients on a patient-reported outcomes measurement remains unknown and was investigated using simulations. For CTT-based analyses, groups were compared using a t-test on the scores. For IRT-based analyses, several methods were compared, according to whether the Rasch model was considered with random effects or with fixed effects, and whether the group effect was included as a covariate or not. Individual latent trait values were estimated using either a deterministic method or stochastic approaches. Latent traits were then compared with a t-test. Finally, a two-step method was performed to compare the latent trait distributions, and a Wald test was performed to test the group effect in the Rasch model including group covariates. The only unbiased IRT-based method was the Wald test of the group covariate in the random-effects Rasch model. This model displayed the highest observed power, which was similar to the power of the score t-test. These results need to be extended to the case frequently encountered in practice where data are missing and possibly informative. PMID:23115620
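
    For intuition, a minimal sketch of the CTT arm of such a simulation (item difficulties, sample sizes, and the group shift are assumed values): item responses are generated from a Rasch model whose latent trait is shifted in one group, and the groups are compared with a t-test on the sum scores.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n, items, delta = 100, 10, 0.5               # subjects per group, items, group effect
        diff = np.linspace(-1.5, 1.5, items)         # item difficulties (assumed)

        def simulate(theta):
            # Rasch model: P(correct) = logistic(theta - difficulty)
            p = 1 / (1 + np.exp(-(theta[:, None] - diff)))
            return (rng.random((len(theta), items)) < p).astype(int)

        scores_a = simulate(rng.normal(0.0, 1.0, n)).sum(axis=1)    # group A sum scores
        scores_b = simulate(rng.normal(delta, 1.0, n)).sum(axis=1)  # group B, shifted trait
        print(stats.ttest_ind(scores_a, scores_b))   # CTT-style comparison on scores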

  12. Speckle noise reduction in ultrasound images using a discrete wavelet transform-based image fusion technique.

    PubMed

    Choi, Hyun Ho; Lee, Ju Hwan; Kim, Sung Min; Park, Sung Yun

    2015-01-01

    Here, the speckle noise in ultrasonic images is removed using an image fusion-based denoising method. To optimize the denoising performance, individual discrete wavelet transform (DWT) and filtering techniques were analyzed and compared, and their performances were compared in order to derive the optimal input conditions. To evaluate speckle noise removal, the image fusion algorithm was applied to the ultrasound images and comparatively analyzed against the original image without the algorithm. Applying DWT or filtering techniques alone caused information loss, retained noise characteristics, and did not deliver the best noise reduction performance. Conversely, an image fusion method applied to the SRAD-original input pair preserved the key information in the original image while removing the speckle noise. Based on these characteristics, the SRAD-original input conditions gave the best denoising performance for the ultrasound images. The denoising technique proposed on the basis of these results was confirmed to have high potential for clinical application.
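
    A minimal sketch of a one-level DWT fusion rule of the kind discussed above (assuming the PyWavelets package; the fusion rule here, averaging approximation subbands and keeping the larger-magnitude detail coefficients, is a common choice and not necessarily the paper's exact pipeline):

        import numpy as np
        import pywt

        def dwt_fuse(img_a, img_b, wavelet="db2"):
            """Fuse two registered images: average the approximation subbands,
            keep the larger-magnitude detail coefficients (one DWT level)."""
            cA1, (cH1, cV1, cD1) = pywt.dwt2(img_a, wavelet)
            cA2, (cH2, cV2, cD2) = pywt.dwt2(img_b, wavelet)
            cA = (cA1 + cA2) / 2.0
            details = [np.where(np.abs(d1) >= np.abs(d2), d1, d2)
                       for d1, d2 in zip((cH1, cV1, cD1), (cH2, cV2, cD2))]
            return pywt.idwt2((cA, tuple(details)), wavelet)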

  13. Numerical methods in Markov chain modeling

    NASA Technical Reports Server (NTRS)

    Philippe, Bernard; Saad, Youcef; Stewart, William J.

    1989-01-01

    Several methods for computing stationary probability distributions of Markov chains are described and compared. The main linear algebra problem consists of computing an eigenvector of a sparse, usually nonsymmetric, matrix associated with a known eigenvalue. It can also be cast as a problem of solving a homogeneous singular linear system. Several methods based on combinations of Krylov subspace techniques are presented. The performance of these methods on some realistic problems is compared.
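
    The core computation can be sketched with plain power iteration on the left eigenvector problem pi P = pi (a toy dense example; the paper's interest is in sparse, Krylov-subspace solvers for the same problem):

        import numpy as np

        def stationary(P, tol=1e-12, max_iter=100_000):
            """Stationary distribution of a row-stochastic transition matrix P,
            found by power iteration on pi P = pi."""
            pi = np.full(P.shape[0], 1.0 / P.shape[0])
            for _ in range(max_iter):
                new = pi @ P
                if np.abs(new - pi).sum() < tol:
                    return new / new.sum()
                pi = new
            return pi / pi.sum()

        P = np.array([[0.9, 0.1, 0.0],
                      [0.2, 0.7, 0.1],
                      [0.0, 0.3, 0.7]])
        print(stationary(P))    # left eigenvector for the known eigenvalue 1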

  14. Performance assessment of methods for estimation of fractal dimension from scanning electron microscope images.

    PubMed

    Risović, Dubravko; Pavlović, Zivko

    2013-01-01

    Processing of gray scale images in order to determine the corresponding fractal dimension is very important due to the widespread use of imaging technologies and the application of fractal analysis in many areas of science, technology, and medicine. To this end, many methods for estimation of fractal dimension from gray scale images have been developed and are routinely used. Unfortunately, different methods (dimension estimators) often yield significantly different results in a manner that makes interpretation difficult. Here, we report results of a comparative assessment of the performance of several of the most frequently used algorithms/methods for estimation of fractal dimension. To that purpose, we have used scanning electron microscope images of aluminum oxide surfaces with different fractal dimensions. The performance of the algorithms/methods was evaluated using the statistical Z-score approach. The differences between the performances of the six methods are discussed and further compared with results obtained by electrochemical impedance spectroscopy (EIS) on the same samples. The analysis shows that the performance of the investigated algorithms varies considerably and that systematically erroneous fractal dimensions could be estimated using certain methods. The differential cube counting, triangulation, and box counting algorithms showed satisfactory performance in the whole investigated range of fractal dimensions. The difference statistic proved less reliable, generating 4% unsatisfactory results. The performances of the power spectrum, partitioning, and EIS methods were unsatisfactory in 29%, 38%, and 75% of estimations, respectively. The results of this study should be useful and provide guidelines to researchers using or attempting fractal analysis of images obtained by scanning microscopy or atomic force microscopy. © Wiley Periodicals, Inc.
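
    For reference, the simplest member of the family evaluated above is box counting; a sketch for a binarized image (the gray-scale estimators compared in the paper, such as differential cube counting, generalize this idea):

        import numpy as np

        def box_counting_dimension(binary_img):
            """Estimate fractal dimension as the slope of log N(s) vs log(1/s),
            where N(s) counts boxes of side s containing foreground pixels.
            Assumes foreground is present at every scale."""
            img = np.asarray(binary_img, dtype=bool)
            size = min(img.shape)
            sizes = [2 ** k for k in range(1, int(np.log2(size)))]
            counts = []
            for s in sizes:
                h, w = img.shape[0] // s * s, img.shape[1] // s * s
                blocks = img[:h, :w].reshape(h // s, s, w // s, s)
                counts.append(blocks.any(axis=(1, 3)).sum())
            slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
            return slope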

  15. The model of flood control using servqual method and importance performance analysis in Surakarta City – Indonesia

    NASA Astrophysics Data System (ADS)

    Titi Purwantini, V.; Sutanto, Yusuf

    2018-05-01

    This research creates a model of flood control in the city of Surakarta using the Servqual method and Importance Performance Analysis. Service quality is generally defined as the overall assessment of a service by the customers, or the extent to which a service meets customers' needs or expectations. The first purpose of this study is to find a model of flood control appropriate to the condition of the community of Surakarta, i.e., a model that can provide satisfactory service for the people of Surakarta who live in flood locations. The second is to find the right model to improve the service performance of the Surakarta City Government in serving people in flood locations. The method used to determine public satisfaction with service quality is to measure the difference between the quality of service expected by the community and the service actually received; this is the Servqual method. The performance of city government officials is assessed by comparing actual performance with the quality of services provided; this is Importance Performance Analysis. Samples were people living in flooded areas in the city of Surakarta. The result of this research is Satisfaction = Responsiveness + Reliability + Assurance + Empathy + Tangibles (Servqual model), and from the Importance Performance Analysis Cartesian diagram a flood control formula can be derived: Flood Control = High performance
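
    As a concrete reading of the two instruments named above (a sketch with invented survey numbers, not the study's data): the Servqual gap for each dimension is the perception score minus the expectation score, and Importance Performance Analysis places each dimension into a quadrant by comparing its importance and performance against the respective means. Using expectations as importance weights is an assumption of this sketch.

        import numpy as np

        dims = ["responsiveness", "reliability", "assurance", "empathy", "tangibles"]
        expectation = np.array([4.6, 4.5, 4.4, 4.3, 4.2])   # illustrative means (1-5 scale)
        perception  = np.array([3.8, 4.0, 4.1, 3.5, 3.9])

        gap = perception - expectation                       # Servqual gap per dimension
        for d, g in zip(dims, gap):
            print(f"{d:15s} gap = {g:+.2f}")

        # Importance-Performance Analysis: quadrant relative to the means.
        importance = expectation                             # assumed importance proxy
        for d, i, p in zip(dims, importance, perception):
            quadrant = ("concentrate here" if i >= importance.mean() and p < perception.mean()
                        else "keep up the good work" if i >= importance.mean()
                        else "low priority" if p < perception.mean()
                        else "possible overkill")
            print(d, "->", quadrant)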

  16. Validation of a simple method for predicting the disinfection performance in a flow-through contactor.

    PubMed

    Pfeiffer, Valentin; Barbeau, Benoit

    2014-02-01

    Despite its shortcomings, the T10 method introduced by the United States Environmental Protection Agency (USEPA) in 1989 is currently the method most frequently used in North America to calculate disinfection performance. Other methods (e.g., the Integrated Disinfection Design Framework, IDDF) have been advanced as replacements, and more recently, the USEPA suggested the Extended T10 and Extended CSTR (Continuous Stirred-Tank Reactor) methods to improve the inactivation calculations within ozone contactors. To develop a method that fully considers the hydraulic behavior of the contactor, two models (Plug Flow with Dispersion and N-CSTR) were successfully fitted to five tracer test results derived from four water treatment plants and a pilot-scale contactor. A new method based on the N-CSTR model was defined as the Partially Segregated (Pseg) method. The predictions from all the methods mentioned were compared under conditions of poor and good hydraulic performance, low and high disinfectant decay, and different levels of inactivation. These methods were also compared with experimental results from a chlorine pilot-scale contactor used for Escherichia coli inactivation. The T10 and Extended T10 methods led to large over- and under-estimations. The Segregated Flow Analysis (used in the IDDF) also considerably overestimated the inactivation under high disinfectant decay. Only the Extended CSTR and Pseg methods produced realistic and conservative predictions in all cases. Finally, a simple implementation procedure of the Pseg method was suggested for calculation of disinfection performance. Copyright © 2013 Elsevier Ltd. All rights reserved.
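
    The N-CSTR picture underlying the Pseg method can be illustrated with the standard tanks-in-series result (a sketch assuming first-order Chick-Watson inactivation and no disinfectant decay, which the paper's method refines): each of N equal tanks sharing a total residence time t attenuates the surviving fraction by 1/(1 + kCt/N).

        def n_cstr_survival(k, C, t, N):
            """Surviving fraction after N equal CSTRs in series, with first-order
            Chick-Watson inactivation (rate k*C) and total residence time t.
            Assumes constant disinfectant concentration (no decay)."""
            return (1.0 + k * C * t / N) ** (-N)

        # As N grows, the series approaches the plug-flow limit exp(-k*C*t).
        for N in (1, 2, 5, 20, 100):
            print(N, n_cstr_survival(k=0.5, C=1.0, t=10.0, N=N))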

  17. Using Classification and Regression Trees (CART) and random forests to analyze attrition: Results from two simulations.

    PubMed

    Hayes, Timothy; Usami, Satoshi; Jacobucci, Ross; McArdle, John J

    2015-12-01

    In this article, we describe a recent development in the analysis of attrition: using classification and regression trees (CART) and random forest methods to generate inverse sampling weights. These flexible machine learning techniques have the potential to capture complex nonlinear, interactive selection models, yet to our knowledge, their performance in the missing data analysis context has never been evaluated. To assess the potential benefits of these methods, we compare their performance with commonly employed multiple imputation and complete case techniques in 2 simulations. These initial results suggest that weights computed from pruned CART analyses performed well in terms of both bias and efficiency when compared with other methods. We discuss the implications of these findings for applied researchers. (c) 2015 APA, all rights reserved.
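
    A minimal sketch of the weighting idea described above (illustrative, not the authors' code): fit a pruned classification tree to the response indicator, then weight each completer by the inverse of the predicted response probability. The ccp_alpha pruning parameter and the clipping floor below are assumptions.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(2)
        X = rng.normal(size=(2000, 4))                 # baseline covariates (synthetic)
        # Nonlinear, interactive selection model for responding at follow-up.
        p_respond = 1 / (1 + np.exp(-(0.5 * X[:, 0] - X[:, 1] * X[:, 2])))
        responded = rng.random(2000) < p_respond

        # Pruned CART model of the response process (ccp_alpha does the pruning here).
        tree = DecisionTreeClassifier(ccp_alpha=0.005, random_state=0).fit(X, responded)
        p_hat = tree.predict_proba(X)[:, 1].clip(0.05, 1.0)   # floor avoids huge weights

        weights = np.where(responded, 1.0 / p_hat, 0.0)  # inverse sampling weights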

  18. Using Classification and Regression Trees (CART) and Random Forests to Analyze Attrition: Results From Two Simulations

    PubMed Central

    Hayes, Timothy; Usami, Satoshi; Jacobucci, Ross; McArdle, John J.

    2016-01-01

    In this article, we describe a recent development in the analysis of attrition: using classification and regression trees (CART) and random forest methods to generate inverse sampling weights. These flexible machine learning techniques have the potential to capture complex nonlinear, interactive selection models, yet to our knowledge, their performance in the missing data analysis context has never been evaluated. To assess the potential benefits of these methods, we compare their performance with commonly employed multiple imputation and complete case techniques in 2 simulations. These initial results suggest that weights computed from pruned CART analyses performed well in terms of both bias and efficiency when compared with other methods. We discuss the implications of these findings for applied researchers. PMID:26389526

  19. Comparison between beamforming and super resolution imaging algorithms for non-destructive evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, Chengguang; Drinkwater, Bruce W.

    In this paper the performance of the total focusing method is compared with the widely used time-reversal MUSIC super resolution technique. The algorithms are tested with simulated and experimental ultrasonic array data, each containing different noise levels. The simulated time domain signals allow the effects of array geometry, frequency, scatterer location, scatterer size, scatterer separation and random noise to be carefully controlled. The performance of the imaging algorithms is evaluated in terms of resolution and sensitivity to random noise. It is shown that for the low noise situation, time-reversal MUSIC provides enhanced lateral resolution when compared to the total focusing method. However, for higher noise levels, the total focusing method shows robustness, whilst the performance of time-reversal MUSIC is significantly degraded.
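
    For reference, the total focusing method itself is a delay-and-sum over the full matrix of transmit-receive pairs; a minimal sketch (array layout, sampling rate fs and wave speed c are assumed inputs, and envelope detection is omitted):

        import numpy as np

        def tfm_image(fmc, elems_x, fs, c, grid_x, grid_z):
            """Total focusing method: delay-and-sum of full matrix capture
            data fmc[tx, rx, t] onto an image grid (elements lie on z = 0)."""
            n_el = len(elems_x)
            n_t = fmc.shape[-1]
            tx, rx = np.meshgrid(np.arange(n_el), np.arange(n_el), indexing="ij")
            img = np.zeros((len(grid_z), len(grid_x)))
            for iz, z in enumerate(grid_z):
                for ix, x in enumerate(grid_x):
                    d = np.hypot(elems_x - x, z)          # element-to-pixel distances
                    tof = (d[:, None] + d[None, :]) / c   # transmit + receive time of flight
                    idx = np.clip(np.round(tof * fs).astype(int), 0, n_t - 1)
                    img[iz, ix] = abs(fmc[tx, rx, idx].sum())
            return img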

  20. Measuring Link-Resolver Success: Comparing 360 Link with a Local Implementation of WebBridge

    ERIC Educational Resources Information Center

    Herrera, Gail

    2011-01-01

    This study reviewed link resolver success comparing 360 Link and a local implementation of WebBridge. Two methods were used: (1) comparing article-level access and (2) examining technical issues for 384 randomly sampled OpenURLs. Google Analytics was used to collect user-generated OpenURLs. For both methods, 360 Link out-performed the local…

  1. Comparison of image segmentation of lungs using methods: connected threshold, neighborhood connected, and threshold level set segmentation

    NASA Astrophysics Data System (ADS)

    Amanda, A. R.; Widita, R.

    2016-03-01

    The aim of this research is to compare several lung image segmentation methods based on performance evaluation parameters (mean square error (MSE) and peak signal-to-noise ratio (PSNR)). The methods compared were connected threshold, neighborhood connected, and threshold level set segmentation on images of the lungs. These three methods require one important parameter, i.e., the threshold; the threshold interval was obtained from the histogram of the original image. The software used to segment the images was InsightToolkit-4.7.0 (ITK). Five lung images were analyzed, and the results were compared using performance evaluation parameters computed in MATLAB. A segmentation method is considered to be of good quality if it has the smallest MSE value and the highest PSNR. The results show that connected threshold gave the best result for four of the five sample images, while threshold level set segmentation was best for one. Therefore, it can be concluded that the connected threshold method is better than the other two methods for these cases.
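
    The two evaluation parameters are standard; a minimal sketch of their computation (the peak value of 255 assumes 8-bit images):

        import numpy as np

        def mse_psnr(reference, segmented, peak=255.0):
            """Mean square error and peak signal-to-noise ratio between two images."""
            a = np.asarray(reference, dtype=float)
            b = np.asarray(segmented, dtype=float)
            mse = np.mean((a - b) ** 2)
            psnr = 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else float("inf")
            return mse, psnr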

  2. Level of functional capacities following soccer-specific warm-up methods among elite collegiate soccer players.

    PubMed

    Vazini Taher, Amir; Parnow, Abdolhossein

    2017-05-01

    Different methods of warm-up may have implications for improving various aspects of soccer performance. The present study aimed to investigate the acute effects of soccer-specific warm-up protocols on functional performance tests. This study, using a randomized within-subject design, investigated the performance of 22 elite collegiate soccer players following soccer-specific warm-ups using dynamic stretching, static stretching, and the FIFA 11+ program. Post warm-up examinations consisted of: 1) Illinois Agility Test; 2) vertical jump; 3) 30 meter sprint; 4) consecutive turns; 5) flexibility of the knee. Vertical jump performance was significantly lower following static stretching, as compared to dynamic stretching (P=0.005). Sprint performance declined significantly following static stretching as compared to FIFA 11+ (P=0.023). Agility time was significantly faster following dynamic stretching as compared to FIFA 11+ (P=0.001) and static stretching (P=0.001). Knee flexibility scores were significantly improved following static stretching as compared to dynamic stretching (P=0.016). No significant difference was observed for consecutive turns among the three warm-up protocols. The present findings showed that a soccer-specific warm-up protocol based on dynamic stretching is preferable for enhancing performance compared to protocols relying on static stretches and the FIFA 11+ program. While different soccer-specific warm-up protocols affect performance in different ways, the acute benefits of dynamic stretching on performance in elite soccer players appear assured; the usefulness of static stretching for reducing muscle stiffness is, however, also demonstrated.

  3. Image Retrieval using Integrated Features of Binary Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Agarwal, Megha; Maheshwari, R. P.

    2011-12-01

    In this paper a new approach for image retrieval is proposed with the application of the binary wavelet transform. This new approach facilitates feature calculation by integrating histogram and correlogram features extracted from binary wavelet subbands. Experiments are performed to evaluate and compare the performance of the proposed method against the published literature. It is verified that the average precision and average recall of the proposed method (69.19%, 41.78%) are significantly improved compared to the optimal quantized wavelet correlogram (OQWC) [6] (64.3%, 38.00%) and the Gabor wavelet correlogram (GWC) [10] (64.1%, 40.6%). All the experiments are performed on the Corel 1000 natural image database [20].

  4. Seventy-meter antenna performance predictions: GTD analysis compared with traditional ray-tracing methods

    NASA Technical Reports Server (NTRS)

    Schredder, J. M.

    1988-01-01

    A comparative analysis was performed, using both the Geometrical Theory of Diffraction (GTD) and traditional pathlength error analysis techniques, for predicting RF antenna gain performance and pointing corrections. The NASA/JPL 70 meter antenna with its shaped surface was analyzed for gravity loading over the range of elevation angles. Also analyzed were the effects of lateral and axial displacements of the subreflector. Significant differences were noted between the predictions of the two methods, in the effect of subreflector displacements, and in the optimal subreflector positions to focus a gravity-deformed main reflector. The results are of relevance to future design procedures.

  5. Decomposing Achievement Gaps among OECD Countries

    ERIC Educational Resources Information Center

    Zhang, Liang; Lee, Kristen A.

    2011-01-01

    In this study, we use decomposition methods on PISA 2006 data to compare student academic performance across OECD countries. We first establish an empirical model to explain the variation in academic performance across individuals, and then use the Oaxaca-Blinder decomposition method to decompose the achievement gap between each of the OECD…
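
    The Oaxaca-Blinder method named above splits a mean achievement gap into a part explained by differences in observed characteristics and an unexplained part attributed to differing returns. A minimal two-fold sketch (synthetic inputs; using group B's coefficients as the reference is one of several conventions):

        import numpy as np

        def oaxaca_blinder(X_a, y_a, X_b, y_b):
            """Two-fold Oaxaca-Blinder decomposition of the mean outcome gap,
            using group B's coefficients as the reference structure."""
            Xa = np.column_stack([np.ones(len(X_a)), X_a])    # add intercepts
            Xb = np.column_stack([np.ones(len(X_b)), X_b])
            beta_a, *_ = np.linalg.lstsq(Xa, y_a, rcond=None)
            beta_b, *_ = np.linalg.lstsq(Xb, y_b, rcond=None)
            xbar_a, xbar_b = Xa.mean(axis=0), Xb.mean(axis=0)
            explained = (xbar_a - xbar_b) @ beta_b            # endowment (composition) part
            unexplained = xbar_a @ (beta_a - beta_b)          # coefficient (returns) part
            return explained, unexplained                     # sums to mean(y_a) - mean(y_b)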

  6. Systematic Comparison of Label-Free, Metabolic Labeling, and Isobaric Chemical Labeling for Quantitative Proteomics on LTQ Orbitrap Velos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Zhou; Adams, Rachel M; Chourey, Karuna

    2012-01-01

    A variety of quantitative proteomics methods have been developed, including label-free, metabolic labeling, and isobaric chemical labeling using iTRAQ or TMT. Here, these methods were compared in terms of the depth of proteome coverage, quantification accuracy, precision, and reproducibility using a high-performance hybrid mass spectrometer, LTQ Orbitrap Velos. Our results show that (1) the spectral counting method provides the deepest proteome coverage for identification, but its quantification performance is worse than labeling-based approaches, especially the quantification reproducibility; (2) metabolic labeling and isobaric chemical labeling are capable of accurate, precise, and reproducible quantification and provide deep proteome coverage for quantification. Isobaric chemical labeling surpasses metabolic labeling in terms of quantification precision and reproducibility; (3) iTRAQ and TMT perform similarly in all aspects compared in the current study using a CID-HCD dual scan configuration. Based on the unique advantages of each method, we provide guidance for selection of the appropriate method for a quantitative proteomics study.

  7. Structure and weights optimisation of a modified Elman network emotion classifier using hybrid computational intelligence algorithms: a comparative study

    NASA Astrophysics Data System (ADS)

    Sheikhan, Mansour; Abbasnezhad Arabi, Mahdi; Gharavian, Davood

    2015-10-01

    Artificial neural networks are efficient models in pattern recognition applications, but their performance is dependent on employing suitable structure and connection weights. This study used a hybrid method for obtaining the optimal weight set and architecture of a recurrent neural emotion classifier based on gravitational search algorithm (GSA) and its binary version (BGSA), respectively. By considering the features of speech signal that were related to prosody, voice quality, and spectrum, a rich feature set was constructed. To select more efficient features, a fast feature selection method was employed. The performance of the proposed hybrid GSA-BGSA method was compared with similar hybrid methods based on particle swarm optimisation (PSO) algorithm and its binary version, PSO and discrete firefly algorithm, and hybrid of error back-propagation and genetic algorithm that were used for optimisation. Experimental tests on Berlin emotional database demonstrated the superior performance of the proposed method using a lighter network structure.

  8. Comparative effectiveness of instructional methods: oral and pharyngeal cancer examination.

    PubMed

    Clark, Nereyda P; Marks, John G; Sandow, Pamela R; Seleski, Christine E; Logan, Henrietta L

    2014-04-01

    This study compared the effectiveness of different methods of instruction for the oral and pharyngeal cancer examination. Thirty sophomore students at the University of Florida College of Dentistry were randomly assigned to three training groups: video instruction, faculty-led hands-on instruction, or both video and hands-on instruction. The training intervention involved attending two sessions spaced two weeks apart. The first session used a pretest to assess students' baseline didactic knowledge and clinical examination technique. The second session utilized two posttests to assess the comparative effectiveness of the training methods on didactic knowledge and clinical technique. The key finding was that students performed the clinical examination significantly better with the combination of video and faculty-led hands-on instruction (p<0.01). All students improved their clinical exam skills, knowledge, and confidence in performing the oral and pharyngeal cancer examination, independent of which training group they were assigned to. Utilizing both video and interactive practice promoted greater performance of the clinical technique on the oral and pharyngeal cancer examination.

  9. Comparison of adaptive critic-based and classical wide-area controllers for power systems.

    PubMed

    Ray, Swakshar; Venayagamoorthy, Ganesh Kumar; Chaudhuri, Balarko; Majumder, Rajat

    2008-08-01

    An adaptive critic design (ACD)-based damping controller is developed for a thyristor-controlled series capacitor (TCSC) installed in a power system with multiple poorly damped interarea modes. The performance of this ACD computational intelligence-based method is compared with two classical techniques, which are observer-based state-feedback (SF) control and linear matrix inequality (LMI)-H(infinity) robust control. Remote measurements are used as feedback signals to the wide-area damping controller for modulating the compensation of the TCSC. The classical methods use a linearized model of the system whereas the ACD method is purely measurement-based, leading to a nonlinear controller with fixed parameters. A comparative analysis of the controllers' performances is carried out under different disturbance scenarios. The ACD-based design has shown promising performance with very little knowledge of the system compared to classical model-based controllers. This paper also discusses the advantages and disadvantages of ACDs, SF, and LMI-H(infinity).

  10. Comparative field permeability measurement of permeable pavements using ASTM C1701 and NCAT permeameter methods.

    PubMed

    Li, Hui; Kayhanian, Masoud; Harvey, John T

    2013-03-30

    Fully permeable pavement is gradually gaining support as an alternative best management practice (BMP) for stormwater runoff management. As the use of these pavements increases, a definitive test method is needed to measure hydraulic performance and to evaluate clogging, both for performance studies and for assessment of permeability for construction quality assurance and maintenance needs assessment. Two of the most commonly used permeability measurement tests for porous asphalt and pervious concrete are the National Center for Asphalt Technology (NCAT) permeameter and ASTM C1701, respectively. This study was undertaken to compare measured values for both methods in the field on a variety of permeable pavements used in current practice. The field measurements were performed using six experimental section designs with different permeable pavement surface types including pervious concrete, porous asphalt and permeable interlocking concrete pavers. Multiple measurements were performed at five locations on each pavement test section. The results showed that: (i) silicone gel is a superior sealing material to prevent water leakage compared with conventional plumbing putty; (ii) both methods (NCAT and ASTM) can effectively be used to measure the permeability of all pavement types and the surface material type will not impact the measurement precision; (iii) the permeability values measured with the ASTM method were 50-90% (75% on average) lower than those measured with the NCAT method; (iv) the larger permeameter cylinder diameter used in the ASTM method improved the reliability and reduced the variability of the measured permeability. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Performance of the Effective Core Potentials of Ca, Hg and Pb in Complexes with Ligands Containing N and O Donor Atoms.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramirez, Jose Z.; Vargas, Rubicelia; Garza, Jorge

    This paper presents a systematic study of the performance of the relativistic effective core potentials (RECPs) proposed by Stoll-Preuss, Christiansen-Ermler and Hay-Wadt for Ca2+, Hg2+ and Pb2+. The RECP performance is studied when these cations are combined with ethylene glycol, 2-aminoethanol and ethylenediamine to form bidentate complexes. First, the description of the bidentate ligands is analyzed with the Kohn-Sham method using the SVWN, BLYP and B3LYP exchange-correlation functionals, and the results are compared with Moeller-Plesset perturbation theory (MP2); for all these methods the TZVP basis set was used. We found that the BLYP exchange-correlation functional gives results similar to those obtained by the B3LYP and MP2 methods. Thus, the bidentate metal complexes were studied with the BLYP method combined with the RECPs. In order to compare RECP performance, all the systems considered in this work were studied with the relativistic all-electron Douglas-Kroll (DK3) method. We observed that the Christiansen-Ermler RECPs give the best energetic and geometrical description for Ca and Hg complexes when compared with the all-electron method. For Pb complexes the spin-orbit interaction and the basis set superposition error must be taken into account in the RECP. In general, the trend shown in the complexation energies with the all-electron method is followed by the complexation energies computed with all the pseudopotentials tested in this work. Battelle operates PNNL for the USDOE.

  12. Input respiratory impedance in mice: comparison between the flow-based and the wavetube method to perform the forced oscillation technique.

    PubMed

    Mori, V; Oliveira, M A; Vargas, M H M; da Cunha, A A; de Souza, R G; Pitrez, P M; Moriya, H T

    2017-06-01

    Objective and approach: In this study, we estimated the constant phase model (CPM) parameters from the respiratory impedance of male BALB/c mice by performing the forced oscillation technique (FOT) in a control group (n = 8) and in a murine model of asthma (OVA) (n = 10). We then compared the results obtained by two different methods, using a commercial device (flexiVent-flexiWare 7.X; SCIREQ, Montreal, Canada) (FXV) and a wavetube method device (Sly et al 2003 J. Appl. Physiol. 94 1460-6) (WVT), since results from different methods may not be directly comparable. First, we compared the results by performing a two-way analysis of variance (ANOVA) for resistance, elastance and tissue damping. We found statistically significant differences in all CPM parameters, except for resistance, when comparing the Control and OVA groups. When comparing devices, we found statistically significant differences in resistance, while differences in elastance were not observed. For tissue damping, the results from WVT were observed to be higher than those from FXV. Finally, when comparing the relative variation between the CPM parameters of the Control and OVA groups in both devices, no significant differences were observed for any parameter. We therefore conclude that this assessment can compensate for the effect of using different cannulas. Furthermore, tissue damping differences between groups can be compensated for, since bronchoconstrictors were not used. Therefore, we believe that relative variations in the results between groups can serve as a comparison parameter when using different equipment without bronchoconstrictor administration.
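
    For context, the constant phase model commonly used with the forced oscillation technique writes respiratory input impedance as Z(f) = R + j*2*pi*f*I + (G - jH)/(2*pi*f)^alpha, with alpha = (2/pi)*arctan(H/G). The sketch below simply evaluates this form (parameter values are illustrative, not the study's estimates):

        import numpy as np

        def cpm_impedance(f, R, I, G, H):
            """Constant phase model of respiratory input impedance:
            Z(f) = R + j*2*pi*f*I + (G - j*H) / (2*pi*f)**alpha,
            with alpha = (2/pi) * arctan(H / G)."""
            w = 2 * np.pi * np.asarray(f, dtype=float)
            alpha = (2 / np.pi) * np.arctan2(H, G)
            return R + 1j * w * I + (G - 1j * H) / w ** alpha

        f = np.linspace(1, 20, 50)                           # oscillation frequencies (Hz)
        Z = cpm_impedance(f, R=0.3, I=0.01, G=3.0, H=15.0)   # illustrative parameter values
        print(Z[:3])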

  13. Acting Irrationally to Improve Performance in Stochastic Worlds

    NASA Astrophysics Data System (ADS)

    Belavkin, Roman V.

    Despite many theories and algorithms for decision-making, after estimating the utility function the choice is usually made by maximising its expected value (the max EU principle). This traditional and 'rational' conclusion of the decision-making process is compared in this paper with several 'irrational' techniques that make the choice in Monte-Carlo fashion. The comparison is made by evaluating the performance of simple decision-theoretic agents in stochastic environments. It is shown that not only can the random choice strategies achieve performance comparable to the max EU method, but under certain conditions the Monte-Carlo choice methods perform almost two times better than max EU. The paper concludes by quoting evidence from recent cognitive modelling work as well as the famous decision-making paradoxes.
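
    A toy version of this comparison (a loose illustration, not the paper's agents or environments): both agents keep running utility estimates from noisy rewards; the max EU agent always picks the current argmax, while the Monte-Carlo agent samples its choice with probability proportional to the exponentiated estimates, so it keeps exploring.

        import numpy as np

        rng = np.random.default_rng(3)
        true_means = np.array([0.4, 0.6])          # true utilities, unknown to the agents

        def run(choose, steps=1000):
            est, n, total = np.zeros(2), np.zeros(2), 0.0
            for _ in range(steps):
                a = choose(est)
                r = rng.normal(true_means[a], 1.0)  # noisy reward
                n[a] += 1
                est[a] += (r - est[a]) / n[a]       # incremental mean update
                total += r
            return total

        def max_eu(est):                            # always exploit the current estimate
            return int(np.argmax(est))

        def monte_carlo(est):                       # sample with prob. ~ exp(estimate)
            p = np.exp(est - est.max())
            return int(rng.choice(2, p=p / p.sum()))

        print("max EU     :", np.mean([run(max_eu) for _ in range(50)]))
        print("Monte-Carlo:", np.mean([run(monte_carlo) for _ in range(50)]))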

  14. Study on multiple-hops performance of MOOC sequences-based optical labels for OPS networks

    NASA Astrophysics Data System (ADS)

    Zhang, Chongfu; Qiu, Kun; Ma, Chunli

    2009-11-01

    In this paper, we derive the probability function of MOOCS-OPS networks under the independent case of multiple optical orthogonal codes, discuss the performance characteristics for a variety of parameters, and compare characteristics of systems employing optical labels based on a single optical orthogonal code with those based on multiple optical orthogonal code sequences. The performance of the system is also calculated, and our results verify that the method is effective. Additionally, it is found that the performance of MOOCS-OPS networks is worse than that of single optical orthogonal code-based optical labels for optical packet switching (SOOC-OPS); however, MOOCS-OPS networks can greatly enlarge the scalability of optical packet switching networks.

  15. Effects of Elastic Resistance Exercise on Muscle Strength and Functional Performance in Healthy Adults: A Systematic Review and Meta-Analysis.

    PubMed

    de Oliveira, Poliana Alves; Blasczyk, Juscelino Castro; Souza Junior, Gerson; Lagoa, Karina Ferreira; Soares, Milene; de Oliveira, Ricardo Jacó; Filho, Paulo José Barbosa Gutierres; Carregaro, Rodrigo Luiz; Martins, Wagner Rodrigues

    2017-04-01

    Elastic Resistance Exercise (ERE) has already demonstrated its effectiveness in older adults and, when combined with the resistance generated by fixed loads, in adults. This review summarizes the effectiveness of ERE performed as an isolated method on muscle strength and functional performance in healthy adults. A database search was performed (MEDLine, Cochrane Library, PEDro and Web of Knowledge) to identify controlled clinical trials in the English language. The mean difference (MD) with 95% confidence intervals (CIs) and overall effect size were calculated for all comparisons. The PEDro scale was used to assess methodological quality. Of the 93 articles identified by the search strategy, 5 met the inclusion criteria, of which 3 presented high quality (PEDro > 6). Meta-analyses demonstrated that the effects of ERE were superior when compared with passive control on functional performance and muscle strength. When compared with active controls, the effect of ERE was inferior for functional performance and similar for muscle strength. ERE is effective for improving functional performance and muscle strength, compared with no intervention, in healthy adults, but is not superior to other methods of resistance training for improving functional performance and muscle strength in healthy adults.

  16. Comparison of the convolution quadrature method and enhanced inverse FFT with application in elastodynamic boundary element method

    NASA Astrophysics Data System (ADS)

    Schanz, Martin; Ye, Wenjing; Xiao, Jinyou

    2016-04-01

    Transient problems can often be solved with transformation methods, where the inverse transformation is usually performed numerically. Here, the discrete Fourier transform in combination with the exponential window method is compared with the convolution quadrature method formulated as an inverse transformation. Both are inverse Laplace transforms, which are formally identical but use different complex frequencies. A numerical study is performed, first with simple convolution integrals and, second, with a boundary element method (BEM) for elastodynamics. Essentially, when combined with the BEM, the discrete Fourier transform needs fewer frequency calculations, but a finer mesh compared to the convolution quadrature method to obtain the same level of accuracy. If fast methods like the fast multipole method are further used to accelerate the boundary element method, the convolution quadrature method is better, because the iterative solver needs far fewer iterations to converge. This is caused by the larger real part of the complex frequencies necessary for the calculation, which improves the conditioning of the system matrix.

  17. Comparing the Performance of Two Dynamic Load Distribution Methods

    NASA Technical Reports Server (NTRS)

    Kale, L. V.

    1987-01-01

    Parallel processing of symbolic computations on a message-passing multi-processor presents one challenge: to effectively utilize the available processors, the load must be distributed uniformly to all the processors. However, the structure of these computations cannot be predicted in advance, so static scheduling methods are not applicable. In this paper, we compare the performance of two dynamic, distributed load balancing methods with extensive simulation studies. The two schemes are: the Contracting Within a Neighborhood (CWN) scheme proposed by us, and the Gradient Model proposed by Lin and Keller. We conclude that although simpler, the CWN is significantly more effective at distributing the work than the Gradient Model.

  18. Comparison of Basic and Ensemble Data Mining Methods in Predicting 5-Year Survival of Colorectal Cancer Patients.

    PubMed

    Pourhoseingholi, Mohamad Amin; Kheirian, Sedigheh; Zali, Mohammad Reza

    2017-12-01

    Colorectal cancer (CRC) is one of the most common malignancies and causes of cancer mortality worldwide. Given the importance of predicting the survival of CRC patients and the growing use of data mining methods, this study aims to compare the performance of models for predicting 5-year survival of CRC patients using a variety of basic and ensemble data mining methods. The CRC dataset from the Shahid Beheshti University of Medical Sciences Research Center for Gastroenterology and Liver Diseases was used for prediction and comparative study of the basic and ensemble data mining techniques. Feature selection methods were used to select predictor attributes for classification. The WEKA toolkit and MedCalc software were used for creating and comparing the models, respectively. The obtained results showed that the predictive performance of the developed models was altogether high (all greater than 90%). Overall, the performance of ensemble models was higher than that of basic classifiers, and the best result was achieved by the ensemble voting model in terms of area under the ROC curve (AUC = 0.96). AUC comparison of the models showed that the ensemble voting method significantly outperformed all models except the Random Forest (RF) and Bayesian Network (BN) methods, considering their overlapping 95% confidence intervals. This result may indicate the high predictive power of these two methods, along with ensemble voting, for predicting 5-year survival of CRC patients.

  19. Detection of food intake from swallowing sequences by supervised and unsupervised methods.

    PubMed

    Lopez-Meyer, Paulo; Makeyev, Oleksandr; Schuckers, Stephanie; Melanson, Edward L; Neuman, Michael R; Sazonov, Edward

    2010-08-01

    Studies of food intake and ingestive behavior in free-living conditions most often rely on self-reporting-based methods that can be highly inaccurate. Methods of Monitoring of Ingestive Behavior (MIB) rely on objective measures derived from chewing and swallowing sequences and thus can be used for unbiased study of food intake with free-living conditions. Our previous study demonstrated accurate detection of food intake in simple models relying on observation of both chewing and swallowing. This article investigates methods that achieve comparable accuracy of food intake detection using only the time series of swallows and thus eliminating the need for the chewing sensor. The classification is performed for each individual swallow rather than for previously used time slices and thus will lead to higher accuracy in mass prediction models relying on counts of swallows. Performance of a group model based on a supervised method (SVM) is compared to performance of individual models based on an unsupervised method (K-means) with results indicating better performance of the unsupervised, self-adapting method. Overall, the results demonstrate that highly accurate detection of intake of foods with substantially different physical properties is possible by an unsupervised system that relies on the information provided by the swallowing alone.

  20. Detection of Food Intake from Swallowing Sequences by Supervised and Unsupervised Methods

    PubMed Central

    Lopez-Meyer, Paulo; Makeyev, Oleksandr; Schuckers, Stephanie; Melanson, Edward L.; Neuman, Michael R.; Sazonov, Edward

    2010-01-01

    Studies of food intake and ingestive behavior in free-living conditions most often rely on self-reporting-based methods that can be highly inaccurate. Methods of Monitoring of Ingestive Behavior (MIB) rely on objective measures derived from chewing and swallowing sequences and thus can be used for unbiased study of food intake with free-living conditions. Our previous study demonstrated accurate detection of food intake in simple models relying on observation of both chewing and swallowing. This article investigates methods that achieve comparable accuracy of food intake detection using only the time series of swallows and thus eliminating the need for the chewing sensor. The classification is performed for each individual swallow rather than for previously used time slices and thus will lead to higher accuracy in mass prediction models relying on counts of swallows. Performance of a group model based on a supervised method (SVM) is compared to performance of individual models based on an unsupervised method (K-means) with results indicating better performance of the unsupervised, self-adapting method. Overall, the results demonstrate that highly accurate detection of intake of foods with substantially different physical properties is possible by an unsupervised system that relies on the information provided by the swallowing alone. PMID:20352335

  1. Comparison of gravimetric and gas chromatographic methods for assessing performance of textile materials against liquid pesticide penetration.

    PubMed

    Shaw, Anugrah; Abbi, Ruchika

    2004-01-01

    Penetration of liquid pesticides through textile materials is a criterion for determining the performance of protective clothing used by pesticide handlers. The pipette method is frequently used to apply liquid pesticides onto textile materials to measure penetration. Typically, analytical techniques such as gas chromatography (GC) are used to measure percentage penetration. These techniques are labor intensive and costly. A simpler gravimetric method was developed, and tests were conducted to compare the gravimetric and GC methods of analysis. Three types of pesticide formulations and 4 fabrics were used for the study. Diluted pesticide formulations were pipetted onto the test specimens and percentage penetration was measured using the 2 methods. For the homogeneous formulation, the results of the two methods were fairly comparable. However, due to the filtering action of the textile materials, there were differences in percentage penetration between the 2 methods for formulations that were not homogeneous.

  2. Comparative Robustness of Recent Methods for Analyzing Multivariate Repeated Measures Designs

    ERIC Educational Resources Information Center

    Seco, Guillermo Vallejo; Gras, Jaime Arnau; Garcia, Manuel Ato

    2007-01-01

    This study evaluated the robustness of two recent methods for analyzing multivariate repeated measures when the assumptions of covariance homogeneity and multivariate normality are violated. Specifically, the authors' work compares the performance of the modified Brown-Forsythe (MBF) procedure and the mixed-model procedure adjusted by the…

  3. Comparing the Methodologies in ASTM G198 Using Combined Hygrothermal-Corrosion Modeling

    Treesearch

    Samuel L. Zelinka

    2013-01-01

    ASTM G198, “Standard test method for determining the relative corrosion performance of driven fasteners in contact with treated wood,” was accepted by consensus and published in 2011. The method has two different exposure conditions for determining fastener corrosion performance in treated wood. The first method places the wood and embedded fasteners in a...

  4. RRCRank: a fusion method using rank strategy for residue-residue contact prediction.

    PubMed

    Jing, Xiaoyang; Dong, Qiwen; Lu, Ruqian

    2017-09-02

    In the structural biology area, protein residue-residue contacts play a crucial role in protein structure prediction. Some researchers have found that predicted residue-residue contacts can effectively constrain the conformational search space, which is significant for de novo protein structure prediction. Over the last few decades, researchers have developed various methods to predict residue-residue contacts; in particular, strong performance has been achieved with fusion methods in recent years. In this work, a novel fusion method based on a rank strategy is proposed to predict contacts. Unlike the traditional regression or classification strategies, the contact prediction task is regarded as a ranking task. Two kinds of features are extracted from correlated mutations methods and ensemble machine-learning classifiers, and the proposed method then uses a learning-to-rank algorithm to predict the contact probability of each residue pair. First, we performed benchmark tests of the proposed fusion method (RRCRank) on the CASP11 and CASP12 datasets. The results show that the RRCRank method outperforms other well-developed methods, especially for medium and short range contacts. Second, in order to verify the superiority of the ranking strategy, we predicted contacts using the traditional regression and classification strategies based on the same features as the ranking strategy. Compared with these two traditional strategies, the proposed ranking strategy shows better performance for all three contact types, in particular for long range contacts. Third, the proposed RRCRank was compared with several state-of-the-art methods in CASP11 and CASP12. The results show that RRCRank achieves comparable prediction precision and is better than three methods in most assessment metrics. The learning-to-rank algorithm is introduced to develop a novel rank-based method for residue-residue contact prediction of proteins, which achieves state-of-the-art performance based on the extensive assessment.

  5. Comparability and repeatability of three commonly used methods for measuring endurance capacity.

    PubMed

    Baxter-Gilbert, James; Mühlenhaupt, Max; Whiting, Martin J

    2017-12-01

    Measures of endurance (time to exhaustion) have been used to address a wide range of questions in ecomorphological and physiological research, as well as being used as a proxy for survival and fitness. Swimming, stationary (circular) track running, and treadmill running are all commonly used methods for measuring endurance. Despite the use of these methods across a broad range of taxa, how comparable these methods are to one another, and whether they are biologically relevant, is rarely examined. We used Australian water dragons (Intellagama lesueurii), a species that is morphologically adept at climbing, swimming, and running, to compare these three methods of endurance and examined if there is repeatability within and between trial methods. We found that time to exhaustion was not highly repeatable within a method, suggesting that single measures or a mean time to exhaustion across trials are not appropriate. Furthermore, we compared mean maximal endurance times among the three methods, and found that the two running methods (i.e., stationary track and treadmill) were similar, but swimming was distinctly different, resulting in lower mean maximal endurance times. Finally, an individual's endurance rank was not repeatable across methods, suggesting that the three endurance trial methods are not providing similar information about an individual's performance capacity. Overall, these results highlight the need to carefully match a measure of performance capacity with the study species and the research questions being asked so that the methods being used are behaviorally, ecologically, and physiologically relevant. © 2018 Wiley Periodicals, Inc.

  6. Comparison of glomerular filtration rate determined by use of single-slice dynamic computed tomography and scintigraphy in cats.

    PubMed

    Schmidt, David M; Scrivani, Peter V; Dykes, Nathan L; Goldstein, Richard M; Erb, Hollis N; Reeves, Anthony P

    2012-04-01

    To compare estimation of glomerular filtration rate determined via conventional methods (ie, scintigraphy and plasma clearance of technetium Tc 99m pentetate) and dynamic single-slice computed tomography (CT). 8 healthy adult cats. Scintigraphy, plasma clearance testing, and dynamic CT were performed on each cat on the same day; order of examinations was randomized. Separate observers performed GFR calculations for scintigraphy, plasma clearance testing, or dynamic CT. Methods were compared via Bland-Altman plots and considered interchangeable and acceptable when the 95% limits of agreement (mean difference between methods ± 1.96 SD of the differences) were ≤ 0.7 mL/min/kg. Global GFR differed < 0.7 mL/min/kg in 5 of 8 cats when comparing plasma clearance testing and dynamic CT; the limits of agreement were 1.4 and -1.7 mL/min/kg. The mean ± SD difference was -0.2 ± 0.8 mL/min/kg, and the maximum difference was 1.6 mL/min/kg. The mean ± SD difference (absolute value) for percentage filtration by individual kidneys was 2.4 ± 10.5% when comparing scintigraphy and dynamic CT; the maximum difference was 20%, and the limits of agreement were 18% and 23% (absolute value). GFR estimation via dynamic CT exceeded the definition for acceptable clinical use, compared with results for conventional methods, which was likely attributable to sample size and preventable technical complications. Because 5 of 8 cats had comparable values between methods, further investigation of dynamic CT in a larger sample population with a wide range of GFR values should be performed.
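
    The acceptance rule above comes straight from the Bland-Altman construction: the bias is the mean of the paired differences, and the 95% limits of agreement are bias +/- 1.96 SD of the differences. A minimal sketch with invented GFR values (not the study's data):

        import numpy as np

        def bland_altman_loa(method_1, method_2):
            """Bland-Altman bias and 95% limits of agreement
            (mean difference +/- 1.96 SD of the differences)."""
            d = np.asarray(method_1, dtype=float) - np.asarray(method_2, dtype=float)
            bias = d.mean()
            half_width = 1.96 * d.std(ddof=1)
            return bias, bias - half_width, bias + half_width

        # Illustrative GFR values (mL/min/kg) for 8 cats, not the study's data.
        dynamic_ct = [2.1, 2.8, 3.0, 2.5, 1.9, 3.3, 2.7, 2.4]
        clearance  = [2.4, 2.6, 3.4, 2.2, 2.1, 3.0, 2.9, 2.8]
        print(bland_altman_loa(dynamic_ct, clearance))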

  7. [Abdominal ultrasound and magnetic resonance imaging: a comparative study on the non-alcoholic fatty liver disease diagnosis in morbidly obese patients].

    PubMed

    Chaves, Gabriela Villaça; Pereira, Sílvia Elaine; Saboya, Carlos José; Cortes, Caroline; Ramalho, Rejane

    2009-01-01

    To evaluate the concordance between abdominal ultrasound and magnetic resonance imaging (MRI) in the diagnosis of non-alcoholic fatty liver disease (NAFLD), and the concordance of these two methods with the histopathological exam. The population studied comprised 145 patients with morbid obesity (BMI >= 40 kg/m2) of both genders. NAFLD diagnosis was performed by MRI and ultrasound, and liver biopsy was performed in a sub-sample (n=40). The kappa coefficient was used to evaluate the concordance of the methods. Concordance between the two methods (MRI and ultrasound) was poor and not significant (adjusted kappa = 0.27; 95% CI = 0.07-0.39). Nevertheless, concordance was found between the diagnosis of NAFLD by ultrasound and hepatic biopsy, with 83.3% concordant results and adjusted kappa = 0.67. Comparison of MRI with the histopathological exam showed 53.6% concordant results and adjusted kappa = 0.07. The concordance found between ultrasound diagnosis and hepatic biopsy indicates a need for further research to validate the use of ultrasound for this purpose, which would minimize the need to perform biopsies to detect and diagnose this disease.

  8. Performance Assessment of Kernel Density Clustering for Gene Expression Profile Data

    PubMed Central

    Zeng, Beiyan; Chen, Yiping P.; Smith, Oscar H.

    2003-01-01

    Kernel density smoothing techniques have been used in classification or supervised learning of gene expression profile (GEP) data, but their applications to clustering or unsupervised learning of those data have not been explored and assessed. Here we report a kernel density clustering method for analysing GEP data and compare its performance with the three most widely-used clustering methods: hierarchical clustering, K-means clustering, and multivariate mixture model-based clustering. Using several methods to measure agreement, between-cluster isolation, and within-cluster coherence, such as the Adjusted Rand Index, the Pseudo F test, the r2 test, and the profile plot, we have assessed the effectiveness of kernel density clustering for recovering clusters, and its robustness against noise, on clustering both simulated and real GEP data. Our results show that the kernel density clustering method has excellent performance in recovering clusters from simulated data and in grouping large real expression profile data sets into compact and well-isolated clusters, and that it is the most robust clustering method for analysing noisy expression profile data compared to the other three methods assessed. PMID:18629292
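
    One widely available KDE-based clustering approach is mean shift, which moves each point uphill on a kernel density estimate and groups points by the mode they converge to; the sketch below uses scikit-learn on toy data and illustrates the general idea, not the authors' algorithm.

        import numpy as np
        from sklearn.cluster import MeanShift, estimate_bandwidth

        rng = np.random.default_rng(4)
        # Toy "expression profiles": two groups of genes with distinct patterns.
        data = np.vstack([rng.normal(0.0, 0.3, size=(50, 6)),
                          rng.normal(2.0, 0.3, size=(50, 6))])

        bw = estimate_bandwidth(data, quantile=0.2)   # kernel bandwidth from the data
        labels = MeanShift(bandwidth=bw).fit_predict(data)
        print(np.bincount(labels))                    # genes assigned to each density mode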

  9. Minimum entropy deconvolution optimized sinusoidal synthesis and its application to vibration based fault detection

    NASA Astrophysics Data System (ADS)

    Li, Gang; Zhao, Qing

    2017-03-01

    In this paper, a minimum entropy deconvolution based sinusoidal synthesis (MEDSS) filter is proposed to improve the fault detection performance of the regular sinusoidal synthesis (SS) method. The SS filter is an efficient linear predictor that exploits the frequency properties during model construction. The phase information of the harmonic components is not used in the regular SS filter. However, the phase relationships are important in differentiating noise from characteristic impulsive fault signatures. Therefore, in this work, the minimum entropy deconvolution (MED) technique is used to optimize the SS filter during the model construction process. A time-weighted-error Kalman filter is used to estimate the MEDSS model parameters adaptively. Three simulation examples and a practical application case study are provided to illustrate the effectiveness of the proposed method. The regular SS method and the autoregressive MED (ARMED) method are also implemented for comparison. The MEDSS model has demonstrated superior performance compared to the regular SS method and it also shows comparable or better performance with much less computational intensity than the ARMED method.

  10. Attitudes of Middle School Students: Learning Online Compared to Face to Face

    ERIC Educational Resources Information Center

    Edwards, Clayton; Rule, Audrey

    2013-01-01

    Education in an online setting is an increasingly popular method of instruction. Previous studies comparing college or high school student performance in online and face-to-face courses found, in most cases, similar achievement between conditions. However, research is lacking regarding middle school students' academic performance and attitudes…

  11. Use of Sounding Out to Improve Spelling in Young Children

    ERIC Educational Resources Information Center

    Mann, Tracie B.; Bushell, Don, Jr.; Morris, Edward K.

    2010-01-01

    We examined the effects of teaching 5 typically developing elementary students to sound out their spelling words while writing them using the cover-copy-compare (CCC) method to practice spelling. Each student's posttest performance following practice with sounding out was compared to that student's posttest performance following practice with no…

  12. An experimental comparison of various methods of nearfield acoustic holography

    DOE PAGES

    Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.

    2017-05-19

    An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered in this study are based on: (1) the spatial Fourier transform, (2) the equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two dimensional measurements were obtained at different distances in front of a tonal sound source and the NAH methods were used to reconstruct the sound field at the source surface. Reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of the various algorithms is discussed in detail. The study also compares the computational time required by each algorithm. Four different regularization parameter choice methods were compared; the L-curve method provided more accurate reconstructions than generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed parameter regularization was comparable to that of the L-curve method.
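
    The Tikhonov regularization and L-curve parameter choice mentioned above can be sketched generically: for each candidate lambda solve min ||Ax - b||^2 + lambda^2 ||x||^2, then pick the corner of the log-log curve of solution norm versus residual norm. In the sketch below, A and b are synthetic stand-ins for the NAH propagator and hologram data, and the corner detection is a crude maximum-curvature proxy.

        import numpy as np

        def tikhonov(A, b, lam):
            """Solve min ||A x - b||^2 + lam^2 ||x||^2 via the normal equations."""
            n = A.shape[1]
            return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

        def l_curve(A, b, lams):
            """Residual and solution norms over a range of regularization parameters."""
            res, sol = [], []
            for lam in lams:
                x = tikhonov(A, b, lam)
                res.append(np.linalg.norm(A @ x - b))
                sol.append(np.linalg.norm(x))
            return np.array(res), np.array(sol)

        rng = np.random.default_rng(5)
        A = rng.normal(size=(60, 40)) @ np.diag(0.95 ** np.arange(40))  # ill-conditioned
        x_true = rng.normal(size=40)
        b = A @ x_true + 0.01 * rng.normal(size=60)    # noisy "hologram" data

        lams = np.logspace(-6, 1, 30)
        res, sol = l_curve(A, b, lams)
        lr, ls = np.log(res), np.log(sol)
        # Crude corner pick: extremum of an (unnormalized) curvature of the L-curve.
        curv = (np.gradient(lr) * np.gradient(np.gradient(ls))
                - np.gradient(ls) * np.gradient(np.gradient(lr)))
        print("chosen lambda:", lams[np.argmax(np.abs(curv))])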

  14. Comparison of Enzymatic Assay for HBA1C Measurement (Abbott Architect) With Capillary Electrophoresis (Sebia Minicap Flex Piercing Analyser).

    PubMed

    Tesija Kuna, Andrea; Dukic, Kristina; Nikolac Gabaj, Nora; Miler, Marijana; Vukasovic, Ines; Langer, Sanja; Simundic, Ana-Maria; Vrkic, Nada

    2018-03-08

    To compare the analytical performance of the enzymatic method (EM) and capillary electrophoresis (CE) for hemoglobin A1c (HbA1c) measurement. Imprecision, carryover, stability, linearity, method comparison, and interferences were evaluated for HbA1c via EM (Abbott Laboratories, Inc) and CE (Sebia). Both methods showed overall within-laboratory imprecision of less than 3% in International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) units (<2% in National Glycohemoglobin Standardization Program [NGSP] units). Carryover effects were within acceptable criteria. The linearity of both methods proved excellent (R2 = 0.999). Significant proportional and constant differences were found for EM compared with CE, but these were not clinically relevant (<5 mmol/mol; NGSP <0.5%). At the clinically relevant HbA1c concentration, the stability observed with both methods was acceptable (bias <3%). Triglyceride levels of 8.11 mmol/L or greater were found to interfere with EM, and fetal hemoglobin (HbF) of 10.6% or greater with CE. The enzymatic method proved comparable to the CE method in analytical performance; however, certain interferences can influence the measurements of each method.
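
    Method-comparison studies of this kind typically quantify constant (intercept) and proportional (slope) differences with an errors-in-both-variables regression. The snippet below is a minimal sketch of simple Deming regression under the assumption of equal error variances; it is illustrative and not necessarily the regression procedure used in this study.

        import numpy as np

        def deming(x, y, lam=1.0):
            """Deming regression for method comparison: both x and y carry
            measurement error; lam is the assumed ratio of the error
            variances of y to x (1.0 gives orthogonal regression)."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            sxx = np.var(x, ddof=1)
            syy = np.var(y, ddof=1)
            sxy = np.cov(x, y, ddof=1)[0, 1]   # assumes the methods correlate
            slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2
                     + 4 * lam * sxy ** 2)) / (2 * sxy)
            intercept = y.mean() - slope * x.mean()
            return slope, intercept            # proportional and constant bias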

  15. Rotary-wing aerodynamics. Volume 2: Performance prediction of helicopters

    NASA Technical Reports Server (NTRS)

    Keys, C. N.; Stephniewski, W. Z. (Editor)

    1979-01-01

    Applications of theory, as well as special methods and procedures for performance prediction, are illustrated first on the example of a conventional helicopter and then on winged and tandem configurations. Performance prediction of conventional helicopters in hover and vertical ascent is investigated, and various approaches to performance prediction in forward translation are presented. Performance problems are then revisited with a wing added to the baseline configuration, and both aircraft are compared with respect to their performance; this comparison is extended to a tandem configuration. Appendices on methods for estimating performance guarantees and aircraft growth conclude the volume.

  16. Promotion Factors For Enlisted Infantry Marines

    DTIC Science & Technology

    2017-06-01

    description, billet accomplishments, mission accomplishment, individual character, leadership, intellect and wisdom, fulfillment of evaluation, RS... staff sergeant. To assess which ranks proportionally promote more high-quality Marines, we compare two performance evaluation methods: proficiency and... adverse fitness reports. From the two performance evaluation methods we find that the Marine Corps promotes proportionally more high-quality Marines

  17. Statistics Student Performance and Anxiety: Comparisons in Course Delivery and Student Characteristics

    ERIC Educational Resources Information Center

    Hedges, Sarai

    2017-01-01

    The statistics education community continues to explore the differences in performance outcomes and in student attitudes between online and face-to-face delivery methods of statistics courses. In this quasi-experimental study student persistence, exam, quiz, and homework scores were compared between delivery methods, class status, and programs of…

  18. The Development of MST Test Information for the Prediction of Test Performances

    ERIC Educational Resources Information Center

    Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G.

    2017-01-01

    The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…

  19. Diagnostic performance of blood culture bottles for vitreous culture compared to conventional microbiological cultures in patients with suspected endophthalmitis.

    PubMed

    Kehrmann, Jan; Chapot, Valerie; Buer, Jan; Rating, Philipp; Bornfeld, Norbert; Steinmann, Joerg

    2018-05-01

    The purpose of this investigation was to evaluate the performance of blood culture bottles in comparison with conventional microbiological culture techniques in detecting the causative microorganisms of endophthalmitis, and to determine their anti-infective susceptibility profiles. All consecutive cases of clinically suspected endophthalmitis in a university-based ophthalmology department between January 2009 and December 2016 were analysed in this retrospective comparative case series. Samples from 247 patients with suspected endophthalmitis underwent microbiological diagnostic work-up, and all three culture methods were performed on 140 vitreous specimens. Vitreous fluid specimens were inoculated into blood culture bottles, into aerobic and anaerobic broth solutions, and onto solid media. Anti-infective susceptibility profiles were evaluated by semi-automated methods and/or gradient diffusion methods. Microorganisms were grown from 82 of the 140 specimens for which all methods were performed (59%). Microorganisms were more frequently grown from blood culture bottles (55%) than from broth solution (45%, p = 0.007) or solid media (33%, p < 0.0001). Considerable differences in performance among the culture media were detected for fungal pathogens: all grown fungi were detected by blood culture bottles (11 of 11, 100%), whereas broth solution recovered 64% and solid media 46% of grown fungi. No Gram-positive bacterium was resistant to vancomycin, and all Gram-negative pathogens except one isolate were susceptible to third-generation cephalosporins. In suspected endophthalmitis patients, blood culture bottles have a higher overall pathogen detection rate from vitreous fluid than conventional microbiological media, especially for fungi. Initial intravitreal antibiotic therapy with vancomycin plus a third-generation cephalosporin appears to be an appropriate treatment approach for bacterial endophthalmitis.

  20. Design and performance analysis of gas and liquid radial turbines

    NASA Astrophysics Data System (ADS)

    Tan, Xu

    In the first part of the research, pumps running in reverse as turbines are studied. This work uses experimental data from a wide range of pumps representing centrifugal pump configurations in terms of specific speed. Based on specific speed and specific diameter, an accurate correlation is developed to predict the performance of a centrifugal pump at its best efficiency point in turbine-mode operation. The proposed prediction method yields very good results compared with previous attempts: when compared with nine previous methods found in the literature, it is the most accurate. It can be further refined and supplemented by future tests to increase its accuracy, and it is meaningful because it is based on both specific speed and specific diameter. The second part of the research focuses on the design and analysis of a radial gas turbine. The turbine specification is obtained from a solar biogas hybrid system, which is theoretically analyzed and constructed around the purchased compressor. Theoretical analysis results in a specification of 100 lb/min mass flow, 900 °C inlet total temperature, and 1.575 atm inlet total pressure. The 1-D and 3-D geometry of the rotor is generated based on Aungier's method, and 1-D loss-model analysis and 3-D CFD simulations are performed to examine the rotor's performance. The total-to-total efficiency of the rotor is more than 90%. With the help of CFD analysis, modifications to the preliminary design yielded optimized aerodynamic performance. Finally, a theoretical performance analysis of the hybrid system is performed with the designed turbine.
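
    The correlation described above is built on the dimensionless specific speed and specific diameter at the best efficiency point. As a sketch of how those inputs are computed (the correlation coefficients themselves are not given in the abstract, and SI units are assumed):

        import numpy as np

        def specific_speed_diameter(omega, Q, gH, D):
            """Dimensionless specific speed Ns and specific diameter Ds at the
            best efficiency point. omega: shaft speed [rad/s], Q: volumetric
            flow [m^3/s], gH: specific work g*H [J/kg], D: rotor diameter [m]."""
            Ns = omega * np.sqrt(Q) / gH ** 0.75
            Ds = D * gH ** 0.25 / np.sqrt(Q)
            return Ns, Ds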

  1. Evaluation and comparison of Abbott Jaffe and enzymatic creatinine methods: Could the old method meet the new requirements?

    PubMed

    Küme, Tuncay; Sağlam, Barıs; Ergon, Cem; Sisman, Ali Rıza

    2018-01-01

    The aim of this study is to evaluate and compare the analytical performance characteristics of two creatinine methods, one based on the Jaffe reaction and one enzymatic. The two original creatinine methods were evaluated on an Architect c16000 automated analyzer in terms of limit of detection (LOD), limit of quantitation (LOQ), linearity, intra-assay and inter-assay precision, and comparability in serum and urine samples. Method comparison and bias estimation using patient samples according to the CLSI guideline were performed on 230 serum and 141 urine samples analyzed on the same auto-analyzer. The LODs were 0.1 mg/dL for both serum methods, and 0.25 and 0.07 mg/dL for the Jaffe and enzymatic urine methods, respectively. The LOQs were similar for both serum methods at 0.05 mg/dL, and the enzymatic urine method had a lower LOQ than the Jaffe urine method (0.5 and 2 mg/dL, respectively). Both methods were linear up to 65 mg/dL for serum and 260 mg/dL for urine. The intra-assay and inter-assay precision data were within desirable limits for both methods. High correlations between the two methods were found in serum and urine (r=.9994 and r=.9998, respectively). On the other hand, the Jaffe method gave higher creatinine results than the enzymatic method, especially at low concentrations in both serum and urine. Both the Jaffe and enzymatic methods were found to meet the analytical performance requirements for routine use; however, the enzymatic method showed better performance at low creatinine levels. © 2017 Wiley Periodicals, Inc.

  2. Bystander fatigue and CPR quality by older bystanders: a randomized crossover trial comparing continuous chest compressions and 30:2 compressions to ventilations.

    PubMed

    Liu, Shawn; Vaillancourt, Christian; Kasaboski, Ann; Taljaard, Monica

    2016-11-01

    This study sought to measure bystander fatigue and cardiopulmonary resuscitation (CPR) quality after five minutes of CPR using the continuous chest compression (CCC) method versus the 30:2 chest compression to ventilation method in older lay persons, the population most likely to perform CPR on cardiac arrest victims. This randomized crossover trial took place at three tertiary care hospitals and a seniors' center. Participants were aged ≥55 years without significant physical limitations (frailty score ≤3/7). They completed two 5-minute CPR sessions (using 30:2 and CCC) on manikins; the sessions were separated by a rest period. We used concealed block randomization to determine the order of the CPR methods. Metronome feedback maintained a compression rate of 100/minute. We measured heart rate (HR), mean arterial pressure (MAP), and the Borg Exertion Scale. CPR quality measures included the total number of compressions and the number of adequate compressions (depth ≥5 cm). Sixty-three participants were enrolled: mean age 70.8 years, 66.7% female, 60.3% with past CPR training. Bystander fatigue was similar between CPR methods: mean difference in HR -0.59 (95% CI -3.51 to 2.33), MAP 1.64 (95% CI -0.23 to 3.50), and Borg 0.46 (95% CI 0.07 to 0.84). Compared to 30:2, participants using CCC performed more chest compressions (480.0 vs. 376.3, mean difference 107.7; p<0.0001) and more adequate chest compressions (381.5 vs. 324.9, mean difference 62.0; p=0.0001), although the rate of good compressions per minute declined significantly faster with the CCC method (p=0.0002). CPR quality decreased significantly faster when performing CCC compared to 30:2; however, performing CCC produced more adequate compressions overall with a similar level of fatigue compared to the 30:2 method.

  3. Comparing 2 methods of assessing 30-day readmissions: what is the impact on hospital profiling in the veterans health administration?

    PubMed

    Mull, Hillary J; Chen, Qi; O'Brien, William J; Shwartz, Michael; Borzecki, Ann M; Hanchate, Amresh; Rosen, Amy K

    2013-07-01

    The Centers for Medicare and Medicaid Services' (CMS) all-cause readmission measure and the 3M Health Information System Division Potentially Preventable Readmissions (PPR) measure are both used for public reporting. These 2 methods have not been directly compared in terms of how they identify high-performing and low-performing hospitals. To examine how consistently the CMS and PPR methods identify performance outliers, and explore how the PPR preventability component impacts hospital readmission rates, public reporting on CMS' Hospital Compare website, and pay-for-performance under CMS' Hospital Readmission Reduction Program for 3 conditions (acute myocardial infarction, heart failure, and pneumonia). We applied the CMS all-cause model and the PPR software to VA administrative data to calculate 30-day observed FY08-10 VA hospital readmission rates and hospital profiles. We then tested the effect of preventability on hospital readmission rates and outlier identification for reporting and pay-for-performance by replacing the dependent variable in the CMS all-cause model (Yes/No readmission) with the dichotomous PPR outcome (Yes/No preventable readmission). The CMS and PPR methods had moderate correlations in readmission rates for each condition. After controlling for all methodological differences but preventability, correlations increased to >90%. The assessment of preventability yielded different outlier results for public reporting in 7% of hospitals; for 30% of hospitals there would be an impact on Hospital Readmission Reduction Program reimbursement rates. Despite uncertainty over which readmission measure is superior in evaluating hospital performance, we confirmed that there are differences in CMS-generated and PPR-generated hospital profiles for reporting and pay-for-performance, because of methodological differences and the PPR's preventability component.

  4. Multispectral image fusion for target detection

    NASA Astrophysics Data System (ADS)

    Leviner, Marom; Maltz, Masha

    2009-09-01

    Various methods to perform multi-spectral image fusion have been suggested, mostly at the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in an experiment using MSSF against two established methods, averaging and Principal Components Analysis (PCA), and against its two source bands, visible and infrared. The task studied was target detection in a cluttered environment. MSSF proved superior to the other fusion methods. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.

  5. Hypothesis Testing Using Factor Score Regression

    PubMed Central

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2015-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886

  6. Comparison of performance due to guided hyperlearning, unguided hyperlearning, and conventional learning in mathematics: an empirical study

    NASA Astrophysics Data System (ADS)

    Fathurrohman, Maman; Porter, Anne; Worthy, Annette L.

    2014-07-01

    In this paper, guided hyperlearning, unguided hyperlearning, and conventional learning methods in mathematics are compared. The design of the research involved a quasi-experiment with a modified single-factor multiple treatment design comparing the three learning methods: guided hyperlearning, unguided hyperlearning, and conventional learning. The participants were from three first-year university classes, numbering 115 students in total. Each group received the guided, unguided, or conventional learning method in one of three different topics, namely number systems, functions, and graphing. The students' academic performance differed according to the type of learning. Evaluation of the three methods revealed that only guided hyperlearning and conventional learning were appropriate methods for the psychomotor aspects of drawing in the graphing topic. There was no significant difference between the methods when learning the cognitive aspects involved in the number systems topic and the functions topic.

  7. Evaluation of hierarchical agglomerative cluster analysis methods for discrimination of primary biological aerosol

    NASA Astrophysics Data System (ADS)

    Crawford, I.; Ruske, S.; Topping, D. O.; Gallagher, M. W.

    2015-07-01

    In this paper we present improved methods for discriminating and quantifying Primary Biological Aerosol Particles (PBAP) by applying hierarchical agglomerative cluster analysis to multi-parameter ultraviolet light-induced fluorescence (UV-LIF) spectrometer data. The methods employed in this study can be applied to data sets in excess of 1×10^6 points on a desktop computer, allowing each fluorescent particle in a dataset to be explicitly clustered. This reduces the potential for misattribution found in the subsampling and comparative attribution methods used in previous approaches, improving our capacity to discriminate and quantify PBAP meta-classes. We evaluate the performance of several hierarchical agglomerative cluster analysis linkages and data normalisation methods using laboratory samples of known particle types and an ambient dataset. Fluorescent and non-fluorescent polystyrene latex spheres were sampled with a Wideband Integrated Bioaerosol Spectrometer (WIBS-4), where the optical size, asymmetry factor and fluorescence measurements were used as inputs to the analysis package. It was found that the Ward linkage with z-score or range normalisation performed best, correctly attributing 98% and 98.1% of the data points, respectively. The best performing methods were applied to the BEACHON-RoMBAS ambient dataset, where the z-score and range normalisation methods yielded similar results, each producing clusters representative of fungal spores and bacterial aerosol, consistent with previous results. The z-score result was compared to clusters generated with previous approaches (WIBS AnalysiS Program, WASP), where we observe that the subsampling and comparative attribution method employed by WASP results in overestimation of the fungal spore concentration by a factor of 1.5 and underestimation of the bacterial aerosol concentration by a factor of 5. We suggest that this is likely due to errors arising from misattribution caused by poor centroid definition, and from failure to assign particles to a cluster, both consequences of the subsampling and comparative attribution method employed by WASP. The methods used here allow the entire fluorescent particle population to be analysed, yielding an explicit cluster attribution for each particle, improving cluster centroid definition and our capacity to discriminate and quantify PBAP meta-classes compared to previous approaches.
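
    A minimal sketch of the best performing combination reported above (z-score normalisation followed by Ward-linkage agglomerative clustering), using scipy on synthetic two-class data; the feature layout and cluster count are illustrative assumptions, not the paper's WIBS pipeline.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        def cluster_particles(X, n_clusters):
            """Ward-linkage hierarchical clustering of per-particle features
            (e.g. optical size, asymmetry factor, fluorescence channels)
            after z-score normalisation of each feature."""
            Xz = (X - X.mean(axis=0)) / X.std(axis=0)   # z-score per feature
            Z = linkage(Xz, method="ward")              # agglomerative tree
            return fcluster(Z, t=n_clusters, criterion="maxclust")

        # toy usage: two well-separated particle classes are recovered
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0, 1, (200, 4)), rng.normal(4, 1, (200, 4))])
        labels = cluster_particles(X, 2)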

  8. Performance characteristics of two bioassays and high-performance liquid chromatography for determination of flucytosine in serum.

    PubMed Central

    St-Germain, G; Lapierre, S; Tessier, D

    1989-01-01

    We compared the accuracy and precision of two microbiological methods and one high-pressure liquid chromatography (HPLC) procedure used to measure the concentrations of flucytosine in serum. On the basis of an analysis of six standards, all methods were judged reliable within acceptable limits for clinical use. With the biological methods, a slight loss of linearity was observed in the 75- to 100-micrograms/ml range. Compared with the bioassays, the HPLC method did not present linearity problems and was more precise and accurate in the critical zone of 100 micrograms/ml. On average, results obtained with patient sera containing 50 to 100 micrograms of flucytosine per ml were 10.6% higher with the HPLC method than with the bioassays. Standards for the biological assays may be prepared in serum or water. PMID:2802566

  10. Performance Enhancement Using Selective Reinforcement for Metallic Single- and Multi-Pin Loaded Holes

    NASA Technical Reports Server (NTRS)

    Farley, Gary L.; Seshadri, Banavara R.

    2005-01-01

    An analysis-based investigation of aluminum single- and multi-hole specimens selectively reinforced with metal matrix composite was performed, and the results were compared with results from geometrically comparable non-reinforced specimens. All reinforced specimens exhibited a significant increase in performance, with increases of up to 170 percent achieved. Specimen failure modes were consistent with results from reinforced polymeric matrix composite specimens. Localized (circular) application of the reinforcement proved as effective as broader-area (strip) reinforcement. Overall, selective reinforcement is an excellent method for increasing the performance of multi-hole specimens.

  11. Comparison of a gel microcolumn assay with the conventional tube test for red blood cell alloantibody titration.

    PubMed

    Finck, Rachel; Lui-Deguzman, Carrie; Teng, Shih-Mao; Davis, Rebecca; Yuan, Shan

    2013-04-01

    Titration is a semiquantitative method used to estimate red blood cell (RBC) alloantibody reactivity. The conventional tube test (CTT) technique is the traditional method for performing titration studies, while the gel microcolumn assay (GMA) is also a sensitive method for detecting RBC alloantibodies. The aim of this study was to compare a GMA with the CTT technique in the performance of Rh and K alloantibody titration. Patient serum samples containing an RBC alloantibody of a single specificity were identified through routine blood bank workflow. Parallel titration studies were performed on these samples by both the CTT method and a GMA (ID-Micro Typing System anti-IgG gel card, Micro Typing Systems, Inc., an Ortho-Clinical Diagnostics Company). Forty-eight samples were included: 11 anti-D, five anti-c, 13 anti-E, one anti-C, three anti-e, and 15 anti-K. Overall, the two methods generated identical results in 21 of 48 samples. For 42 samples (87.5%) the two methods generated results within one serial dilution of each other, and for the remaining six samples, results were within two dilutions. GMA systems may perform comparably to the CTT in titrating alloantibodies to Rh and Kell antigens. © 2012 American Association of Blood Banks.

  12. Segmentized Clear Channel Assessment for IEEE 802.15.4 Networks.

    PubMed

    Son, Kyou Jung; Hong, Sung Hyeuck; Moon, Seong-Pil; Chang, Tae Gyu; Cho, Hanjin

    2016-06-03

    This paper proposes segmentized clear channel assessment (CCA), which increases the performance of IEEE 802.15.4 networks by improving carrier sense multiple access with collision avoidance (CSMA/CA). Improving CSMA/CA is important because the low-power consumption feature and throughput performance of IEEE 802.15.4 are greatly affected by CSMA/CA behavior. To improve the performance of CSMA/CA, this paper focuses on increasing the chance to transmit a packet by assessing channel status more precisely. The previous CCA method employed by CSMA/CA assesses the channel by measuring its energy level; however, this method shows limited channel-assessment behavior, which stems from a simple threshold-dependent channel-busy evaluation. The proposed method solves this problem by dividing the CCA into two segments whose energy levels are compared to obtain a more precise channel status. To evaluate the performance of the segmentized CCA method, a Markov chain model has been developed, and the analytic results are validated by comparison with simulation results. Additionally, the simulations show that the proposed method improves throughput by up to 8.76% and decreases the average number of CCAs per packet transmission by up to 3.9% compared with the standard IEEE 802.15.4 CCA method.

  14. Virus removal retention challenge tests performed at lab scale and pilot scale during operation of membrane units.

    PubMed

    Humbert, H; Machinal, C; Labaye, Ivan; Schrotter, J C

    2011-01-01

    The determination of the virus retention capabilities of UF units during operation is essential for operators of drinking water treatment facilities in order to guarantee efficient and stable removal of viruses over time. In previous studies, an effective method (MS2-phage challenge tests) was developed by the Water Research Center of Veolia Environnement for measuring the virus retention rates (Log Removal Value, LRV) of commercially available hollow fiber membranes at lab scale. In the present work, the protocol for monitoring membrane performance was transferred from lab scale to pilot scale. Membrane performance was evaluated during a pilot trial and compared to the results obtained at lab scale with fibers taken from the pilot plant modules. The PFU culture method was compared to the RT-PCR method for the calculation of LRV in both cases. Preliminary tests at lab scale showed that the two methods can be used interchangeably. For tests conducted on virgin membrane, good consistency was observed between lab- and pilot-scale results with the two analytical methods used. This work shows that a reliable determination of membrane performance based on the RT-PCR analytical method can be achieved during operation of UF units.

  15. Good, better, best? A comprehensive comparison of healthcare providers' performance: An application to physiotherapy practices in primary care.

    PubMed

    Steenhuis, Sander; Groeneweg, Niels; Koolman, Xander; Portrait, France

    2017-12-01

    Most payment methods in healthcare stimulate volume-driven rather than value-driven care. Value-based payment methods such as pay-for-performance have the potential to reduce costs and improve quality of care; ideally, outcome indicators are used in the assessment of providers' performance. The aim of this paper is to describe the feasibility of assessing and comparing the performance of providers using a comprehensive set of quality and cost data. We had access to unique and extensive datasets containing individual data on PROMs, PREMs and costs of physiotherapy practices in Dutch primary care. We merged these datasets at the patient level and compared the performance of the practices using case-mix-corrected linear regression models. Several significant differences in performance were detected between practices. These results can be used both by physiotherapists, to improve the treatment given, and by insurers, to support their purchasing decisions. The study demonstrates that it is feasible to compare the performance of providers using PROMs and PREMs. However, extra effort would be needed to increase their usefulness, and it remains unclear under which conditions this effort is cost-effective. Healthcare providers need to be aware of the added value of registering outcomes to improve their quality. Insurers need to facilitate this by designing value-based contracts with the right incentives. Only then can payment methods contribute to value-based healthcare and increase value for patients. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Tube thoracostomy training with a medical simulator is associated with faster, more successful performance of the procedure

    PubMed Central

    Chung, Tae Nyoung; Kim, Sun Wook; You, Je Sung; Chung, Hyun Soo

    2016-01-01

    Objective: Tube thoracostomy (TT) is a commonly performed intensive care procedure. Simulator training may be a good alternative for TT training compared with conventional methods such as apprenticeship and animal skills laboratories; however, there is insufficient evidence supporting the use of a simulator. The aim of this study is to determine whether training with a medical simulator is associated with a faster TT procedure compared to conventional training without a simulator. Methods: This is a simulation study. Eligible participants were emergency medicine residents with minimal TT experience (≤3 procedures). Participants were randomized to two groups: the conventional training group and the simulator training group. While the simulator training group used the simulator to practice TT, the conventional training group watched the instructor performing TT on a cadaver. After training, all participants performed a TT on a cadaver. Performance quality was measured as correct placement and time delay, and subjects were graded if they had difficulty with the process. Results: The estimated median procedure time was 228 seconds in the conventional training group and 75 seconds in the simulator training group, a statistically significant difference (P=0.040). The difficulty grading did not show any significant difference between groups (overall performance scale, 2 vs. 3; P=0.094). Conclusion: Tube thoracostomy training with a medical simulator, when compared to training without a simulator, is associated with a significantly faster procedure when performed on a human cadaver. PMID:27752610

  17. Parametric Methods for Dynamic 11C-Phenytoin PET Studies.

    PubMed

    Mansor, Syahir; Yaqub, Maqsood; Boellaard, Ronald; Froklage, Femke E; de Vries, Anke; Bakker, Esther D M; Voskuyl, Rob A; Eriksson, Jonas; Schwarte, Lothar A; Verbeek, Joost; Windhorst, Albert D; Lammertsma, Adriaan A

    2017-03-01

    In this study, the performance of various methods for generating quantitative parametric images of dynamic 11C-phenytoin PET studies was evaluated. Methods: Double-baseline 60-min dynamic 11C-phenytoin PET studies, including online arterial sampling, were acquired for 6 healthy subjects. Parametric images were generated using Logan plot analysis, a basis function method, and spectral analysis. Parametric distribution volume (VT) and influx rate (K1) were compared with those obtained from nonlinear regression analysis of time-activity curves. In addition, global and regional test-retest (TRT) variability was determined for parametric K1 and VT values. Results: Biases in VT observed with all parametric methods were less than 5%. For K1, spectral analysis showed a negative bias of 16%. The mean TRT variabilities of VT and K1 were less than 10% for all methods. Shortening the scan duration to 45 min provided similar VT and K1 with comparable TRT performance compared with 60-min data. Conclusion: Among the various parametric methods tested, the basis function method provided parametric VT and K1 values with the least bias compared with nonlinear regression data and showed TRT variabilities lower than 5%, also for smaller volume-of-interest sizes (i.e., higher noise levels) and shorter scan duration. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
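
    As an illustration of one of the compared methods, the sketch below implements standard Logan graphical analysis, in which the plot of normalised integrated tissue activity against normalised integrated plasma activity becomes linear after a time t* with slope equal to the distribution volume VT. The variable names and toy interface are assumptions; this is not the study's pipeline.

        import numpy as np
        from scipy.integrate import cumulative_trapezoid

        def logan_vt(c_t, c_p, t, t_star):
            """Logan graphical analysis for a reversible tracer.
            c_t: tissue time-activity curve, c_p: (metabolite-corrected)
            plasma input, t: frame mid-times; c_t must be non-zero."""
            x = cumulative_trapezoid(c_p, t, initial=0) / c_t
            y = cumulative_trapezoid(c_t, t, initial=0) / c_t
            m = t >= t_star                      # linear portion of the plot
            slope, _ = np.polyfit(x[m], y[m], 1)
            return slope                         # distribution volume VT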

  18. Filter methods to preserve local contrast and to avoid artifacts in gamut mapping

    NASA Astrophysics Data System (ADS)

    Meili, Marcel; Küpper, Dennis; Barańczuk, Zofia; Caluori, Ursina; Simon, Klaus

    2010-01-01

    Contrary to high dynamic range imaging, the preservation of details and the avoidance of artifacts are not explicitly considered in popular color management systems. An effective way to overcome these difficulties is image filtering. In this paper we investigate several image filter concepts for detail preservation as part of a practical gamut mapping strategy. In particular, we define four concepts comprising various image filters and check their performance with a psycho-visual test. Additionally, we compare our performance evaluation to two image quality measures with emphasis on local contrast. Surprisingly, the simplest filter concept is highly efficient and achieves an image quality comparable to that of the more established but slower methods.

  19. A unified method to process biosolids samples for the recovery of bacterial, viral, and helminths pathogens.

    PubMed

    Alum, Absar; Rock, Channah; Abbaszadegan, Morteza

    2014-01-01

    For land application, biosolids are classified as Class A or Class B based on the levels of bacterial, viral, and helminth pathogens in the residual biosolids. The current EPA methods for detecting these groups of pathogens in biosolids consist of discrete steps, so a separate sample is processed independently to quantify the number of each group of pathogens. The aim of this study was to develop a unified method for simultaneous processing of a single biosolids sample to recover bacterial, viral, and helminth pathogens. In the first stage of developing the simultaneous method, nine eluents were compared for their efficiency in recovering viruses from a 100 g spiked biosolids sample. In the second stage, the three top-performing eluents were thoroughly evaluated for the recovery of bacteria, viruses, and helminths. For all three groups of pathogens, the glycine-based eluent provided higher recovery than the beef extract-based eluent. Additional experiments were performed to optimize the performance of the glycine-based eluent under various procedural factors, such as the solids-to-eluent ratio, stir time, and centrifugation conditions. Finally, the new method was directly compared with the EPA methods for the recovery of the three groups of pathogens spiked into duplicate samples of biosolids collected from different sources. For viruses, the new method yielded up to 10% higher recoveries than the EPA method. For bacteria and helminths, recoveries were 74% and 83% with the new method compared to 34% and 68% with the EPA method, respectively. The unified sample processing method significantly reduces the time required to process biosolids samples for different groups of pathogens; it is less affected by the intrinsic variability of the samples while providing higher yields (P = 0.05) and greater consistency than the current EPA methods.

  20. Mapping species distributions with MAXENT using a geographically biased sample of presence data: a performance assessment of methods for correcting sampling bias.

    PubMed

    Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean

    2014-01-01

    MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensus guideline to account for it. We compared the performance of five methods of bias correction on three datasets of species occurrence: one "virtual" dataset derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling bias corresponding to potential types of empirical bias, applied the five correction methods to the biased samples, and compared the outputs of the distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of the methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, simple systematic sampling of records consistently ranked among the best performing methods across the range of conditions tested, whereas the other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. Nevertheless, systematic sampling appears to be the most efficient way to correct sampling bias and can be advised in most cases.
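
    One common implementation of the systematic sampling correction highlighted above is spatial thinning on a regular grid, keeping at most one presence record per cell. A minimal numpy sketch, in which the cell size and coordinate convention are assumptions:

        import numpy as np

        def systematic_sample(coords, cell_size):
            """Thin presence records to at most one per grid cell.
            coords: (n, 2) array of x/y (or lon/lat) positions;
            returns the indices of the retained records."""
            cells = np.floor(np.asarray(coords) / cell_size).astype(int)
            _, keep = np.unique(cells, axis=0, return_index=True)
            return np.sort(keep)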

  2. Exploration of Analysis Methods for Diagnostic Imaging Tests: Problems with ROC AUC and Confidence Scores in CT Colonography

    PubMed Central

    Mallett, Susan; Halligan, Steve; Collins, Gary S.; Altman, Doug G.

    2014-01-01

    Background: Different methods of evaluating diagnostic performance when comparing diagnostic tests may lead to different results. We compared two such approaches, sensitivity and specificity versus area under the Receiver Operating Characteristic curve (ROC AUC), for the evaluation of CT colonography for the detection of polyps, either with or without computer-assisted detection. Methods: In a multireader, multicase study of 10 readers and 107 cases we compared sensitivity and specificity, using radiological reporting of the presence or absence of polyps, to ROC AUC calculated from confidence scores concerning the presence of polyps. Both methods were assessed against a reference standard. Here we focus on five readers, selected to illustrate issues in design and analysis. We compared diagnostic measures within readers, showing that differences in results are due to the statistical methods. Results: Reader performance varied widely depending on whether sensitivity and specificity or ROC AUC was used. There were problems using confidence scores: in assigning scores to all cases; in the use of zero scores when no polyps were identified; in the bimodal, non-normal distribution of scores; in fitting ROC curves, due to extrapolation beyond the study data; and in the undue influence of a few false-positive results. Variation due to the use of different ROC methods exceeded the differences between test results for ROC AUC. Conclusions: The confidence scores recorded in our study violated many assumptions of ROC AUC methods, rendering these methods inappropriate. The problems we identified will apply to other detection studies using confidence scores. We found sensitivity and specificity to be a more reliable and clinically appropriate method for comparing diagnostic tests. PMID:25353643
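
    The two evaluation approaches contrasted above can be computed as follows. The sketch uses scikit-learn and entirely made-up toy data; it shows that sensitivity and specificity operate on binary reports, while ROC AUC requires per-case confidence scores.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])            # polyp truly present?
        calls = np.array([0, 1, 1, 1, 0, 0, 1, 0])             # reader's binary report
        scores = np.array([.1, .6, .8, .7, .2, .3, .9, .4])    # reader confidence scores

        sens = calls[y_true == 1].mean()          # sensitivity from binary calls
        spec = 1 - calls[y_true == 0].mean()      # specificity from binary calls
        auc = roc_auc_score(y_true, scores)       # ROC AUC from confidence scores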

  3. Existing methods for improving the accuracy of digital-to-analog converters

    NASA Astrophysics Data System (ADS)

    Eielsen, Arnfinn A.; Fleming, Andrew J.

    2017-09-01

    The performance of digital-to-analog converters is principally limited by errors in the output voltage levels. Such errors are known as element mismatch and are quantified by the integral non-linearity. Element mismatch limits the achievable accuracy and resolution in high-precision applications as it causes gain and offset errors, as well as harmonic distortion. In this article, five existing methods for mitigating the effects of element mismatch are compared: physical level calibration, dynamic element matching, noise-shaping with digital calibration, large periodic high-frequency dithering, and large stochastic high-pass dithering. These methods are suitable for improving accuracy when using digital-to-analog converters that use multiple discrete output levels to reconstruct time-varying signals. The methods improve linearity and therefore reduce harmonic distortion and can be retrofitted to existing systems with minor hardware variations. The performance of each method is compared theoretically and confirmed by simulations and experiments. Experimental results demonstrate that three of the five methods provide significant improvements in the resolution and accuracy when applied to a general-purpose digital-to-analog converter. As such, these methods can directly improve performance in a wide range of applications including nanopositioning, metrology, and optics.
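
    As a conceptual toy model of one of the five methods, the sketch below simulates large periodic high-frequency dithering: a triangular dither spanning several codes is added before quantisation to mismatched output levels, and the reconstruction low-pass filter then removes the dither while averaging each code's mismatch error over neighbouring levels. All rates, amplitudes and the mismatch magnitude are illustrative assumptions, not the article's experimental setup.

        import numpy as np
        from scipy import signal

        rng = np.random.default_rng(0)
        fs, f0, fd = 100_000, 97, 9_700                    # sample, signal, dither rates [Hz]
        t = np.arange(fs) / fs                             # one second of samples
        levels = np.arange(256) + rng.normal(0, 0.3, 256)  # mismatched output levels (INL)

        x = 127.5 + 100 * np.sin(2 * np.pi * f0 * t)       # ideal (unquantised) input
        dither = 4 * signal.sawtooth(2 * np.pi * fd * t, width=0.5)  # HF triangular dither

        plain = levels[np.clip(np.round(x), 0, 255).astype(int)]
        dithered = levels[np.clip(np.round(x + dither), 0, 255).astype(int)]

        # the reconstruction low-pass removes the dither tone but keeps the
        # averaging effect that smears each code's mismatch error over levels
        b, a = signal.butter(4, 2 * 500 / fs)
        y_plain = signal.filtfilt(b, a, plain)
        y_dithered = signal.filtfilt(b, a, dithered)       # lower harmonic distortion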

  4. Using methods from the data mining and machine learning literature for disease classification and prediction: A case study examining classification of heart failure sub-types

    PubMed Central

    Austin, Peter C.; Tu, Jack V.; Ho, Jennifer E.; Levy, Daniel; Lee, Douglas S.

    2014-01-01

    Objective: Physicians classify patients into those with or without a specific disease. Furthermore, there is often interest in classifying patients according to disease etiology or subtype. Classification trees are frequently used to classify patients according to the presence or absence of a disease; however, classification trees can suffer from limited accuracy. In the data-mining and machine-learning literature, alternative classification schemes have been developed, including bootstrap aggregation (bagging), boosting, random forests, and support vector machines. Study Design and Setting: We compared the performance of these classification methods with that of conventional classification trees in classifying patients with heart failure according to the following subtypes: heart failure with preserved ejection fraction (HFPEF) vs. heart failure with reduced ejection fraction (HFREF). We also compared the ability of these methods to predict the probability of the presence of HFPEF with that of conventional logistic regression. Results: We found that modern, flexible tree-based methods from the data-mining literature offer substantial improvement in prediction and classification of heart failure subtype compared to conventional classification and regression trees. However, conventional logistic regression had superior performance for predicting the probability of the presence of HFPEF compared to the methods proposed in the data-mining literature. Conclusion: The use of tree-based methods offers superior performance over conventional classification and regression trees for predicting and classifying heart failure subtypes in a population-based sample of patients from Ontario. However, these methods do not offer substantial improvements over logistic regression for predicting the presence of HFPEF. PMID:23384592
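
    A minimal sketch of this kind of comparison on synthetic data, using scikit-learn stand-ins for the methods discussed; the study's clinical data, outcome definitions and model tuning are not reproduced here.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # synthetic two-class problem standing in for HFPEF vs. HFREF
        X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

        for name, model in [("random forest", RandomForestClassifier(random_state=0)),
                            ("logistic regression", LogisticRegression(max_iter=1000))]:
            auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
            print(f"{name}: cross-validated AUC = {auc:.3f}")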

  5. Disinfection of transvaginal ultrasound probes in a clinical setting: comparative performance of automated and manual reprocessing methods.

    PubMed

    Buescher, D L; Möllers, M; Falkenberg, M K; Amler, S; Kipp, F; Burdach, J; Klockenbusch, W; Schmitz, R

    2016-05-01

    Transvaginal and intracavitary ultrasound probes are a possible source of cross-contamination with microorganisms and thus a risk to patients' health, so appropriate methods for reprocessing are needed. This study was designed to compare the standard disinfection method for transvaginal ultrasound probes in Germany with an automated disinfection method in a clinical setting. This was a prospective randomized controlled clinical study of two groups. In each group, 120 microbial samples were collected from ultrasound transducers before and after disinfection with either an automated method (Trophon EPR®) or a manual method (Mikrozid Sensitive® wipes). Samples were then analyzed for microbial growth, and isolates were identified to species level. Automated disinfection had a statistically significantly higher success rate of 91.4% (106/116) compared with 78.8% (89/113) for manual disinfection (P = 0.009). The risk of contamination was increased 2.9-fold when disinfection was performed manually (odds ratio, 2.9 (95% CI, 1.3-6.3)). Before disinfection, bacterial contamination was observed on 98.8% of probes. Microbial analysis revealed 36 different species of bacteria, including skin and environmental bacteria as well as pathogenic bacteria such as Staphylococcus aureus, enterobacteriaceae and Pseudomonas spp. Considering the high number of contaminated probes and bacterial species found, disinfection of the ultrasound probe's body and handle should be performed after each use to decrease the risk of cross-contamination. This study favored automated disinfection owing to its significantly higher efficacy compared with a manual method. Copyright © 2015 ISUOG. Published by John Wiley & Sons Ltd.

  6. A Monte Carlo simulation to the performance of the R/S and V/S methods—Statistical revisit and real world application

    NASA Astrophysics Data System (ADS)

    He, Ling-Yun; Qian, Wen-Bin

    2012-07-01

    A correct and precise estimation of the Hurst exponent is one of the fundamentally important problems in the financial economics literature. There are three widely used tools to estimate the Hurst exponent: the canonical rescaled range (R/S), the rescaled variance statistic (V/S) and the modified rescaled range (modified R/S). To clarify their performance, we compare them by Monte Carlo simulation: we generate many time series of fractional Brownian motion, of the Weierstrass-Mandelbrot cosine fractal function and of a fractionally integrated process, whose theoretical Hurst exponents are known, and compare the Hurst exponents estimated by the three methods. To better understand their pragmatic performance, we further apply all of these methods to real-world data. Our results imply that it is not appropriate to conclude simply which method is better: V/S performs better when the analyzed market is anti-persistent, while R/S seems to be a reliable tool in persistent markets.
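
    For reference, the canonical R/S estimator at the heart of the comparison can be sketched in a few lines of numpy; the minimum window size and the use of dyadic block lengths are implementation choices, not the paper's exact protocol.

        import numpy as np

        def hurst_rs(x, min_n=8):
            """Classical rescaled-range estimate of the Hurst exponent:
            fit log(R/S) against log(n) over dyadic block lengths n."""
            x = np.asarray(x, float)
            N = len(x)
            ns, rs = [], []
            n = min_n
            while n <= N // 2:
                chunks = x[:(N // n) * n].reshape(-1, n)
                # cumulative deviations from each block's mean
                z = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
                R = z.max(axis=1) - z.min(axis=1)       # range of deviations
                S = chunks.std(axis=1, ddof=1)          # block standard deviation
                rs.append((R / S).mean())
                ns.append(n)
                n *= 2
            H, _ = np.polyfit(np.log(ns), np.log(rs), 1)
            return H

        # white noise should give H close to 0.5
        print(hurst_rs(np.random.default_rng(0).standard_normal(2 ** 14)))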

  7. Comparing In-Class and Out-of-Class Computer-Based Tests to Traditional Paper-and-Pencil Tests in Introductory Psychology Courses

    ERIC Educational Resources Information Center

    Frein, Scott T.

    2011-01-01

    This article describes three experiments comparing paper-and-pencil tests (PPTs) to computer-based tests (CBTs) in terms of test method preferences and student performance. In Experiment 1, students took tests using three methods: PPT in class, CBT in class, and CBT at the time and place of their choosing. Results indicate that test method did not…

  8. Comparison of landmark-based and automatic methods for cortical surface registration

    PubMed Central

    Pantazis, Dimitrios; Joshi, Anand; Jiang, Jintao; Shattuck, David; Bernstein, Lynne E.; Damasio, Hanna; Leahy, Richard M.

    2009-01-01

    Group analysis of structure or function in cerebral cortex typically involves as a first step the alignment of the cortices. A surface based approach to this problem treats the cortex as a convoluted surface and coregisters across subjects so that cortical landmarks or features are aligned. This registration can be performed using curves representing sulcal fundi and gyral crowns to constrain the mapping. Alternatively, registration can be based on the alignment of curvature metrics computed over the entire cortical surface. The former approach typically involves some degree of user interaction in defining the sulcal and gyral landmarks while the latter methods can be completely automated. Here we introduce a cortical delineation protocol consisting of 26 consistent landmarks spanning the entire cortical surface. We then compare the performance of a landmark-based registration method that uses this protocol with that of two automatic methods implemented in the software packages FreeSurfer and BrainVoyager. We compare performance in terms of discrepancy maps between the different methods, the accuracy with which regions of interest are aligned, and the ability of the automated methods to correctly align standard cortical landmarks. Our results show similar performance for ROIs in the perisylvian region for the landmark based method and FreeSurfer. However, the discrepancy maps showed larger variability between methods in occipital and frontal cortex and also that automated methods often produce misalignment of standard cortical landmarks. Consequently, selection of the registration approach should consider the importance of accurate sulcal alignment for the specific task for which coregistration is being performed. When automatic methods are used, the users should ensure that sulci in regions of interest in their studies are adequately aligned before proceeding with subsequent analysis. PMID:19796696

  9. Survey: interpolation methods for whole slide image processing.

    PubMed

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T

    2017-02-01

    Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of very large size, and image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods and, as a result of our analysis, try to select one interpolation method as the preferred solution. To compare the performance of the interpolation methods, test images were scaled down and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for the calculations and the results of quantification performed on the modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which interpolation method is best for resizing whole slide images so that they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving the whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
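
    The evaluation protocol described above (scale down, rescale back with the same algorithm, compare to the original) can be sketched with scipy's spline-based zoom as a stand-in for the nine surveyed methods; the random test image and scale factor here are illustrative.

        import numpy as np
        from scipy import ndimage

        def rescale_psnr(img, factor=4, order=1):
            """Downscale then upscale with a given spline order (0 = nearest,
            1 = bilinear, 3 = bicubic) and report PSNR against the original."""
            small = ndimage.zoom(img, 1 / factor, order=order)
            back = ndimage.zoom(small, factor, order=order)
            back = back[:img.shape[0], :img.shape[1]]      # guard off-by-one sizes
            mse = np.mean((img[:back.shape[0], :back.shape[1]] - back) ** 2)
            return 10 * np.log10(img.max() ** 2 / mse)

        img = np.random.default_rng(0).random((256, 256))
        for order in (0, 1, 3):
            print(order, round(rescale_psnr(img, 4, order), 2))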

  10. Comparison of various extraction techniques for the determination of polycyclic aromatic hydrocarbons in worms.

    PubMed

    Mooibroek, D; Hoogerbrugge, R; Stoffelsen, B H G; Dijkman, E; Berkhoff, C J; Hogendoorn, E A

    2002-10-25

    Two less laborious extraction methods, viz. (i) a simplified liquid extraction using light petroleum and (ii) microwave-assisted solvent extraction (MASE), for the analysis of polycyclic aromatic hydrocarbons (PAHs) in samples of the compost worm Eisenia andrei, were compared with a reference method. After extraction and concentration, the analytical methodology consisted of a cleanup of (part of) the extract with high-performance gel permeation chromatography (HPGPC) and instrumental analysis of 15 PAHs by reversed-phase liquid chromatography with fluorescence detection (RPLC-FLD). The methods were compared by analysing samples with incurred residues (n=15 for each method) originating from an experiment in which worms were exposed to a soil contaminated with PAHs. Simultaneously, the performance of the total lipid determination of each method was established. Evaluation of the data by means of principal component analysis (PCA) and analysis of variance (ANOVA) revealed that the performance of the light petroleum method, for both the extraction of PAHs (concentration range 1-30 ng/g) and the lipid content, corresponds very well with that of the reference method. Compared with the reference method, the MASE method yielded somewhat lower concentrations for the less volatile PAHs, e.g., dibenzo[ah]anthracene and benzo[ghi]perylene, and gave a significantly higher amount of co-extracted material.

  11. Single-Trial Normalization for Event-Related Spectral Decomposition Reduces Sensitivity to Noisy Trials

    PubMed Central

    Grandchamp, Romain; Delorme, Arnaud

    2011-01-01

    In electroencephalography, the classical event-related potential model often proves to be a limited method to study complex brain dynamics. For this reason, spectral techniques adapted from signal processing such as event-related spectral perturbation (ERSP) – and its variants event-related synchronization and event-related desynchronization – have been used over the past 20 years. They represent average spectral changes in response to a stimulus. There is no strong consensus on how these spectral methods should compare pre- and post-stimulus activity. When computing ERSP, pre-stimulus baseline removal is usually performed after averaging the spectral estimates of multiple trials. Correcting the baseline of each single trial prior to averaging spectral estimates is an alternative baseline correction method. However, we show that this method leads to positively skewed post-stimulus ERSP values. We therefore present new single-trial-based ERSP baseline correction methods that perform trial normalization or centering prior to applying classical baseline correction methods. We show that single-trial correction methods minimize the contribution of artifactual data trials with high-amplitude spectral estimates and are robust to outliers when performing statistical inference testing. We then characterize these methods in terms of their time–frequency responses and behavior compared to classical ERSP methods. PMID:21994498
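
    The difference between classical and single-trial-normalized ERSP can be sketched on a trials x frequencies x times array of spectral power. A toy NumPy illustration under one plausible reading of the approach (per-trial, per-frequency full-epoch normalization before averaging); the shapes, baseline window and injected artifact are invented:

```python
import numpy as np

def ersp_classic(tf, base_idx):
    """Classical ERSP: average single-trial power, then express it in dB
    relative to the mean pre-stimulus baseline power."""
    mean_tf = tf.mean(axis=0)                                 # freqs x times
    base = mean_tf[:, base_idx].mean(axis=1, keepdims=True)
    return 10 * np.log10(mean_tf / base)

def ersp_trial_norm(tf, base_idx):
    """Single-trial normalization (one reading of the proposed approach):
    divide each trial by its own full-epoch mean power per frequency,
    then apply the classical averaging and baseline correction."""
    norm = tf / tf.mean(axis=2, keepdims=True)                # per trial/freq
    return ersp_classic(norm, base_idx)

rng = np.random.default_rng(0)
tf = rng.gamma(2.0, 1.0, size=(100, 30, 200))   # trials x freqs x times
tf[7] *= 50                                     # one artifactual noisy trial
base_idx = np.arange(50)                        # pre-stimulus samples
print(np.abs(ersp_classic(tf, base_idx)).max())     # inflated by trial 7
print(np.abs(ersp_trial_norm(tf, base_idx)).max())  # far less sensitive to it
```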

  12. Semi-active control of tracked vehicle suspension incorporating magnetorheological dampers

    NASA Astrophysics Data System (ADS)

    Ata, W. G.; Salem, A. M.

    2017-05-01

    In past years, the application of magnetorheological (MR) and electrorheological dampers in vehicle suspension has been widely studied, mainly for the purpose of vibration control. This paper presents a theoretical study to identify an appropriate semi-active control method for MR-tracked vehicle suspension. Three representative control algorithms are simulated, including the skyhook, hybrid and fuzzy-hybrid controllers. A seven-degrees-of-freedom tracked vehicle suspension model incorporating MR dampers has been adopted to compare the performance of the three controllers. The model differential equations are derived based on Newton's second law of motion and the proposed control methods are developed. The performance of each control method under bump and sinusoidal road profiles for different vehicle speeds is simulated and compared with the performance of the conventional suspension system in the time and frequency domains. The results show that the performance of tracked vehicle suspension with MR dampers is substantially improved. Moreover, the fuzzy-hybrid controller offers excellent integrated performance in reducing the body accelerations as well as wheel bounce responses compared with the classical skyhook and hybrid controllers.
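
    The skyhook law, the simplest of the three controllers compared, can be stated directly as a damper-force rule. A hedged sketch; the damping coefficients are invented placeholders, not values from the paper:

```python
import numpy as np

def skyhook_damper_force(v_body, v_rel, c_sky=2000.0, c_min=300.0, c_max=3000.0):
    """Continuous skyhook law for a semi-active damper. The damper can only
    dissipate energy, so the ideal skyhook force -c_sky * v_body is realizable
    only when the body velocity and the relative (damper) velocity share a
    sign; otherwise damping falls back to the hardware minimum."""
    if v_body * v_rel > 0.0:
        c = np.clip(c_sky * v_body / v_rel, c_min, c_max)
    else:
        c = c_min
    return -c * v_rel  # force opposing the relative velocity

for v_body, v_rel in [(0.5, 0.3), (0.5, -0.3), (-0.2, -0.4)]:
    print(v_body, v_rel, skyhook_damper_force(v_body, v_rel))
```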

  13. Evaluation of winter pothole patching methods.

    DOT National Transportation Integrated Search

    2014-01-01

    The main objective of this study was to evaluate the performance and cost-effectiveness of the tow-behind combination infrared asphalt heater/reclaimer patching method and compare it to the throw-and-roll and spray-injection methods. To achieve t...

  14. Effect of reference genome selection on the performance of computational methods for genome-wide protein-protein interaction prediction.

    PubMed

    Muley, Vijaykumar Yogesh; Ranjan, Akash

    2012-01-01

    Recent progress in computational methods for predicting physical and functional protein-protein interactions has provided new insights into the complexity of biological processes. Most of these methods assume that functionally interacting proteins are likely to have a shared evolutionary history. This history can be traced out for the protein pairs of a query genome by correlating different evolutionary aspects of their homologs in multiple genomes known as the reference genomes. These methods include phylogenetic profiling, gene neighborhood, and co-occurrence of the orthologous protein-coding genes in the same cluster or operon, collectively known as genomic context methods. On the other hand, a method called mirrortree is based on the similarity of phylogenetic trees between two interacting proteins. Comprehensive performance analyses of these methods have been frequently reported in the literature. However, very few studies provide insight into the effect of reference genome selection on the detection of meaningful protein interactions. We analyzed the performance of four methods and their variants to understand the effect of reference genome selection on prediction efficacy. We used six sets of reference genomes, sampled from 565 bacteria in accordance with phylogenetic diversity and the relationships between organisms. We used Escherichia coli as a model organism and the gold standard datasets of interacting proteins reported in the DIP, EcoCyc and KEGG databases to compare the performance of the prediction methods. Higher performance for predicting protein-protein interactions was achievable even with 100-150 bacterial genomes out of the 565 genomes. Inclusion of archaeal genomes in the reference genome set improves performance. We find that in order to obtain good performance, it is better to sample a few genomes of related genera of prokaryotes from the large number of available genomes. Moreover, such sampling allows for selecting 50-100 genomes for comparable accuracy of predictions when computational resources are limited.
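
    Phylogenetic profiling, the first genomic context method listed above, reduces to comparing presence/absence vectors of homologs across the chosen reference genomes, which is exactly where the choice of reference set enters. A toy NumPy sketch; the profiles, sizes and flips are invented for illustration:

```python
import numpy as np

def profile_similarity(p1, p2):
    """Pearson correlation between two binary phylogenetic profiles
    (1 = a homolog is present in that reference genome)."""
    return np.corrcoef(p1, p2)[0, 1]

rng = np.random.default_rng(1)
n_ref = 120   # number of reference genomes; the abstract reports that
              # ~100-150 well-chosen genomes already perform well
prot_a = rng.integers(0, 2, n_ref)
prot_b = prot_a.copy()
prot_b[rng.choice(n_ref, 10, replace=False)] ^= 1   # mostly co-occurring pair
prot_c = rng.integers(0, 2, n_ref)                  # unrelated protein

print(profile_similarity(prot_a, prot_b))  # high -> predicted to interact
print(profile_similarity(prot_a, prot_c))  # near zero -> no prediction
```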

  15. Comparing bias correction methods in downscaling meteorological variables for a hydrologic impact study in an arid area in China

    NASA Astrophysics Data System (ADS)

    Fang, G. H.; Yang, J.; Chen, Y. N.; Zammit, C.

    2015-06-01

    Water resources are essential to the ecosystem and social economy in the desert and oasis of the arid Tarim River basin, northwestern China, and are expected to be vulnerable to climate change. It has been demonstrated that regional climate models (RCMs) provide more reliable results for regional impact studies of climate change (e.g., on water resources) than general circulation models (GCMs). However, due to their considerable bias it is still necessary to apply bias correction before they are used for water resources research. In this paper, after a sensitivity analysis on input meteorological variables based on the Sobol' method, we compared five precipitation correction methods and three temperature correction methods in downscaling RCM simulations applied over the Kaidu River basin, one of the headwaters of the Tarim River basin. The precipitation correction methods applied include linear scaling (LS), local intensity scaling (LOCI), power transformation (PT), distribution mapping (DM) and quantile mapping (QM), while the temperature correction methods are LS, variance scaling (VARI) and DM. The corrected precipitation and temperature were compared to the observed meteorological data, prior to being used as meteorological inputs of a distributed hydrologic model to study their impacts on streamflow. The results show (1) streamflows are sensitive to precipitation, temperature and solar radiation but not to relative humidity and wind speed; (2) raw RCM simulations are heavily biased relative to observed meteorological data, their use for streamflow simulations results in large biases relative to observed streamflow, and all bias correction methods effectively improved these simulations; (3) for precipitation, the PT and QM methods performed best, and equally well, in correcting the frequency-based indices (e.g., standard deviation, percentile values), while the LOCI method performed best in terms of the time-series-based indices (e.g., Nash-Sutcliffe coefficient, R2); (4) for temperature, all correction methods performed equally well in correcting raw temperature; and (5) for simulated streamflow, the precipitation correction methods have more significant influence than the temperature correction methods, and the performance of the streamflow simulations is consistent with that of the corrected precipitation; i.e., the PT and QM methods performed best, and equally well, in correcting the flow duration curve and peak flow, while the LOCI method performed best in terms of the time-series-based indices. The case study is for an arid area in China based on a specific RCM and hydrologic model, but the methodology and some results can be applied to other areas and models.
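
    Of the precipitation methods compared, quantile mapping is the most compact to sketch: simulated values are mapped through the empirical CDFs of the simulation and the observations over a calibration period. A minimal empirical-QM illustration with NumPy; the synthetic gamma-distributed data stand in for RCM output and gauge observations:

```python
import numpy as np

def quantile_mapping(sim, sim_cal, obs_cal, n_q=100):
    """Empirical quantile mapping: map simulated values through the
    calibration-period CDFs so their distribution matches observations."""
    q = np.linspace(0, 1, n_q)
    sim_q = np.quantile(sim_cal, q)      # simulated quantiles (calibration)
    obs_q = np.quantile(obs_cal, q)      # observed quantiles (calibration)
    return np.interp(sim, sim_q, obs_q)  # piecewise-linear transfer function

rng = np.random.default_rng(42)
obs_cal = rng.gamma(2.0, 3.0, 5000)               # "observed" precipitation
sim_cal = rng.gamma(2.0, 3.0, 5000) * 1.4 + 1.0   # biased "RCM" output
sim_new = rng.gamma(2.0, 3.0, 1000) * 1.4 + 1.0   # new period to correct

corrected = quantile_mapping(sim_new, sim_cal, obs_cal)
print(obs_cal.mean(), sim_new.mean(), corrected.mean())  # bias largely removed
```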

  16. New model for prediction binary mixture of antihistamine decongestant using artificial neural networks and least squares support vector machine by spectrophotometry method

    NASA Astrophysics Data System (ADS)

    Mofavvaz, Shirin; Sohrabi, Mahmoud Reza; Nezamzadeh-Ejhieh, Alireza

    2017-07-01

    In the present study, artificial neural networks (ANNs) and least squares support vector machines (LS-SVM), as intelligent methods based on absorption spectra in the range of 230-300 nm, were used for determination of antihistamine decongestant contents. In the first step, one type of network (feed-forward back-propagation) from the artificial neural network family, with two different training algorithms, Levenberg-Marquardt (LM) and gradient descent with momentum and adaptive learning rate back-propagation (GDX), was employed and its performance evaluated. The performance of the LM algorithm was better than that of the GDX algorithm. In the second step, a radial basis network was utilized and the results compared with those of the previous network. In the last step, the other intelligent method, least squares support vector machine, was used to construct the antihistamine decongestant prediction model and the results were compared with those of the two aforementioned networks. The values of the statistical parameters mean square error (MSE), regression coefficient (R2), correlation coefficient (r), mean recovery (%) and relative standard deviation (RSD) were used for selecting the best model among these methods. Moreover, the proposed methods were compared to high-performance liquid chromatography (HPLC) as a reference method. A one-way analysis of variance (ANOVA) test at the 95% confidence level, applied to the results of the suggested and reference methods, showed that there were no significant differences between them.

  17. Discrimination of edible oils and fats by combination of multivariate pattern recognition and FT-IR spectroscopy: A comparative study between different modeling methods

    NASA Astrophysics Data System (ADS)

    Javidnia, Katayoun; Parish, Maryam; Karimi, Sadegh; Hemmateenejad, Bahram

    2013-03-01

    By using FT-IR spectroscopy, many researchers from different disciplines enrich the experimental depth of their research to obtain more precise information. Moreover, chemometrics techniques have boosted the use of IR instruments. In the present study we aimed to emphasize the power of FT-IR spectroscopy for discrimination between different oil samples (especially fat versus vegetable oils). Our data were also used to compare the performance of different classification methods. FT-IR transmittance spectra of oil samples (corn, canola, sunflower, soya, olive, and butter) were measured in the wave-number interval of 450-4000 cm-1. Classification analysis was performed utilizing PLS-DA, interval PLS-DA, extended canonical variate analysis (ECVA) and interval ECVA (iECVA) methods. The effect of data preprocessing by extended multiplicative signal correction was investigated. Whilst all employed methods could distinguish butter from vegetable oils, iECVA gave the best performance for both calibration and the external test set, with 100% sensitivity and specificity.

  18. Comparison of the performance of different DFT methods in the calculations of the molecular structure and vibration spectra of serotonin (5-hydroxytryptamine, 5-HT)

    NASA Astrophysics Data System (ADS)

    Yang, Yue; Gao, Hongwei

    2012-04-01

    Serotonin (5-hydroxytryptamine, 5-HT) is a monoamine neurotransmitter which plays an important role in treating acute or clinical stress. The comparative performance of different density functional theory (DFT) methods at various basis sets in predicting the molecular structure and vibration spectra of serotonin is reported. The calculation results of different methods, including mPW1PW91, HCTH, SVWN, PBEPBE, B3PW91 and B3LYP, with various basis sets, including LANL2DZ, SDD, LANL2MB, 6-31G, 6-311++G and 6-311+G*, were compared with the experimental data. It is remarkable that the SVWN/6-311++G and SVWN/6-311+G* levels afford the best quality in predicting the structure of serotonin. The results also indicate that the PBEPBE/LANL2DZ level shows better performance in predicting the vibration spectra of serotonin than the other DFT methods.

  19. Computer game-based and traditional learning method: a comparison regarding students’ knowledge retention

    PubMed Central

    2013-01-01

    Background Educational computer games are examples of computer-assisted learning objects, representing an educational strategy of growing interest. Given the changes in the digital world over the last decades, students of the current generation expect technology to be used in advancing their learning, requiring a change from traditional passive learning methodologies to an active multisensory experimental learning methodology. The objective of this study was to compare a computer game-based learning method with a traditional learning method, regarding learning gains and knowledge retention, as means of teaching head and neck Anatomy and Physiology to Speech-Language and Hearing pathology undergraduate students. Methods Students were randomized to participate in one of the learning methods and the data analyst was blinded to which method of learning the students had received. Students’ prior knowledge (i.e. before undergoing the learning method), short-term knowledge retention and long-term knowledge retention (i.e. six months after undergoing the learning method) were assessed with a multiple choice questionnaire. Students’ performance was compared across the three moments of assessment, both for the mean total score and for separate mean scores for the Anatomy questions and the Physiology questions. Results Students who received the game-based method performed better in the post-test assessment only when considering the Anatomy questions section. Students who received the traditional lecture performed better in both the post-test and the long-term post-test when considering the Anatomy and Physiology questions. Conclusions The game-based learning method is comparable to the traditional learning method in general and in short-term gains, while the traditional lecture still seems to be more effective at improving students’ short- and long-term knowledge retention. PMID:23442203

  20. Developing Appropriate Methods for Cost-Effectiveness Analysis of Cluster Randomized Trials

    PubMed Central

    Gomes, Manuel; Ng, Edmond S.-W.; Nixon, Richard; Carpenter, James; Thompson, Simon G.

    2012-01-01

    Aim. Cost-effectiveness analyses (CEAs) may use data from cluster randomized trials (CRTs), where the unit of randomization is the cluster, not the individual. However, most studies use analytical methods that ignore clustering. This article compares alternative statistical methods for accommodating clustering in CEAs of CRTs. Methods. Our simulation study compared the performance of statistical methods for CEAs of CRTs with 2 treatment arms. The study considered a method that ignored clustering—seemingly unrelated regression (SUR) without a robust standard error (SE)—and 4 methods that recognized clustering—SUR and generalized estimating equations (GEEs), both with robust SE, a “2-stage” nonparametric bootstrap (TSB) with shrinkage correction, and a multilevel model (MLM). The base case assumed CRTs with moderate numbers of balanced clusters (20 per arm) and normally distributed costs. Other scenarios included CRTs with few clusters, imbalanced cluster sizes, and skewed costs. Performance was reported as bias, root mean squared error (rMSE), and confidence interval (CI) coverage for estimating incremental net benefits (INBs). We also compared the methods in a case study. Results. Each method reported low levels of bias. Without the robust SE, SUR gave poor CI coverage (base case: 0.89 v. nominal level: 0.95). The MLM and TSB performed well in each scenario (CI coverage, 0.92–0.95). With few clusters, the GEE and SUR (with robust SE) had coverage below 0.90. In the case study, the mean INBs were similar across all methods, but ignoring clustering underestimated statistical uncertainty and the value of further research. Conclusions. MLMs and the TSB are appropriate analytical methods for CEAs of CRTs with the characteristics described. SUR and GEE are not recommended for studies with few clusters. PMID:22016450

  1. Traditional Instruction versus Virtual Reality Simulation: A Comparative Study of Phlebotomy Training among Nursing Students in Kuwait

    ERIC Educational Resources Information Center

    William, Abeer; Vidal, Victoria L.; John, Pamela

    2016-01-01

    This quasi-experimental study compared differences in phlebotomy performance on a live client, between a control group taught through the traditional method and an experimental group using virtual reality simulation. The study showed both groups had performed successfully, using the following metrics: number of reinsertions, pain factor, hematoma…

  2. Performance of Underprepared Students in Traditional versus Animation-Based Flipped-Classroom Settings

    ERIC Educational Resources Information Center

    Gregorius, R. Ma.

    2017-01-01

    Student performance in a flipped classroom with an animation-based content knowledge development system for the bottom third of the incoming first year college students was compared to that in a traditional lecture-based teaching method. 52% of these students withdrew from the traditionally taught General Chemistry course, compared to 22% in a…

  3. Comparison of variance estimators for meta-analysis of instrumental variable estimates

    PubMed Central

    Schmidt, AF; Hingorani, AD; Jefferis, BJ; White, J; Groenwold, RHH; Dudbridge, F

    2016-01-01

    Abstract Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two versions of the delta method (IV before or after pooling), four bootstrap estimators, a jack-knife estimator and a heteroscedasticity-consistent (HC) variance estimator were compared using simulation. Two types of meta-analyses were compared: a two-stage meta-analysis pooling results, and a one-stage meta-analysis pooling datasets. Results: Using a two-stage meta-analysis, coverage of the point estimate using bootstrapped estimators deviated from nominal levels at weak-instrument settings and/or outcome probabilities ≤ 0.10. The jack-knife estimator was the least biased resampling method, the HC estimator often failed at outcome probabilities ≤ 0.50, and overall the delta method estimators were the least biased. In the presence of between-study heterogeneity, the delta method before meta-analysis performed best. Using a one-stage meta-analysis, all methods performed equally well, and better than a two-stage meta-analysis of greater or equal size. Conclusions: In the presence of between-study heterogeneity, two-stage meta-analyses should preferentially use the delta method before meta-analysis. Weak instrument bias can be reduced by performing a one-stage meta-analysis. PMID:27591262

  4. [Multifactorial method for assessing the physical work capacity of mice].

    PubMed

    Dubovik, B V; Bogomazov, S D

    1987-01-01

    Based on the Kiplinger swimming test, criteria for evaluating animal performance were developed in experiments on (CBA X C57BL)F1 mice during repeated swimming of a standard distance, measuring power, volume of work, and the rate of fatigue development in relative units. From a study of the effects of sydnocarb, bemethyl and phenazepam on various parameters of the physical performance of mice, it was concluded that the proposed method provides a more informative evaluation of pharmacological effects on the physical performance of animals than methods based on recording the time taken to perform the load.

  5. Performance analysis of successive over relaxation method for solving glioma growth model

    NASA Astrophysics Data System (ADS)

    Hussain, Abida; Faye, Ibrahima; Muthuvalu, Mohana Sundaram

    2016-11-01

    Brain tumor is one of the most prevalent lethal cancers in the world. In light of present knowledge of the properties of gliomas, mathematical models have been developed to quantify the proliferation and invasion dynamics of glioma. In this study, a one-dimensional glioma growth model is considered, and the finite difference method is used to discretize the problem. Then, two stationary methods, namely Gauss-Seidel (GS) and Successive Over Relaxation (SOR), are used to solve the governing algebraic system. The performance of the methods is evaluated in terms of the number of iterations and the computational time. On the basis of this performance analysis, the SOR method is shown to be superior to the GS method.
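
    The two stationary solvers differ only by a relaxation factor, as the following sketch of SOR on a small diffusion-type tridiagonal system (the kind a finite-difference discretization of a 1-D growth model produces) shows; with omega = 1 the same loop is exactly Gauss-Seidel. The matrix, tolerance and relaxation factor are illustrative, not the study's settings:

```python
import numpy as np

def sor(A, b, omega, tol=1e-8, max_iter=10_000):
    """Successive Over Relaxation; omega=1.0 reduces to Gauss-Seidel."""
    n = len(b)
    x = np.zeros(n)
    for k in range(1, max_iter + 1):
        for i in range(n):
            # Off-diagonal sum: updated values to the left, old to the right.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] += omega * ((b[i] - sigma) / A[i, i] - x[i])
        if np.linalg.norm(b - A @ x) < tol:
            return x, k
    return x, max_iter

# Diagonally dominant tridiagonal system from a 1-D finite-difference stencil.
n = 50
A = (np.diag(np.full(n, 2.2))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
b = np.ones(n)
for omega in (1.0, 1.5):   # Gauss-Seidel vs over-relaxed
    _, iters = sor(A, b, omega)
    print(f"omega={omega}: {iters} iterations")
```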

  6. Principal Component Analysis for pulse-shape discrimination of scintillation radiation detectors

    NASA Astrophysics Data System (ADS)

    Alharbi, T.

    2016-01-01

    In this paper, we report on the application of Principal Component Analysis (PCA) for pulse-shape discrimination (PSD) in scintillation radiation detectors. The details of the method are described and the performance of the method is experimentally examined by discriminating between neutrons and gamma-rays with a liquid scintillation detector in a mixed radiation field. The performance of the method is also compared against that of the conventional charge-comparison method, demonstrating the superior performance of the method, particularly in the low light output range. PCA has the important advantage of automatic extraction of the pulse-shape characteristics, which makes the PSD method directly applicable to various scintillation detectors without the need to adjust a PSD parameter.
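
    In outline, the approach projects each recorded pulse onto the leading principal components of the pulse ensemble and separates particle types in that low-dimensional space. A hedged NumPy sketch in which synthetic double-exponential pulses stand in for detector waveforms:

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 200)

def pulse(slow_fraction):
    """Toy scintillation pulse: fast + slow decay components plus noise."""
    p = (1 - slow_fraction) * np.exp(-t / 0.02) + slow_fraction * np.exp(-t / 0.15)
    return p / p.max() + rng.normal(0, 0.01, t.size)

# Gamma-like pulses carry a smaller slow component than neutron-like ones.
pulses = np.array([pulse(0.1) for _ in range(300)] +
                  [pulse(0.3) for _ in range(300)])

# PCA via SVD of the mean-centered pulse matrix.
centered = pulses - pulses.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:2].T     # project onto the first two components

# The first PC score alone already separates the two populations.
print(scores[:300, 0].mean(), scores[300:, 0].mean())
```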

  7. Scalable parallel elastic-plastic finite element analysis using a quasi-Newton method with a balancing domain decomposition preconditioner

    NASA Astrophysics Data System (ADS)

    Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu

    2018-04-01

    A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.

  8. A comparative analysis of DBSCAN, K-means, and quadratic variation algorithms for automatic identification of swallows from swallowing accelerometry signals.

    PubMed

    Dudik, Joshua M; Kurosu, Atsuko; Coyle, James L; Sejdić, Ervin

    2015-04-01

    Cervical auscultation with high-resolution sensors is currently under consideration as a method of automatically screening for specific swallowing abnormalities. To be clinically useful without human involvement, any devices based on cervical auscultation should be able to detect specified swallowing events in an automatic manner. In this paper, we comparatively analyze the density-based spatial clustering of applications with noise algorithm (DBSCAN), a k-means based algorithm, and an algorithm based on quadratic variation as methods of differentiating periods of swallowing activity from periods of time without swallows. These algorithms utilized swallowing vibration data exclusively and the results were compared to a gold standard measure of swallowing duration. Data were collected from 23 subjects who were actively suffering from swallowing difficulties. Comparing the performance of the DBSCAN algorithm with a proven segmentation algorithm that utilizes k-means clustering demonstrated that the DBSCAN algorithm had a higher sensitivity and correctly segmented more swallows. Comparing its performance with a threshold-based algorithm that utilized the quadratic variation of the signal showed that the DBSCAN algorithm offered no direct increase in performance. However, it offered several other benefits, including a faster run time and more consistent performance between patients. All algorithms showed noticeable deviation from the endpoints provided by a videofluoroscopy examination as well as reduced sensitivity. In summary, we showed that the DBSCAN algorithm is a viable method for detecting the occurrence of a swallowing event using cervical auscultation signals, but significant work must be done to improve its performance before it can be implemented in an unsupervised manner. Copyright © 2015 Elsevier Ltd. All rights reserved.
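
    The segmentation idea behind the DBSCAN variant, clustering high-energy time windows and treating noise points as non-swallow periods, can be sketched with scikit-learn. The features (time paired with windowed RMS energy) and the eps/min_samples values are illustrative, not the study's settings:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)
fs = 100                                    # Hz, assumed sampling rate
signal = rng.normal(0, 0.05, 60 * fs)       # one minute of baseline noise
for start in (10, 30, 48):                  # three simulated swallows
    i = start * fs
    signal[i:i + 2 * fs] += rng.normal(0, 0.6, 2 * fs)

# Windowed RMS energy as the clustering feature, paired with time.
win = fs // 2
rms = np.array([np.sqrt(np.mean(signal[i:i + win] ** 2))
                for i in range(0, signal.size - win, win)])
t = np.arange(rms.size) * (win / fs)
active = rms > 3 * np.median(rms)           # keep only high-energy windows

X = np.column_stack([t[active], rms[active]])
labels = DBSCAN(eps=1.5, min_samples=2).fit_predict(X)
print(len(set(labels) - {-1}), "candidate swallows found")  # expect 3
```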

  9. Performance of FFT methods in local gravity field modelling

    NASA Technical Reports Server (NTRS)

    Forsberg, Rene; Solheim, Dag

    1989-01-01

    Fast Fourier transform (FFT) methods provide a fast and efficient means of processing large amounts of gravity or geoid data in local gravity field modelling. The FFT methods, however, have a number of theoretical and practical limitations, especially the use of a flat-earth approximation and the requirement for gridded data. In spite of this, the methods often yield excellent results in practice when compared to other, more rigorous (and computationally expensive) methods, such as least-squares collocation. The good performance of the FFT methods illustrates that the theoretical approximations are offset by the capability of taking into account more data in larger areas, which is especially important for geoid predictions. For best results, good data gridding algorithms are essential; in practice, truncated collocation approaches may be used. For large areas at high latitudes the gridding must be done using suitable map projections such as UTM, to avoid trivial errors caused by the meridian convergence. The FFT methods are compared to ground truth data in New Mexico (xi, eta from delta g), Scandinavia (N from delta g, where the geoid fits to 15 cm over 2000 km), and areas of the Atlantic (delta g from satellite altimetry using Wiener filtering). In all cases the FFT methods yield results comparable or superior to those of other methods.
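
    The speed of FFT methods comes from evaluating convolution-type integrals over a grid in the frequency domain. A generic SciPy illustration (not the geodetic kernels used in the paper) showing that the FFT route reproduces direct convolution at a fraction of the cost:

```python
import numpy as np
from scipy.signal import convolve2d, fftconvolve

rng = np.random.default_rng(0)
data = rng.normal(size=(128, 128))      # synthetic gridded anomaly data

# A smooth, distance-decaying kernel standing in for a geodetic kernel.
y, x = np.mgrid[-16:17, -16:17]
kernel = 1.0 / (1.0 + x ** 2 + y ** 2)

direct = convolve2d(data, kernel, mode="same", boundary="fill")
fast = fftconvolve(data, kernel, mode="same")
print(np.max(np.abs(direct - fast)))    # agreement to numerical precision
```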

  10. A study on expertise of agents and its effects on cooperative Q-learning.

    PubMed

    Araabi, Babak Nadjar; Mastoureshgh, Sahar; Ahmadabadi, Majid Nili

    2007-04-01

    Cooperation in learning (CL) can be realized in a multiagent system if agents are capable of learning from both their own experiments and other agents' knowledge and expertise. In CL, these extra resources are exploited for higher efficiency and faster learning compared with individual learning (IL). In the real world, however, implementation of CL is not a straightforward task, in part due to possible differences in areas of expertise (AOEs). In this paper, homogeneous reinforcement-learning agents are considered in an environment with multiple goals or tasks. As a result, they become expert in different domains with different amounts of expertness. Each agent uses a one-step Q-learning algorithm and is capable of exchanging its Q-table with those of its teammates. Two crucial questions are addressed in this paper: "How can the AOE of an agent be extracted?" and "How can agents improve their performance in CL by knowing their AOEs?" An algorithm is developed to extract the AOE based on state transitions as a gold standard from a behavioral point of view. Moreover, it is discussed that the AOE can be implicitly obtained through agents' expertness at the state level. Three new methods for CL through the combination of Q-tables are developed and examined for overall performance after CL. The performance of the developed methods is compared with that of IL, strategy sharing (SS), and weighted SS (WSS). The obtained results show the superior performance of the AOE-based methods compared to that of existing CL methods, which do not use the notion of AOE. These results are very encouraging in support of the idea that cooperation based on the AOE performs better than general CL methods.
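
    The core of strategy-sharing-style CL, combining teammates' Q-tables with expertness weights, which the AOE-based methods refine to state-level weights, can be sketched in a few lines of NumPy. The expertness values and table shapes are invented for illustration:

```python
import numpy as np

def combine_q_tables(q_tables, expertness):
    """Weighted strategy sharing: each agent's new Q-table is the
    expertness-weighted average of all teammates' tables."""
    w = np.asarray(expertness, dtype=float)
    w = w / w.sum()                            # normalize the weights
    return np.tensordot(w, np.asarray(q_tables), axes=1)

def combine_q_tables_per_state(q_tables, state_expertness):
    """AOE-flavored variant: weights vary per state, so an agent
    dominates only in states where it is the expert."""
    q = np.asarray(q_tables)                   # agents x states x actions
    w = np.asarray(state_expertness, float)    # agents x states
    w = w / w.sum(axis=0, keepdims=True)
    return np.einsum("as,asx->sx", w, q)

n_agents, n_states, n_actions = 3, 8, 4
rng = np.random.default_rng(5)
qs = rng.normal(size=(n_agents, n_states, n_actions))
print(combine_q_tables(qs, [5.0, 1.0, 1.0]).shape)             # (8, 4)
print(combine_q_tables_per_state(qs, rng.random((3, 8))).shape)
```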

  11. Parametric and experimental analysis using a power flow approach

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.

    1988-01-01

    Having defined and developed a structural power flow approach for the analysis of structure-borne transmission of structural vibrations, we use the technique to analyze the influence of structural parameters on the transmitted energy. As a base for comparison, the parametric analysis is first performed using a Statistical Energy Analysis approach and the results are compared with those obtained using the power flow approach. The advantages of using structural power flow are thus demonstrated by comparing the types of results obtained by the two methods. Additionally, to demonstrate the advantages of using the power flow method and to show that the power flow results represent a direct physical parameter that can be measured on a typical structure, an experimental investigation of structural power flow is also presented. Results are presented for an L-shaped beam for which an analytical solution has already been obtained. Furthermore, the various methods available to measure vibrational power flow are compared to investigate the advantages and disadvantages of each method.

  12. Improving medical diagnosis reliability using Boosted C5.0 decision tree empowered by Particle Swarm Optimization.

    PubMed

    Pashaei, Elnaz; Ozen, Mustafa; Aydin, Nizamettin

    2015-08-01

    Improving the accuracy of supervised classification algorithms in biomedical applications is an active area of research. In this study, we improve the performance of the Particle Swarm Optimization combined with C4.5 decision tree (PSO+C4.5) classifier by applying a Boosted C5.0 decision tree as the fitness function. To evaluate the effectiveness of our proposed method, it is implemented on one microarray dataset and five different medical datasets obtained from the UCI machine learning databases. Moreover, the results of the PSO + Boosted C5.0 implementation are compared to eight well-known benchmark classification methods (PSO+C4.5, a support vector machine with a radial basis function kernel, Classification And Regression Tree (CART), C4.5 decision tree, C5.0 decision tree, Boosted C5.0 decision tree, naive Bayes and weighted k-nearest neighbor). A repeated five-fold cross-validation method was used to assess the performance of the classifiers. Experimental results show that our proposed method not only improves the performance of PSO+C4.5 but also obtains higher classification accuracy compared to the other classification methods.

  13. The Effect of Laminar Flow on Rotor Hover Performance

    NASA Technical Reports Server (NTRS)

    Overmeyer, Austin D.; Martin, Preston B.

    2017-01-01

    The topic of laminar flow effects on hover performance is introduced with respect to some historical efforts where laminar flow was either measured or attempted. An analysis method is outlined using a combined blade element momentum method coupled to an airfoil analysis method, which includes the full e(sup N) transition model. The analysis results compared well with the measured hover performance, including the measured location of transition on both the upper and lower blade surfaces. The analysis method is then used to understand the upper limits of hover efficiency as a function of disk loading. The impact of laminar flow is higher at low disk loading, but significant improvement in terms of power loading appears possible even at high disk loadings approaching 20 psf. An optimum planform design equation is derived for cases of zero profile drag and finite drag levels. These results are intended to be a guide for design studies and a benchmark against which to compare higher-fidelity analysis results. The details of the analysis method are given to enable other researchers to use the same approach for comparison to other approaches.

  14. Smartphone Text Input Method Performance, Usability, and Preference With Younger and Older Adults.

    PubMed

    Smith, Amanda L; Chaparro, Barbara S

    2015-09-01

    User performance, perceived usability, and preference for five smartphone text input methods were compared for younger and older novice adults. Smartphones are used for a variety of functions other than phone calls, including text messaging, e-mail, and web browsing. Research comparing performance with methods of text input on smartphones reveals a high degree of variability in reported measures, procedures, and results. This study reports on a direct comparison of five of the most common input methods among a population of younger and older adults who had no experience with any of the methods. Fifty adults (25 younger, 18-35 years; 25 older, 60-84 years) completed a text entry task using five text input methods (physical Qwerty, onscreen Qwerty, tracing, handwriting, and voice). Entry and error rates, perceived usability, and preference were recorded. Both age groups input text equally fast using voice input, but older adults were slower than younger adults using all other methods. Both age groups had low error rates when using physical Qwerty and voice, but older adults committed more errors with the other three methods. Both younger and older adults preferred voice and physical Qwerty input to the remaining methods. Handwriting consistently performed the worst and was rated lowest by both groups. Voice and physical Qwerty input methods proved to be the most effective for both younger and older adults, and handwriting input was the least effective overall. These findings have implications for the design of future smartphone text input methods and devices, particularly for older adults. © 2015, Human Factors and Ergonomics Society.

  15. Sparse matrix methods research using the CSM testbed software system

    NASA Technical Reports Server (NTRS)

    Chu, Eleanor; George, J. Alan

    1989-01-01

    Research is described on sparse matrix techniques for the Computational Structural Mechanics (CSM) Testbed. The primary objective was to compare the performance of state-of-the-art techniques for solving sparse systems with those that are currently available in the CSM Testbed. Thus, one of the first tasks was to become familiar with the structure of the testbed, and to install some or all of the SPARSPAK package in the testbed. A suite of subroutines to extract from the data base the relevant structural and numerical information about the matrix equations was written, and all the demonstration problems distributed with the testbed were successfully solved. These codes were documented, and performance studies comparing the SPARSPAK technology to the methods currently in the testbed were completed. In addition, some preliminary studies were done comparing some recently developed out-of-core techniques with the performance of the testbed processor INV.

  16. Breast volume assessment: comparing five different techniques.

    PubMed

    Bulstrode, N; Bellamy, E; Shrotria, S

    2001-04-01

    Breast volume assessment is not routinely performed pre-operatively because as yet there is no accepted technique. A variety of methods have been published, but this is the first study to compare these techniques. We compared volume measurements obtained from mammograms (previously validated against mastectomy specimens) with estimates of volume obtained from four other techniques: thermoplastic moulding, magnetic resonance imaging, Archimedes' principle and anatomical measurements. We also assessed the acceptability of each method to the patient. Measurements were performed on 10 women, which produced results for 20 breasts. We were able to calculate regression lines relating volume measurements obtained from mammography to those of the other four methods (all units in cc): (1) magnetic resonance imaging (MRI), 379 + (0.75 x MRI) [r=0.48]; (2) thermoplastic moulding, 132 + (1.46 x moulding) [r=0.82]; (3) anatomical measurements, 168 + (1.55 x anatomical) [r=0.83]; (4) Archimedes' principle, 359 + (0.6 x Archimedes) [r=0.61]. The regression curves for the different techniques are variable and it is difficult to reliably compare results. A standard method of volume measurement should be used when comparing volumes before and after intervention or between individual patients, and it is unreliable to compare volume measurements made using different methods. Calculating breast volume from mammography has previously been compared to mastectomy samples and shown to be reasonably accurate. However, we feel thermoplastic moulding shows promise and should be investigated further, as it gives not only a volume assessment but also a three-dimensional impression of the breast shape, which may be valuable in assessing cosmesis following breast-conserving surgery.

  17. Exploration of analysis methods for diagnostic imaging tests: problems with ROC AUC and confidence scores in CT colonography.

    PubMed

    Mallett, Susan; Halligan, Steve; Collins, Gary S; Altman, Doug G

    2014-01-01

    Different methods of evaluating diagnostic performance when comparing diagnostic tests may lead to different results. We compared two such approaches, sensitivity and specificity versus the area under the Receiver Operating Characteristic curve (ROC AUC), for the evaluation of CT colonography for the detection of polyps, either with or without computer-assisted detection. In a multireader, multicase study of 10 readers and 107 cases we compared sensitivity and specificity, using radiological reporting of the presence or absence of polyps, to ROC AUC calculated from confidence scores concerning the presence of polyps. Both methods were assessed against a reference standard. Here we focus on five readers, selected to illustrate issues in design and analysis. We compared diagnostic measures within readers, showing that differences in results are due to the statistical methods. Reader performance varied widely depending on whether sensitivity and specificity or ROC AUC was used. There were several problems in using confidence scores: in assigning scores to all cases; in the use of zero scores when no polyps were identified; in the bimodal, non-normal distribution of scores; in fitting ROC curves, owing to extrapolation beyond the study data; and in the undue influence of a few false-positive results. Variation due to the use of different ROC methods exceeded the differences between test results for ROC AUC. The confidence scores recorded in our study violated many assumptions of ROC AUC methods, rendering these methods inappropriate. The problems we identified will apply to other detection studies using confidence scores. We found sensitivity and specificity to be a more reliable and clinically appropriate method for comparing diagnostic tests.
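
    The two evaluation approaches contrasted above are easy to compute side by side. A toy scikit-learn sketch; the labels and confidence scores are invented, with a spike of zero scores mimicking the "no polyp identified" problem the authors describe:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(11)
y = rng.integers(0, 2, 200)                     # 1 = polyp present (reference)
# Reader confidence scores: many zeros when nothing was identified,
# otherwise a noisy score loosely related to the truth.
scores = np.where(rng.random(200) < 0.4, 0.0,
                  np.clip(60 * y + rng.normal(30, 25, 200), 0, 100))
reported = scores >= 50                         # binary radiological report

sens = (reported & (y == 1)).sum() / (y == 1).sum()
spec = (~reported & (y == 0)).sum() / (y == 0).sum()
auc = roc_auc_score(y, scores)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} AUC={auc:.2f}")
```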

  18. Study of skin model and geometry effects on thermal performance of thermal protective fabrics

    NASA Astrophysics Data System (ADS)

    Zhu, Fanglong; Ma, Suqin; Zhang, Weiyuan

    2008-05-01

    Thermal protective clothing has steadily improved over the years as new materials and improved designs have reached the market. A significant method that has brought these improvements to the fire service is the NFPA 1971 standard on structural fire fighters’ protective clothing. However, this testing often neglects the effects of cylindrical geometry on heat transmission in flame-resistant fabrics. This paper deals with methods to develop a cylindrical-geometry testing apparatus incorporating a novel skin bioheat transfer model to test flame-resistant fabrics used in firefighting. Results show that fabrics which shrink during the test can have reduced thermal protective performance compared with the qualities measured with a planar-geometry tester. Results for the temperature differences between the skin simulant sensors of the planar and cylindrical testers are also compared. This test method provides a new technique to accurately and precisely characterize the thermal performance of thermal protective fabrics.

  19. Personalized Modeling for Prediction with Decision-Path Models

    PubMed Central

    Visweswaran, Shyam; Ferreira, Antonio; Ribeiro, Guilherme A.; Oliveira, Alexandre C.; Cooper, Gregory F.

    2015-01-01

    Deriving predictive models in medicine typically relies on a population approach, where a single model is developed from a dataset of individuals. In this paper we describe and evaluate a personalized approach in which we construct a new type of decision tree model, called a decision-path model, that takes advantage of the particular features of a given person of interest. We introduce three personalized methods that derive personalized decision-path models. We compared the performance of these methods to that of Classification And Regression Trees (CART), a population decision-tree method, in predicting seven different outcomes in five medical datasets. Two of the three personalized methods performed statistically significantly better on area under the ROC curve (AUC) and Brier skill score compared to CART. The personalized approach of learning decision-path models is a new approach to predictive modeling that can perform better than a population approach. PMID:26098570

  20. Application of the dual-kinetic-balance sets in the relativistic many-body problem of atomic structure

    NASA Astrophysics Data System (ADS)

    Beloy, Kyle; Derevianko, Andrei

    2008-05-01

    The dual-kinetic-balance (DKB) finite basis set method for solving the Dirac equation for hydrogen-like ions [V. M. Shabaev et al., Phys. Rev. Lett. 93, 130405 (2004)] is extended to problems with a non-local spherically-symmetric Dirac-Hartree-Fock potential. We implement the DKB method using B-spline basis sets and compare its performance with the widely-employed approach of the Notre Dame (ND) group [W.R. Johnson, S.A. Blundell, J. Sapirstein, Phys. Rev. A 37, 307-15 (1988)]. We compare the performance of the ND and DKB methods by computing various properties of the Cs atom: energies, hyperfine integrals, the parity-non-conserving amplitude of the 6s1/2-7s1/2 transition, and the second-order many-body correction to the removal energy of the valence electrons. We find that for a comparable size of the basis set the accuracy of both methods is similar for matrix elements accumulated far from the nuclear region. However, for atomic properties determined by small distances, the DKB method outperforms the ND approach.

  1. Comparison of power curve monitoring methods

    NASA Astrophysics Data System (ADS)

    Cambron, Philippe; Masson, Christian; Tahan, Antoine; Torres, David; Pelletier, Francis

    2017-11-01

    Performance monitoring is an important aspect of operating wind farms. This can be done through power curve monitoring (PCM) of wind turbines (WTs). In past years, important work has been conducted on PCM. Various methodologies have been proposed, each with interesting results. However, it is difficult to compare these methods because each has been developed using its own data sets. The objective of the present work is to compare some of the proposed PCM methods using common data sets. The metric used to compare the PCM methods is the time needed to detect a change in the power curve. Two power curve models are covered to establish the effect the model type has on the monitoring outcomes. Each model was tested with two control charts. Other methodologies and metrics proposed in the literature for power curve monitoring, such as areas under the power curve and the use of statistical copulas, have also been covered. Results demonstrate that model-based PCM methods are more reliable at detecting a performance change than the other methodologies and that the effectiveness of the control chart depends on the type of shift observed.
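
    A model-based PCM loop of the kind compared here reduces to fitting a power-curve model on healthy data and charting the residuals of new data until the chart signals. A minimal sketch using a binned power-curve model and a Shewhart-style chart on batch-mean residuals; all data, the injected degradation and the thresholds are synthetic:

```python
import numpy as np

def fit_binned_curve(wind, power, bins):
    """Power curve model: mean power in each wind-speed bin ('binning' method)."""
    idx = np.digitize(wind, bins)
    return np.array([power[idx == i].mean() for i in range(1, len(bins))])

def predict(curve, wind, bins):
    i = np.clip(np.digitize(wind, bins) - 1, 0, len(curve) - 1)
    return curve[i]

rng = np.random.default_rng(2)
bins = np.linspace(3, 15, 13)
wind = rng.uniform(3, 15, 5000)
power = np.minimum(1.0, (wind / 12) ** 3) + rng.normal(0, 0.03, wind.size)

curve = fit_binned_curve(wind[:3000], power[:3000], bins)       # healthy period
resid_train = power[:3000] - predict(curve, wind[:3000], bins)
resid = power[3000:] - predict(curve, wind[3000:], bins)
resid[1000:] -= 0.05               # inject a 5% performance degradation

# Shewhart-style chart on batch-mean residuals.
batch = resid.reshape(-1, 100).mean(axis=1)
limit = 3 * resid_train.std() / np.sqrt(100)
print("first signalling batch:", int(np.argmax(np.abs(batch) > limit)))
```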

  2. Dynamic Time Warping compared to established methods for validation of musculoskeletal models.

    PubMed

    Gaspar, Martin; Welke, Bastian; Seehaus, Frank; Hurschler, Christof; Schwarze, Michael

    2017-04-11

    By means of multi-body musculoskeletal simulation, important variables such as internal joint forces and moments can be estimated that cannot be measured directly. Validation can be performed by qualitative or quantitative methods. Especially when comparing time-dependent signals, many methods do not perform well and validation is often limited to qualitative approaches. The aim of the present study was to investigate the capabilities of the Dynamic Time Warping (DTW) algorithm for comparing time series, which can quantify phase as well as amplitude errors. We contrast the sensitivity of DTW with other established metrics: the Pearson correlation coefficient, cross-correlation, the metric according to Geers, RMSE and normalized RMSE (nRMSE). This study is based on two data sets, where one data set represents direct validation and the other represents indirect validation. Direct validation was performed in the context of clinical gait analysis on trans-femoral amputees fitted with a six-component force-moment sensor; measured forces and moments from the amputees' socket prostheses were compared to simulated forces and moments. Indirect validation was performed in the context of surface EMG measurements on a cohort of healthy subjects, with measurements taken of seven muscles of the leg, which were compared to simulated muscle activations. Regarding direct validation, a positive linear relation can be seen between the results of RMSE and nRMSE and those of DTW. For indirect validation, a negative linear relation exists with Pearson correlation and cross-correlation. We propose the DTW algorithm for use in both direct and indirect quantitative validation, as it correlates well with the methods that are most suitable for each task. However, in direct validation it should be used together with methods yielding a dimensional error value, in order to make results easier to interpret. Copyright © 2017 Elsevier Ltd. All rights reserved.
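
    DTW itself is a short dynamic program: it finds the minimum-cost monotone alignment between two signals and therefore penalizes amplitude and phase errors jointly. A minimal NumPy sketch of the plain (unconstrained) DTW distance, with invented test signals:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D signals."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of match, insertion, deletion: a monotone alignment.
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

t = np.linspace(0, 2 * np.pi, 100)
reference = np.sin(t)
shifted = np.sin(t - 0.4)            # same shape, phase error
offset = np.sin(t) + 0.3             # same phase, amplitude error
print(dtw_distance(reference, shifted))  # small: warping absorbs the shift
print(dtw_distance(reference, offset))   # larger: amplitude error remains
```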

  3. Effects of four recovery methods on repeated maximal rock climbing performance.

    PubMed

    Heyman, Elsa; DE Geus, Bas; Mertens, Inge; Meeusen, Romain

    2009-06-01

    Considering the development of rock climbing as a competitive sport, we aimed to investigate the influence of four recovery methods on subsequent maximal climbing performance. In a randomly assigned crossover design, 13 female well-trained climbers (27.1 +/- 8.9 yr) came to the climbing center on four occasions separated by 1 wk. On each occasion, they performed two climbing tests (C1 and C2) until volitional exhaustion on a prepracticed route (overhanging wall, level 6b). These two tests were separated by 20 min of recovery. Four recovery methods were used in randomized order: passive recovery, active recovery (cycle ergometer, 30-40 W), electromyostimulation of the forearm muscles (bisymmetric TENS current), or cold water immersion of the forearms and arms (three periods of 5 min at 15 +/- 1 degrees C). Climbing test performance was reflected by the number of arm movements and climb duration. Using active recovery and cold water immersion, performance at C2 was maintained in comparison with C1, whereas C2 performance was impaired compared with C1 (P < 0.01) using electromyostimulation and passive recovery (recovery method-by-climb interaction, P < 0.05). Blood lactate decreased during recovery, with the greatest decrease occurring during active recovery (time-by-recovery method interaction, P < 0.001). Arm and forearm skin temperatures were lower throughout the cold water immersion compared with the other three methods (P < 0.001). Active recovery and cold water immersion are two means of preserving performance when repeating acute exhausting climbing trials in female climbers. These positive effects are accompanied by greater lactate removal and a decrease in subcutaneous tissue temperatures, respectively.

  4. Glycemic penalty index for adequately assessing and comparing different blood glucose control algorithms

    PubMed Central

    Van Herpe, Tom; De Brabanter, Jos; Beullens, Martine; De Moor, Bart; Van den Berghe, Greet

    2008-01-01

    Introduction Blood glucose (BG) control performed by intensive care unit (ICU) nurses is becoming standard practice for critically ill patients. New (semi-automated) 'BG control' algorithms (or 'insulin titration' algorithms) are under development, but these require stringent validation before they can replace the currently used algorithms. Existing methods for objectively comparing different insulin titration algorithms show weaknesses. In the current study, a new approach for appropriately assessing the adequacy of different algorithms is proposed. Methods Two ICU patient populations (with different baseline characteristics) were studied, both treated with a similar 'nurse-driven' insulin titration algorithm targeting BG levels of 80 to 110 mg/dl. A new method for objectively evaluating BG deviations from normoglycemia was founded on a smooth penalty function. Next, the performance of this new evaluation tool was compared with the current standard assessment methods, on an individual as well as a population basis. Finally, the impact of four selected parameters (the average BG sampling frequency, the duration of algorithm application, the severity of disease, and the type of illness) on the performance of an insulin titration algorithm was determined by multiple regression analysis. Results The glycemic penalty index (GPI) was proposed as a tool for assessing the overall glycemic control behavior in ICU patients. The GPI of a patient is the average of all penalties that are individually assigned to each measured BG value based on the optimized smooth penalty function. The computation of this index returns a number between 0 (no penalty) and 100 (the highest penalty). For some patients, the assessment of the BG control behavior using the traditional standard evaluation methods was different from the evaluation with GPI. Two parameters were found to have a significant impact on GPI: the BG sampling frequency and the duration of algorithm application. A higher BG sampling frequency and a longer algorithm application duration resulted in an apparently better performance, as indicated by a lower GPI. Conclusion The GPI is an alternative method for evaluating the performance of BG control algorithms. The blood glucose sampling frequency and the duration of algorithm application should be similar when comparing algorithms. PMID:18302732
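
    Once a penalty function is fixed, the GPI is simply the mean penalty over a patient's measurements. The sketch below assumes a made-up symmetric quadratic penalty around the 80-110 mg/dl target band, capped at 100; the paper's optimized smooth penalty function differs (it is, in particular, asymmetric between hypo- and hyperglycemia):

```python
import numpy as np

def penalty(bg, low=80.0, high=110.0):
    """Illustrative smooth penalty: 0 inside the 80-110 mg/dl target band,
    growing quadratically with distance outside it, capped at 100.
    (Stand-in only; the published GPI uses an optimized, asymmetric curve.)"""
    dist = np.where(bg < low, low - bg, np.where(bg > high, bg - high, 0.0))
    return np.minimum(100.0, 0.01 * dist ** 2)

def gpi(bg_values):
    """Glycemic penalty index: mean penalty over all BG measurements."""
    return float(np.mean(penalty(np.asarray(bg_values, dtype=float))))

tight = [92, 105, 88, 110, 97, 84]          # well-controlled patient
swinging = [60, 145, 95, 190, 70, 120]      # hypo- and hyperglycemic excursions
print(gpi(tight), gpi(swinging))            # 0.0 vs a clearly higher index
```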

  5. Woodstove Emission Sampling Methods Comparability Analysis and In-situ Evaluation of New Technology Woodstoves.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simons, Carl A.

    1988-06-01

    One major objective of this study was to compare several woodstove particulate emission sampling methods under laboratory and in-situ conditions. The laboratory work compared the EPA Method 5H, EPA Method 5G, and OMNI Automated Woodstove Emission Sampler (AWES)/Data LOG'r particulate emission sampling systems. A second major objective of the study was to evaluate the performance of two integral catalytic, two low-emission non-catalytic, and two conventional technology woodstoves under in-situ conditions with the AWES/Data LOG'r system. The AWES/Data LOG'r and EPA Method 5G sampling systems were also compared in an in-situ test on one of the integral catalytic woodstove models. 7 figs., 12 tabs.

  6. Computer game-based and traditional learning method: a comparison regarding students' knowledge retention.

    PubMed

    Rondon, Silmara; Sassi, Fernanda Chiarion; Furquim de Andrade, Claudia Regina

    2013-02-25

    Educational computer games are examples of computer-assisted learning objects, representing an educational strategy of growing interest. Given the changes in the digital world over the last decades, students of the current generation expect technology to be used in advancing their learning, requiring a change from traditional passive learning methodologies to an active multisensory experimental learning methodology. The objective of this study was to compare a computer game-based learning method with a traditional learning method, regarding learning gains and knowledge retention, as means of teaching head and neck Anatomy and Physiology to Speech-Language and Hearing pathology undergraduate students. Students were randomized to participate in one of the learning methods and the data analyst was blinded to which method of learning the students had received. Students' prior knowledge (i.e. before undergoing the learning method), short-term knowledge retention and long-term knowledge retention (i.e. six months after undergoing the learning method) were assessed with a multiple choice questionnaire. Students' performance was compared across the three moments of assessment, both for the mean total score and for separate mean scores for the Anatomy questions and the Physiology questions. Students who received the game-based method performed better in the post-test assessment only when considering the Anatomy questions section. Students who received the traditional lecture performed better in both the post-test and the long-term post-test when considering the Anatomy and Physiology questions. The game-based learning method is comparable to the traditional learning method in general and in short-term gains, while the traditional lecture still seems to be more effective at improving students' short- and long-term knowledge retention.

  7. Developing appropriate methods for cost-effectiveness analysis of cluster randomized trials.

    PubMed

    Gomes, Manuel; Ng, Edmond S-W; Grieve, Richard; Nixon, Richard; Carpenter, James; Thompson, Simon G

    2012-01-01

    Cost-effectiveness analyses (CEAs) may use data from cluster randomized trials (CRTs), where the unit of randomization is the cluster, not the individual. However, most studies use analytical methods that ignore clustering. This article compares alternative statistical methods for accommodating clustering in CEAs of CRTs. Our simulation study compared the performance of statistical methods for CEAs of CRTs with 2 treatment arms. The study considered a method that ignored clustering--seemingly unrelated regression (SUR) without a robust standard error (SE)--and 4 methods that recognized clustering--SUR and generalized estimating equations (GEEs), both with robust SE, a "2-stage" nonparametric bootstrap (TSB) with shrinkage correction, and a multilevel model (MLM). The base case assumed CRTs with moderate numbers of balanced clusters (20 per arm) and normally distributed costs. Other scenarios included CRTs with few clusters, imbalanced cluster sizes, and skewed costs. Performance was reported as bias, root mean squared error (rMSE), and confidence interval (CI) coverage for estimating incremental net benefits (INBs). We also compared the methods in a case study. Each method reported low levels of bias. Without the robust SE, SUR gave poor CI coverage (base case: 0.89 v. nominal level: 0.95). The MLM and TSB performed well in each scenario (CI coverage, 0.92-0.95). With few clusters, the GEE and SUR (with robust SE) had coverage below 0.90. In the case study, the mean INBs were similar across all methods, but ignoring clustering underestimated statistical uncertainty and the value of further research. MLMs and the TSB are appropriate analytical methods for CEAs of CRTs with the characteristics described. SUR and GEE are not recommended for studies with few clusters.

  8. Steel Rack Connections: Identification of Most Influential Factors and a Comparison of Stiffness Design Methods.

    PubMed

    Shah, S N R; Sulong, N H Ramli; Shariati, Mahdi; Jumaat, M Z

    2015-01-01

    Steel pallet rack (SPR) beam-to-column connections (BCCs) are largely responsible for preventing sway failure of frames in the down-aisle direction. The overall geometry of beam end connectors commercially used in SPR BCCs varies and does not allow a generalized analytical approach for all types of beam end connectors; however, identifying the effects of the configuration, profile and sizes of the connection components offers practical design engineers a suitable way to predict the generalized behavior of any SPR BCC. This paper describes the experimental behavior of SPR BCCs tested using a double cantilever test set-up. Eight sets of specimens were identified based on variation in column thickness, beam depth and the number of tabs in the beam end connector, in order to investigate the most influential factors affecting connection performance. Four tests were performed for each set to bring uniformity to the results, taking the total number of tests to thirty-two. The moment-rotation (M-θ) behavior, load-strain relationship, major failure modes and the influence of the selected parameters on connection performance were investigated. A comparative study to calculate the connection stiffness was carried out using the initial stiffness method, the slope to half-ultimate moment method and the equal area method. To identify the most appropriate method, the mean stiffness of all tested connections and the variance of the stiffness values were calculated for all three methods. The initial stiffness method was found to overestimate connection stiffness compared to the other two methods, while the equal area method provided the most consistent stiffness values and the lowest variance in the data set.

  9. A response to Yu et al. "A forward-backward fragment assembling algorithm for the identification of genomic amplification and deletion breakpoints using high-density single nucleotide polymorphism (SNP) array", BMC Bioinformatics 2007, 8: 145.

    PubMed

    Rueda, Oscar M; Diaz-Uriarte, Ramon

    2007-10-16

    Yu et al. (BMC Bioinformatics 2007, 8:145+) have recently compared the performance of several methods for the detection of genomic amplification and deletion breakpoints using data from high-density single nucleotide polymorphism arrays. One of the methods compared is our non-homogeneous Hidden Markov Model approach. Our approach uses Markov Chain Monte Carlo for inference, but Yu et al. ran the sampler for a severely insufficient number of iterations for a Markov Chain Monte Carlo-based method. Moreover, they did not use the appropriate reference level for the non-altered state. We reran the analysis in Yu et al. using appropriate settings for both the Markov Chain Monte Carlo iterations and the reference level. Additionally, to show how easy it is to obtain answers to additional specific questions, we added a new analysis targeted specifically at the detection of breakpoints. The reanalysis shows that the performance of our method is comparable to that of the other methods analyzed. In addition, we can provide probabilities of a given spot being a breakpoint, something unique among the methods examined. Markov Chain Monte Carlo methods require a sufficient number of iterations before they can be assumed to yield samples from the distribution of interest. Running our method with too small a number of iterations cannot be representative of its performance. Moreover, our analysis shows how our original approach can be easily adapted to answer specific additional questions (e.g., identify edges).
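    To illustrate why iteration counts matter, a standard check of whether an MCMC sampler has been run long enough is the Gelman-Rubin potential scale reduction factor computed over several independent chains. The sketch below is a generic diagnostic, not part of the authors' software.

```python
import numpy as np

def gelman_rubin_rhat(chains):
    """Potential scale reduction factor R-hat for m chains of length n.

    chains: array of shape (m, n) holding post-burn-in draws of one
    scalar parameter. Values close to 1 suggest the chains have mixed;
    values well above 1 indicate more iterations are needed.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_hat / W)
```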

  10. A Machine Learning-based Method for Question Type Classification in Biomedical Question Answering.

    PubMed

    Sarrouti, Mourad; Ouatik El Alaoui, Said

    2017-05-18

    Biomedical question type classification is one of the important components of an automatic biomedical question answering system. The performance of the latter depends directly on the performance of its biomedical question type classification system, which assigns a category to each question in order to determine the appropriate answer extraction algorithm. This study aims to automatically classify biomedical questions into one of four categories: (1) yes/no, (2) factoid, (3) list, and (4) summary. In this paper, we propose a biomedical question type classification method based on machine learning approaches to automatically assign a category to a biomedical question. First, we extract features from biomedical questions using the proposed handcrafted lexico-syntactic patterns. Then, we feed these features to machine-learning algorithms. Finally, the class label is predicted using the trained classifiers. Experimental evaluations performed on large standard annotated datasets of biomedical questions, provided by the BioASQ challenge, demonstrated that our method exhibits significantly improved performance when compared to four baseline systems. The proposed method achieves a roughly 10-point increase over the best baseline in terms of accuracy. Moreover, the obtained results show that using handcrafted lexico-syntactic patterns as features for a support vector machine (SVM) leads to the highest accuracy of 89.40%. The proposed method can automatically classify BioASQ questions into one of the four categories: yes/no, factoid, list, and summary. Furthermore, the results demonstrated that our method produced the best classification performance compared to four baseline systems.
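    A minimal sketch of the classification setup: pattern-derived features feeding a linear SVM. The toy questions, labels, and bag-of-n-grams features below are illustrative stand-ins for the paper's handcrafted lexico-syntactic patterns.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training data; the real system derives features from handcrafted
# lexico-syntactic patterns rather than raw bags of n-grams.
questions = [
    "Is TP53 a tumor suppressor gene?",            # yes/no
    "Which gene is mutated in cystic fibrosis?",   # factoid
    "List the drugs that inhibit EGFR.",           # list
    "What is known about the role of BRCA1?",      # summary
]
labels = ["yesno", "factoid", "list", "summary"]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2), lowercase=True),
                    LinearSVC())
clf.fit(questions, labels)
print(clf.predict(["Does aspirin reduce stroke risk?"]))  # expected: yesno
```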

  11. Classification of LC columns based on the QSRR method and selectivity toward moclobemide and its metabolites.

    PubMed

    Plenis, Alina; Olędzka, Ilona; Bączek, Tomasz

    2013-05-05

    This paper focuses on a comparative study of the column classification system based on quantitative structure-retention relationships (the QSRR method) and column performance in real biomedical analysis. The assay was carried out for the LC separation of moclobemide and its metabolites in human plasma, using a set of 24 stationary phases. The QSRR models established for the studied stationary phases were compared with the column test performance results using two chemometric techniques: principal component analysis (PCA) and hierarchical clustering analysis (HCA). The study confirmed that the stationary phase classes found closely related by the QSRR approach yielded comparable separation of moclobemide and its metabolites. Therefore, the QSRR method can support the selection of a suitable column for biomedical analysis, allowing similar or dissimilar columns to be chosen with relatively higher certainty. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Ultra-trace levels analysis of microcystins and nodularin in surface water by on-line solid-phase extraction with high-performance liquid chromatography tandem mass spectrometry.

    PubMed

    Balest, Lydia; Murgolo, Sapia; Sciancalepore, Lucia; Montemurro, Patrizia; Abis, Pier Paolo; Pastore, Carlo; Mascolo, Giuseppe

    2016-06-01

    An on-line solid-phase extraction method coupled with high-performance liquid chromatography tandem mass spectrometry (on-line SPE/HPLC/MS-MS) for the determination of five microcystins and nodularin in surface waters at submicrogram-per-liter concentrations has been optimized. Maximum recoveries were achieved by carefully optimizing the extraction sample volume, loading solvent, wash solvent, and pH of the sample. The developed method was also validated according to both UNI EN ISO IEC 17025 and UNICHIM guidelines. Specifically, ten analytical runs were performed at three different concentration levels using a reference mix solution containing the six analytes. The method was applied to monitoring the concentrations of microcystins and nodularin in real surface water during a 9-month sampling campaign in which the ELISA method was used as the standard official method. The results of the two methods showed good agreement at the highest microcystin (MC) concentrations. Graphical abstract: An on-line SPE/HPLC/MS-MS method for the determination of five microcystins and nodularin in surface waters at sub-μg L(-1) levels was optimized and compared with the ELISA assay method on real samples.

  13. Reconstruction method for running shape of rotor blade considering nonlinear stiffness and loads

    NASA Astrophysics Data System (ADS)

    Wang, Yongliang; Kang, Da; Zhong, Jingjun

    2017-10-01

    The aerodynamic and centrifugal loads acting on a rotating blade deform its configuration relative to its shape at rest. Accurate prediction of the running blade configuration plays a significant role in examining and analyzing turbomachinery performance. Considering nonlinear stiffness and loads, a reconstruction method is presented to address the transformation of a rotating blade from the cold to the hot state. When calculating blade deformations, the blade stiffness and load conditions are updated simultaneously as the blade shape varies. The reconstruction procedure is iterated until a converged hot blade shape is obtained. This method has been employed to determine the operating blade shapes of a test rotor blade and the Stage 37 rotor blade. The calculated results compare well with experiments, showing that the proposed method is effective for predicting the blade's operating shape. The studies also show that this method can improve the precision of finite element and aerodynamic performance analyses.
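    The cold-to-hot reconstruction can be summarized as a fixed-point loop in which the deformation solve is repeated on the updated shape until convergence. The sketch below assumes a hypothetical solve_deformation callback standing in for the FE analysis, with stiffness and loads re-evaluated on the current shape; the relaxation scheme is an illustrative choice, not the paper's exact procedure.

```python
import numpy as np

def cold_to_hot(cold_shape, solve_deformation, max_iter=50, tol=1e-6, relax=0.7):
    """Iterate hot = cold + u(hot) to a fixed point.

    solve_deformation(shape) is a hypothetical stand-in for an FE solve
    returning the deformation field of the current shape, with the
    nonlinear stiffness and aerodynamic/centrifugal loads updated."""
    cold = np.asarray(cold_shape, dtype=float)
    shape = cold.copy()
    for _ in range(max_iter):
        proposal = cold + solve_deformation(shape)
        if np.linalg.norm(proposal - shape) <= tol * max(np.linalg.norm(shape), 1.0):
            return proposal
        # under-relaxation improves stability of the fixed-point iteration
        shape += relax * (proposal - shape)
    return shape
```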

  14. A Review of Depth and Normal Fusion Algorithms

    PubMed Central

    Štolc, Svorad; Pock, Thomas

    2018-01-01

    Geometric surface information such as depth maps and surface normals can be acquired by various methods such as stereo light fields, shape from shading, and photometric stereo techniques. We compare several algorithms that combine depth and surface normal information in order to reconstruct a refined depth map. The reasons for performance differences are examined from the perspective of alternative formulations of surface normals for depth reconstruction. We review and analyze methods in a systematic way. Based on our findings, we introduce a new generalized fusion method, formulated as a least squares problem, which outperforms previous methods in the depth error domain by introducing a novel normal weighting that performs closer to the geodesic distance measure. Furthermore, a novel method is introduced based on Total Generalized Variation (TGV), which further outperforms previous approaches in terms of the geodesic normal distance error and maintains comparable quality in the depth error domain. PMID:29389903

  15. Impact of case-mix on comparisons of patient-reported experience in NHS acute hospital trusts in England.

    PubMed

    Raleigh, Veena; Sizmur, Steve; Tian, Yang; Thompson, James

    2015-04-01

    To examine the impact of patient-mix on National Health Service (NHS) acute hospital trust scores in two national NHS patient surveys. Secondary analysis of 2012 patient survey data for 57,915 adult inpatients at 142 NHS acute hospital trusts and 45,263 adult emergency department attendees at 146 NHS acute hospital trusts in England. Changes in trust scores for selected questions, ranks, inter-trust variance and score-based performance bands were examined using three methods: no adjustment for case-mix; the current standardization method with weighting for age, sex and, for inpatients only, admission method; and a regression model adjusting in addition for ethnicity, presence of a long-term condition, proxy response (inpatients only) and previous emergency attendances (emergency department survey only). For both surveys, all the variables examined were associated with patients' responses and affected inter-trust variance in scores, although the direction and strength of impact differed between variables. Inter-trust variance was generally greatest for the unadjusted scores and lowest for scores derived from the full regression model. Although trust scores derived from the three methods were highly correlated (Kendall's tau coefficients 0.70-0.94), up to 14% of trusts had discordant ranks when the standardization and regression methods were compared. Depending on the survey and question, up to 14 trusts changed performance bands when the regression model with its fuller case-mix adjustment was used rather than the current standardization method. More comprehensive case-mix adjustment of patient survey data than the current limited adjustment reduces performance variation between NHS acute hospital trusts and alters the comparative performance bands of some trusts. Given the use of these data for high-impact purposes such as performance assessment, regulation, commissioning, quality improvement and patient choice, a review of the long-standing method for analysing patient survey data would be timely, and could improve rigour and comparability across the NHS. Performance comparisons need to be perceived as fair and scientifically robust to maintain confidence in publicly reported data, and to support their use by both the public and the NHS. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  16. A comparison of DNA extraction procedures for the detection of Mycobacterium ulcerans, the causative agent of Buruli ulcer, in clinical and environmental specimens.

    PubMed

    Durnez, Lies; Stragier, Pieter; Roebben, Karen; Ablordey, Anthony; Leirs, Herwig; Portaels, Françoise

    2009-02-01

    Mycobacterium ulcerans is the causative agent of Buruli ulcer, the third most common mycobacterial disease in humans after tuberculosis and leprosy. Although the disease is associated with aquatic ecosystems, cultivation of the bacillus from the environment is difficult to achieve. Therefore, at the moment, research is based on the detection by PCR of the insertion sequence IS2404 present in M. ulcerans and some closely related mycobacteria. In the present study, we compared four DNA extraction methods for detection of M. ulcerans DNA, namely the one tube cell lysis and DNA extraction procedure (OT), the FastPrep procedure (FP), the modified Boom procedure (MB), and the Maxwell 16 Procedure (M16). The methods were performed on serial dilutions of M. ulcerans, followed by PCR analysis with different PCR targets in M. ulcerans to determine the detection limit (DL) of each method. The purity of the extracted DNA and the time and effort needed were compared as well. All methods were performed on environmental specimens and the two best methods (MB and M16) were tested on clinical specimens for detection of M. ulcerans DNA. When comparing the DLs of the DNA extraction methods, the MB and M16 had a significantly lower DL than the OT and FP. For the different PCR targets, IS2404 showed a significantly lower DL than mlsA, MIRU1, MIRU5 and VNTR6. The FP and M16 were considerably faster than the MB and OT, while the purity of the DNA extracted with the MB was significantly higher than the DNA extracted with the other methods. The MB performed best on the environmental and clinical specimens. This comparative study shows that the modified Boom procedure, although lengthy, provides a better method of DNA extraction than the other methods tested for detection and identification of M. ulcerans in both clinical and environmental specimens.

  17. A Comparison of Mixed-Method Cooling Interventions on Preloaded Running Performance in the Heat.

    PubMed

    Stevens, Christopher J; Bennett, Kyle J M; Sculley, Dean V; Callister, Robin; Taylor, Lee; Dascombe, Ben J

    2017-03-01

    Stevens, CJ, Bennett, KJM, Sculley, DV, Callister, R, Taylor, L, and Dascombe, BJ. A comparison of mixed-method cooling interventions on preloaded running performance in the heat. J Strength Cond Res 31(3): 620-629, 2017: The purpose of this investigation was to assess the effect of combining practical methods to cool the body on endurance running performance and physiology in the heat. Eleven trained male runners completed 4 randomized, preloaded running time trials (20 minutes at 70% V̇O2max and a 3 km time trial) on a nonmotorized treadmill in the heat (33 °C). Trials consisted of precooling by combined cold-water immersion and ice slurry ingestion (PRE), midcooling by combined facial water spray and menthol mouth rinse (MID), a combination of all methods (ALL), and control (CON). Performance time was significantly faster in MID (13.7 ± 1.2 minutes; p < 0.01) and ALL (13.7 ± 1.4 minutes; p = 0.04) but not PRE (13.9 ± 1.4 minutes; p = 0.24) when compared with CON (14.2 ± 1.2 minutes). Precooling significantly reduced rectal temperature (initially by 0.5 ± 0.2 °C), mean skin temperature, heart rate and sweat rate, and increased iEMG activity, whereas midcooling significantly increased expired air volume and respiratory exchange ratio compared with control. Significant decreases in forehead temperature, thermal sensation, and postexercise blood prolactin concentration were observed in all conditions compared with control. Performance was improved with midcooling, whereas precooling had little or no influence. Midcooling may have improved performance through an attenuated inhibitory psychophysiological and endocrine response to the heat.

  18. Comparative Validation of Five Quantitative Rapid Test Kits for the Analysis of Salt Iodine Content: Laboratory Performance, User- and Field-Friendliness

    PubMed Central

    Rohner, Fabian; Kangambèga, Marcelline O.; Khan, Noor; Kargougou, Robert; Garnier, Denis; Sanou, Ibrahima; Ouaro, Bertine D.; Petry, Nicolai; Wirth, James P.; Jooste, Pieter

    2015-01-01

    Background Iodine deficiency has important health and development consequences and the introduction of iodized salt as national programs has been a great public health success in the past decades. To render national salt iodization programs sustainable and ensure adequate iodization levels, simple methods to quantitatively assess whether salt is adequately iodized are required. Several methods claim to be simple and reliable, and are available on the market or are in development. Objective This work has validated the currently available quantitative rapid test kits (quantRTK) in a comparative manner for both their laboratory performance and ease of use in field settings. Methods Laboratory performance parameters (linearity, detection and quantification limit, intra- and inter-assay imprecision) were assessed for 5 quantRTK. We assessed inter-operator imprecision using salt of different quality along with the comparison of 59 salt samples from across the globe; measurements were made both in a laboratory and a field setting by technicians and non-technicians. Results from the quantRTK were compared against iodometric titration for validity. An 'ease-of-use' rating system was developed to identify the most suitable quantRTK for a given task. Results Most of the devices showed acceptable laboratory performance, but for some of the devices, use by non-technicians revealed poorer performance when working in a routine manner. Of the quantRTK tested, the iCheck® and I-Reader® showed the most consistent performance and ease of use, and a newly developed paper-based method (saltPAD) holds promise if further developed. Conclusions User- and field-friendly devices are now available and the most appropriate quantRTK can be selected depending on the number of samples and the budget available. PMID:26401655

  19. Aggregate Interview Method of ranking orthopedic applicants predicts future performance.

    PubMed

    Geissler, Jacqueline; VanHeest, Ann; Tatman, Penny; Gioe, Terence

    2013-07-01

    This article evaluates and describes a process for ranking orthopedic applicants using what the authors term the Aggregate Interview Method. The authors hypothesized that applicants ranked higher by this method at their institution would perform better than those ranked lower on multiple measures of resident performance. A retrospective review of 115 orthopedic residents was performed at the authors' institution. Residents were grouped into 3 categories by matching rank numbers: 1-5, 6-14, and 15 or higher. Each rank group was compared on resident performance as measured by faculty evaluations, the Orthopaedic In-Training Examination (OITE), and American Board of Orthopaedic Surgery (ABOS) test results. Residents ranked 1-5 scored significantly better on patient care, behavior, and overall competence by faculty evaluation (P<.05). Residents ranked 1-5 scored higher on the OITE compared with those ranked 6-14 during postgraduate years 2 and 3 (P≤.05). Graduates who had been ranked 1-5 had a 100% pass rate on the ABOS part 1 examination on the first attempt. The most favorably ranked residents performed at or above the level of other residents in the program; they did not score inferiorly on any measure. These results support the authors' method of ranking residents. The rigorous Aggregate Interview Method for ranking applicants consistently identified orthopedic resident candidates who scored highly on the Accreditation Council for Graduate Medical Education resident core competencies as measured by faculty evaluations, performed above the national average on the OITE, and passed the ABOS part 1 examination at rates exceeding the national average. Copyright 2013, SLACK Incorporated.

  20. Performance and Specificity of the Covalently Linked Immunomagnetic Separation-ATP Method for Rapid Detection and Enumeration of Enterococci in Coastal Environments

    PubMed Central

    Zimmer-Faust, Amity G.; Thulsiraj, Vanessa; Ferguson, Donna

    2014-01-01

    The performance and specificity of the covalently linked immunomagnetic separation-ATP (Cov-IMS/ATP) method for the detection and enumeration of enterococci was evaluated in recreational waters. Cov-IMS/ATP performance was compared with standard methods: defined substrate technology (Enterolert; IDEXX Laboratories), membrane filtration (EPA Method 1600), and an Enterococcus-specific quantitative PCR (qPCR) assay (EPA Method A). We extend previous studies by (i) analyzing the stability of the relationship between the Cov-IMS/ATP method and culture-based methods at different field sites, (ii) evaluating specificity of the assay for seven ATCC Enterococcus species, (iii) identifying cross-reacting organisms binding the antibody-bead complexes with 16S rRNA gene sequencing and evaluating specificity of the assay to five nonenterococcus species, and (iv) conducting preliminary tests of preabsorption as a means of improving the assay. Cov-IMS/ATP was found to perform consistently and with strong agreement rates (based on exceedance/compliance with regulatory limits) of between 83% and 100% compared to the culture-based Enterolert method at a variety of sites with complex inputs. The Cov-IMS/ATP method is specific to five of seven different Enterococcus spp. tested. However, there is potential for nontarget bacteria to bind the antibody, which may be reduced by purification of the IgG serum with preabsorption at problematic sites. The findings of this study help to validate the Cov-IMS/ATP method, suggesting a predictable relationship between the Cov-IMS/ATP method and traditional culture-based methods, which will allow for more widespread application of this rapid and field-portable method for coastal water quality assessment. PMID:24561583

  1. Convergence and Applications of a Gossip-Based Gauss-Newton Algorithm

    NASA Astrophysics Data System (ADS)

    Li, Xiao; Scaglione, Anna

    2013-11-01

    The Gauss-Newton algorithm is a popular and efficient centralized method for solving non-linear least squares problems. In this paper, we propose a multi-agent distributed version of this algorithm, named Gossip-based Gauss-Newton (GGN) algorithm, which can be applied in general problems with non-convex objectives. Furthermore, we analyze and present sufficient conditions for its convergence and show numerically that the GGN algorithm achieves performance comparable to the centralized algorithm, with graceful degradation in case of network failures. More importantly, the GGN algorithm provides significant performance gains compared to other distributed first order methods.
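    For reference, the centralized iteration that the gossip algorithm distributes is the classical Gauss-Newton step, which solves the normal equations J^T J Δx = −J^T r at each iterate. Below is a minimal NumPy sketch of that centralized baseline; it is illustrative only, not the paper's GGN implementation.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, max_iter=50, tol=1e-10):
    """Centralized Gauss-Newton for min_x 0.5 * ||r(x)||^2.

    residual(x) returns r (m,), jacobian(x) returns J (m, n).
    The gossip-based variant distributes these linear-algebra steps
    across networked agents."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        # least-squares solve of J @ step = -r, i.e. J^T J step = -J^T r
        step = np.linalg.lstsq(J, -r, rcond=None)[0]
        x = x + step
        if np.linalg.norm(step) < tol:
            break
    return x
```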

  2. Text Summarization Model based on Maximum Coverage Problem and its Variant

    NASA Astrophysics Data System (ADS)

    Takamura, Hiroya; Okumura, Manabu

    We discuss text summarization in terms of the maximum coverage problem and its variant. To solve the optimization problem, we applied several decoding algorithms, including some never before used in this summarization formulation, such as a greedy algorithm with a performance guarantee, a randomized algorithm, and a branch-and-bound method. We conducted comparative experiments. On the basis of the experimental results, we also augmented the summarization model so that it takes into account relevance to the document cluster. Through experiments, we showed that the augmented model is at least comparable to the best-performing method of DUC'04.
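    A minimal sketch of greedy decoding for budgeted maximum coverage: repeatedly pick the sentence covering the most not-yet-covered concepts per unit length. The benefit-per-cost rule shown is one common variant (the version with a formal guarantee also compares against the best single sentence), and the data layout is hypothetical.

```python
def greedy_summary(sentences, concepts_of, budget):
    """Greedy maximum-coverage summarization.

    sentences:   list of sentence strings
    concepts_of: dict mapping each sentence to the set of concept ids
                 it covers (hypothetical representation)
    budget:      maximum total summary length in words
    """
    covered, summary, length = set(), [], 0
    remaining = set(sentences)
    while remaining:
        # benefit-per-cost rule: new concepts covered per word
        best = max(remaining,
                   key=lambda s: len(concepts_of[s] - covered) / max(len(s.split()), 1))
        remaining.discard(best)
        cost = len(best.split())
        if length + cost <= budget and concepts_of[best] - covered:
            summary.append(best)
            covered |= concepts_of[best]
            length += cost
    return summary
```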

  3. Morbidity and chronic pain following different techniques of caesarean section: A comparative study.

    PubMed

    Belci, D; Di Renzo, G C; Stark, M; Đurić, J; Zoričić, D; Belci, M; Peteh, L L

    2015-01-01

    Research examining long-term outcomes after childbirth performed with different techniques of caesarean section has been limited and does not provide information on morbidity and neuropathic pain. This study compares two groups of patients, those operated on with the 'Traditional' method using a Pfannenstiel incision and those operated on with the 'Misgav Ladach' method, ≥ 5 years after the operation. We found better long-term postoperative results in the patients treated with the Misgav Ladach method compared with the Traditional method. The results were statistically better regarding the intensity of pain, the presence of neuropathic and chronic pain, and the level of satisfaction with the cosmetic appearance of the scar.

  4. Optimal projection method determination by Logdet Divergence and perturbed von-Neumann Divergence.

    PubMed

    Jiang, Hao; Ching, Wai-Ki; Qiu, Yushan; Cheng, Xiao-Qing

    2017-12-14

    Positive semi-definiteness is a critical property of kernel methods for Support Vector Machines (SVMs), by which efficient solutions can be guaranteed through convex quadratic programming. However, many similarity functions used in applications do not produce positive semi-definite kernels. We propose a projection method that constructs a projection matrix on indefinite kernels. As a generalization of the spectrum methods (the denoising method and the flipping method), the projection method shows better or comparable performance compared to the corresponding indefinite kernel methods on a number of real-world data sets. Under Bregman matrix divergence theory, a suggested optimal λ for the projection method can be found using unconstrained optimization in kernel learning. In this paper we focus on determining the optimal λ precisely within this unconstrained optimization framework. We developed a perturbed von-Neumann divergence to measure kernel relationships, and compared optimal λ determination using the Logdet divergence and the perturbed von-Neumann divergence, aiming to find a better λ for the projection method. Results on a number of real-world data sets show that the projection method with the optimal λ chosen by the Logdet divergence demonstrates near-optimal performance, and that the perturbed von-Neumann divergence can help determine a relatively better optimal projection method. The projection method is easy to use for dealing with indefinite kernels, and the parameter embedded in the method can be determined through unconstrained optimization under Bregman matrix divergence theory. This may provide a new way forward in kernel SVMs for varied objectives.
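    For context, the spectrum methods that the projection method generalizes take only a few lines: eigendecompose the (symmetrized) kernel matrix and repair its negative eigenvalues. A minimal sketch, not the authors' code:

```python
import numpy as np

def fix_indefinite_kernel(K, method="clip"):
    """Make a symmetric but indefinite kernel matrix PSD.

    'clip' (denoising) zeroes negative eigenvalues; 'flip' takes their
    absolute values. These are the spectrum methods that the projection
    approach generalizes."""
    vals, vecs = np.linalg.eigh((K + K.T) / 2)  # enforce symmetry first
    if method == "clip":
        vals = np.maximum(vals, 0.0)
    elif method == "flip":
        vals = np.abs(vals)
    else:
        raise ValueError(f"unknown method: {method}")
    return (vecs * vals) @ vecs.T  # reassemble the repaired kernel
```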

  5. U.S. Geological Survey experience with the residual absolutes method

    NASA Astrophysics Data System (ADS)

    Worthington, E. William; Matzka, Jürgen

    2017-10-01

    The U.S. Geological Survey (USGS) Geomagnetism Program has developed and tested the residual method of absolutes, with the assistance of the Danish Technical University's (DTU) Geomagnetism Program. Three years of testing were performed at College Magnetic Observatory (CMO), Fairbanks, Alaska, to compare the residual method with the null method. Results show that the two methods compare very well with each other, and both sets of baseline data were used to process the 2015 definitive data. The residual method will be implemented at the other USGS high-latitude geomagnetic observatories in the summers of 2017 and 2018.

  6. Comparative Analysis of a Principal Component Analysis-Based and an Artificial Neural Network-Based Method for Baseline Removal.

    PubMed

    Carvajal, Roberto C; Arias, Luis E; Garces, Hugo O; Sbarbaro, Daniel G

    2016-04-01

    This work presents a non-parametric method based on principal component analysis (PCA) and a parametric one based on artificial neural networks (ANN) to remove continuous baseline features from spectra. The non-parametric method estimates the baseline from a set of sampled basis vectors obtained by PCA applied to a previously composed learning matrix of continuous spectra. The parametric method, by contrast, uses an ANN to filter out the baseline; previous studies have demonstrated that this is one of the most effective approaches for baseline removal. The evaluation of both methods was carried out using a synthetic database designed for benchmarking baseline removal algorithms, containing 100 synthetic composed spectra at different signal-to-baseline ratios (SBR), signal-to-noise ratios (SNR), and baseline slopes. In addition, to demonstrate the utility of the proposed methods and compare them in a real application, a spectral data set measured from a flame radiation process was used. Several performance metrics such as correlation coefficient, chi-square value, and goodness-of-fit coefficient were calculated to quantify and compare both algorithms. Results demonstrate that the PCA-based method outperforms the one based on ANN both in terms of performance and simplicity. © The Author(s) 2016.
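    A minimal sketch of the non-parametric idea under stated assumptions: learn a low-dimensional baseline basis by SVD (equivalently PCA) on a matrix of baseline-only training spectra, then estimate each measured spectrum's baseline as its projection onto that basis and subtract it. Array names are hypothetical.

```python
import numpy as np

def pca_baseline_removal(spectrum, baseline_library, n_components=5):
    """Subtract a PCA-estimated continuous baseline from a spectrum.

    spectrum:         measured spectrum, shape (n_wavelengths,)
    baseline_library: baseline-only training spectra, shape (m, n_wavelengths)
    """
    mean = baseline_library.mean(axis=0)
    X = baseline_library - mean
    # rows of Vt are principal directions in wavelength space
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:n_components]                      # (k, n_wavelengths)
    coeffs = basis @ (spectrum - mean)             # project onto the basis
    baseline = mean + basis.T @ coeffs             # reconstructed baseline
    return spectrum - baseline, baseline
```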

  7. A Thermal Performance Analysis and Comparison of Fiber Coils with the D-CYL Winding and QAD Winding Methods.

    PubMed

    Li, Xuyou; Ling, Weiwei; He, Kunpeng; Xu, Zhenlong; Du, Shitong

    2016-06-16

    The thermal performance under variable temperature conditions of fiber coils wound with the double-cylinder (D-CYL) and quadrupolar (QAD) methods is comparatively analyzed. Simulation by the finite element method (FEM) is performed to calculate the temperature distribution and the thermally induced phase shift errors in the fiber coils. Simulation results reveal that the D-CYL fiber coil by itself is fragile when it experiences an axially asymmetrical temperature gradient. However, this axial fragility can be improved when the D-CYL coil is paired with a heat-off spool. Further simulations show that once the D-CYL coil is provided with an axially symmetrical temperature environment, the thermal performance of fiber coils wound with the D-CYL method is better than that with the QAD method under the same variable temperature conditions. This valuable finding is verified by two experiments. The D-CYL winding method is thus promising for overcoming the temperature fragility of interferometric fiber optic gyroscopes (IFOGs).

  8. A Thermal Performance Analysis and Comparison of Fiber Coils with the D-CYL Winding and QAD Winding Methods

    PubMed Central

    Li, Xuyou; Ling, Weiwei; He, Kunpeng; Xu, Zhenlong; Du, Shitong

    2016-01-01

    The thermal performance under variable temperature conditions of fiber coils wound with the double-cylinder (D-CYL) and quadrupolar (QAD) methods is comparatively analyzed. Simulation by the finite element method (FEM) is performed to calculate the temperature distribution and the thermally induced phase shift errors in the fiber coils. Simulation results reveal that the D-CYL fiber coil by itself is fragile when it experiences an axially asymmetrical temperature gradient. However, this axial fragility can be improved when the D-CYL coil is paired with a heat-off spool. Further simulations show that once the D-CYL coil is provided with an axially symmetrical temperature environment, the thermal performance of fiber coils wound with the D-CYL method is better than that with the QAD method under the same variable temperature conditions. This valuable finding is verified by two experiments. The D-CYL winding method is thus promising for overcoming the temperature fragility of interferometric fiber optic gyroscopes (IFOGs). PMID:27322271

  9. CSP-TSM: Optimizing the performance of Riemannian tangent space mapping using common spatial pattern for MI-BCI.

    PubMed

    Kumar, Shiu; Mamun, Kabir; Sharma, Alok

    2017-12-01

    Classification of electroencephalography (EEG) signals for motor imagery-based brain computer interfaces (MI-BCI) is an exigent task, and common spatial pattern (CSP) has been extensively explored for this purpose. In this work, we focus on developing a new framework for classification of EEG signals for MI-BCI. We propose a single-band CSP framework for MI-BCI that utilizes the concept of tangent space mapping (TSM) in the manifold of covariance matrices; the proposed method is named CSP-TSM. Spatial filtering is performed on the bandpass-filtered MI EEG signal. The Riemannian tangent space is utilized for extracting features from the spatially filtered signal. The TSM features are then fused with the CSP variance-based features, feature selection is performed using Lasso, linear discriminant analysis (LDA) is applied to the selected features, and finally classification is done using a support vector machine (SVM) classifier. The proposed framework gives improved performance for MI EEG signal classification in comparison with several competing methods. Experiments conducted show that the proposed framework reduces the overall classification error rate for MI-BCI by 3.16%, 5.10% and 1.70% (for BCI Competition III dataset IVa, BCI Competition IV Dataset I and BCI Competition IV Dataset IIb, respectively) compared to the conventional CSP method under the same experimental settings. The proposed CSP-TSM method produces promising results when compared with several competing methods in this paper. In addition, its computational complexity is lower than that of the TSM method. Our proposed CSP-TSM framework can potentially be used for developing improved MI-BCI systems. Copyright © 2017 Elsevier Ltd. All rights reserved.
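    The core of tangent space mapping is whitening each trial covariance by a reference covariance and taking the matrix logarithm, S_i = log(C_ref^(-1/2) C_i C_ref^(-1/2)), then vectorizing. A minimal SciPy sketch (omitting the usual √2 weighting of off-diagonal entries, and not the paper's implementation):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power, logm

def tangent_space_features(covs, C_ref):
    """Map SPD trial covariance matrices onto the tangent space at C_ref.

    covs:  iterable of (c, c) SPD covariance matrices, one per trial
    C_ref: (c, c) SPD reference covariance (e.g., the mean covariance)
    Returns one feature vector per trial (upper-triangular entries of S_i),
    which CSP-TSM then fuses with CSP variance features."""
    W = fractional_matrix_power(C_ref, -0.5)       # whitening transform
    iu = np.triu_indices(C_ref.shape[0])
    feats = []
    for C in covs:
        S = logm(W @ C @ W)                         # log of whitened covariance
        feats.append(np.real(S[iu]))                # drop numerical imaginaries
    return np.array(feats)
```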

  10. Iterative normalization method for improved prostate cancer localization with multispectral magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Samil Yetik, Imam

    2012-04-01

    Use of multispectral magnetic resonance imaging has received great interest for prostate cancer localization in research and clinical studies. Manual extraction of prostate tumors from multispectral magnetic resonance imaging is inefficient and subjective, while automated segmentation is objective and reproducible. For supervised, automated segmentation approaches, learning is essential to obtain information from the training dataset. However, in this procedure, all patients are assumed to have similar properties for the tumor and normal tissues, and segmentation performance suffers because variations across patients are ignored. To overcome this difficulty, we propose a new iterative normalization method based on relative intensity values of tumor and normal tissues to normalize multispectral magnetic resonance images and improve segmentation performance. The idea of relative intensity mimics the manual segmentation performed by human readers, who compare the contrast between regions without knowing the actual intensity values. We compare the segmentation performance of the proposed method with that of z-score normalization followed by support vector machine, local active contours, and fuzzy Markov random field. Our experimental results demonstrate that our method outperforms the three other state-of-the-art algorithms, with a specificity of 0.73, sensitivity of 0.69, and accuracy of 0.79, significantly better than the alternative methods.

  11. Comparative Evaluation of Four Real-Time PCR Methods for the Quantitative Detection of Epstein-Barr Virus from Whole Blood Specimens.

    PubMed

    Buelow, Daelynn; Sun, Yilun; Tang, Li; Gu, Zhengming; Pounds, Stanley; Hayden, Randall

    2016-07-01

    Monitoring of Epstein-Barr virus (EBV) load in immunocompromised patients has become integral to their care. An increasing number of reagents are available for quantitative detection of EBV; however, there are few published comparative data. Four real-time PCR systems (one using laboratory-developed reagents and three using analyte-specific reagents) were compared with one another for detection of EBV from whole blood. Whole blood specimens seeded with EBV were used to determine quantitative linearity, analytical measurement range, lower limit of detection, and coefficient of variation (CV) for each assay. Retrospective testing of 198 clinical samples was performed in parallel with all methods; results were compared to determine relative quantitative and qualitative performance. All assays showed similar performance. No significant difference was found in limit of detection (3.12-3.49 log10 copies/mL; P = 0.37). A strong qualitative correlation was seen with all assays that used clinical samples (positive detection rates of 89.5%-95.8%). Quantitative correlation of clinical samples across assays was also seen in pairwise regression analysis, with R(2) ranging from 0.83 to 0.95. Normalizing clinical sample results to IU/mL did not alter the quantitative correlation between assays. Quantitative EBV detection by real-time PCR can be performed over a wide linear dynamic range, using three different commercially available reagents and laboratory-developed methods. EBV was detected with comparable sensitivity and quantitative correlation for all assays. Copyright © 2016 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.

  12. To Flip or Not to Flip? An Exploratory Study Comparing Student Performance in Calculus I

    ERIC Educational Resources Information Center

    Schroeder, Larissa B.; McGivney-Burelle, Jean; Xue, Fei

    2015-01-01

    The purpose of this exploratory, mixed-methods study was to compare student performance in flipped and non-flipped sections of Calculus I. The study also examined students' perceptions of the flipping pedagogy. Students in the flipped courses reported spending, on average, an additional 1-2 hours per week outside of class on course content.…

  13. A comparison of online versus face-to-face teaching delivery in statistics instruction for undergraduate health science students.

    PubMed

    Lu, Fletcher; Lemonde, Manon

    2013-12-01

    The objective of this study was to assess if online teaching delivery produces comparable student test performance as the traditional face-to-face approach irrespective of academic aptitude. This study involves a quasi-experimental comparison of student performance in an undergraduate health science statistics course partitioned in two ways. The first partition involves one group of students taught with a traditional face-to-face classroom approach and the other through a completely online instructional approach. The second partition of the subjects categorized the academic aptitude of the students into groups of higher and lower academically performing based on their assignment grades during the course. Controls that were placed on the study to reduce the possibility of confounding variables were: the same instructor taught both groups covering the same subject information, using the same assessment methods and delivered over the same period of time. The results of this study indicate that online teaching delivery is as effective as a traditional face-to-face approach in terms of producing comparable student test performance but only if the student is academically higher performing. For academically lower performing students, the online delivery method produced significantly poorer student test results compared to those lower performing students taught in a traditional face-to-face environment.

  14. Multipolar Ewald methods, 1: theory, accuracy, and performance.

    PubMed

    Giese, Timothy J; Panteva, Maria T; Chen, Haoyuan; York, Darrin M

    2015-02-10

    The Ewald, Particle Mesh Ewald (PME), and Fast Fourier–Poisson (FFP) methods are developed for systems composed of spherical multipole moment expansions. A unified set of equations is derived that takes advantage of a spherical tensor gradient operator formalism in both real space and reciprocal space to allow extension to arbitrary multipole order. The implementation of these methods into a novel linear-scaling modified “divide-and-conquer” (mDC) quantum mechanical force field is discussed. The evaluation times and relative force errors are compared between the three methods, as a function of multipole expansion order. Timings and errors are also compared within the context of the quantum mechanical force field, which encounters primary errors related to the quality of reproducing electrostatic forces for a given density matrix and secondary errors resulting from the propagation of the approximate electrostatics into the self-consistent field procedure, which yields a converged, variational, but nonetheless approximate density matrix. Condensed-phase simulations of an mDC water model are performed with the multipolar PME method and compared to an electrostatic cutoff method, which is shown to artificially increase the density of water and heat of vaporization relative to full electrostatic treatment.

  15. Distributed processing of a GPS receiver network for a regional ionosphere map

    NASA Astrophysics Data System (ADS)

    Choi, Kwang Ho; Hoo Lim, Joon; Yoo, Won Jae; Lee, Hyung Keun

    2018-01-01

    This paper proposes a distributed processing method applicable to GPS receivers in a network to generate a regional ionosphere map accurately and reliably. For accuracy, the proposed method is operated by multiple local Kalman filters and Kriging estimators. Each local Kalman filter is applied to a dual-frequency receiver to estimate the receiver's differential code bias and vertical ionospheric delays (VIDs) at different ionospheric pierce points. The Kriging estimator selects and combines several VID estimates provided by the local Kalman filters to generate the VID estimate at each ionospheric grid point. For reliability, the proposed method uses receiver fault detectors and satellite fault detectors. Each receiver fault detector compares the VID estimates of the same local area provided by different local Kalman filters. Each satellite fault detector compares the VID estimate of each local area with that projected from the other local areas. Compared with the traditional centralized processing method, the proposed method is advantageous in that it considerably reduces the computational burden of each single Kalman filter and enables flexible fault detection, isolation, and reconfiguration capability. To evaluate the performance of the proposed method, several experiments with field-collected measurements were performed.

  16. Impact of hydrogen SAE J2601 fueling methods on fueling time of light-duty fuel cell electric vehicles

    DOE PAGES

    Reddi, Krishna; Elgowainy, Amgad; Rustagi, Neha; ...

    2017-05-16

    Hydrogen fuel cell electric vehicles (HFCEVs) are zero-emission vehicles (ZEVs) that can provide drivers a similar experience to conventional internal combustion engine vehicles (ICEVs), in terms of fueling time and performance (i.e. power and driving range). The Society of Automotive Engineers (SAE) developed fueling protocol J2601 for light-duty HFCEVs to ensure safe vehicle fills while maximizing fueling performance. This study employs a physical model that simulates and compares the fueling performance of two fueling methods, known as the “lookup table” method and the “MC formula” method, within the SAE J2601 protocol. Both fueling methods provide fast fueling of HFCEVs within minutes, but the MC formula method takes advantage of active measurement of precooling temperature to dynamically control the fueling process, and thereby provides faster vehicle fills. Here, the MC formula method greatly reduces fueling time compared to the lookup table method at higher ambient temperatures, as well as when the precooling temperature falls on the colder side of the expected temperature window for all station types. Although the SAE J2601 lookup table method is the currently implemented standard for refueling hydrogen fuel cell vehicles, the MC formula method provides significant fueling time advantages in certain conditions; these warrant its implementation in future hydrogen refueling stations for better customer satisfaction with the fueling experience of HFCEVs.

  17. Impact of hydrogen SAE J2601 fueling methods on fueling time of light-duty fuel cell electric vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reddi, Krishna; Elgowainy, Amgad; Rustagi, Neha

    Hydrogen fuel cell electric vehicles (HFCEVs) are zero-emission vehicles (ZEVs) that can provide drivers a similar experience to conventional internal combustion engine vehicles (ICEVs), in terms of fueling time and performance (i.e. power and driving range). The Society of Automotive Engineers (SAE) developed fueling protocol J2601 for light-duty HFCEVs to ensure safe vehicle fills while maximizing fueling performance. This study employs a physical model that simulates and compares the fueling performance of two fueling methods, known as the “lookup table” method and the “MC formula” method, within the SAE J2601 protocol. Both fueling methods provide fast fueling of HFCEVs within minutes, but the MC formula method takes advantage of active measurement of precooling temperature to dynamically control the fueling process, and thereby provides faster vehicle fills. Here, the MC formula method greatly reduces fueling time compared to the lookup table method at higher ambient temperatures, as well as when the precooling temperature falls on the colder side of the expected temperature window for all station types. Although the SAE J2601 lookup table method is the currently implemented standard for refueling hydrogen fuel cell vehicles, the MC formula method provides significant fueling time advantages in certain conditions; these warrant its implementation in future hydrogen refueling stations for better customer satisfaction with the fueling experience of HFCEVs.

  18. METHOD FOR EVALUATING MOLD GROWTH ON CEILING TILE

    EPA Science Inventory

    A method to extract mold spores from porous ceiling tiles was developed using a masticator blender. Ceiling tiles were inoculated and analyzed using four species of mold. Statistical analysis comparing results obtained by masticator extraction and the swab method was performed. T...

  19. An adaptive approach to the dynamic allocation of buffer storage. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Crooke, S. C.

    1970-01-01

    Several strategies for the dynamic allocation of buffer storage are simulated and compared. The basic algorithms investigated, using actual statistics observed in the Univac 1108 EXEC 8 system, include the buddy method and the first-fit method. Modifications are made to the basic methods in an effort to improve and to measure allocation performance. A simulation model of an adaptive strategy is developed which permits interchanging the two methods (the buddy and first-fit methods, with some modifications). Using an adaptive strategy, each method may be employed in the statistical environment in which its performance is superior to the other method.
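    For readers unfamiliar with the buddy method, below is a compact sketch of its split-and-coalesce logic over a pool of 2**max_order units. This is an illustration of the general technique, not the EXEC 8 implementation.

```python
class BuddyAllocator:
    """Minimal power-of-two buddy allocator: blocks are split on demand
    and coalesced with their buddies on release."""

    def __init__(self, max_order):
        self.max_order = max_order
        self.free = {k: set() for k in range(max_order + 1)}
        self.free[max_order].add(0)          # one free block spanning the pool

    def alloc(self, order):
        """Return the address of a free block of 2**order units, or None."""
        k = order
        while k <= self.max_order and not self.free[k]:
            k += 1                           # smallest free block big enough
        if k > self.max_order:
            return None                      # out of memory
        addr = self.free[k].pop()
        while k > order:                     # split, keeping upper buddies free
            k -= 1
            self.free[k].add(addr + (1 << k))
        return addr

    def release(self, addr, order):
        """Free a block, coalescing with its buddy whenever possible."""
        while order < self.max_order:
            buddy = addr ^ (1 << order)      # buddy address differs in one bit
            if buddy not in self.free[order]:
                break
            self.free[order].remove(buddy)
            addr = min(addr, buddy)          # merged block starts at lower address
            order += 1
        self.free[order].add(addr)
```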

  20. Can Online Course-Based Assessment Methods Be Fair and Equitable? Relationships between Students' Preferences and Performance within Online and Offline Assessments

    ERIC Educational Resources Information Center

    Hewson, C.

    2012-01-01

    To address concerns raised regarding the use of online course-based summative assessment methods, a quasi-experimental design was implemented in which students who completed a summative assessment either online or offline were compared on performance scores when using their self-reported "preferred" or "non-preferred" modes.…

  1. Relative Effects of Programmed Instruction and Demonstration Methods on Students' Academic Performance in Science

    ERIC Educational Resources Information Center

    Uhumuavbi, P. O.; Mamudu, J. A.

    2009-01-01

    This study compared the effects of Programmed Instruction and Demonstration methods on students' academic performance in science in Esan West Local Government Area of Edo State. A sampling technique (balloting) was used to select two schools in Esan West Local Government Area for the study. Two intact classes of fifty (50) students each from the…

  2. The Effects of Teaching and Assessment Methods on Academic Performance: A Study of an Operations Management Course

    ERIC Educational Resources Information Center

    Sacristán-Díaz, Macarena; Garrido-Vega, Pedro; Alfalla-Luque, Rafaela; González-Zamora, María-del-Mar

    2016-01-01

    Whether the use of more active teaching-learning methods has a positive impact on academic performance remains unanswered. This article seeks to contribute to the issue by conducting a study of an Operations Management course with almost 1000 students per year over three consecutive academic years. The study compares three scenarios with differing…

  3. Effects of Learning Style and Training Method on Computer Attitude and Performance in World Wide Web Page Design Training.

    ERIC Educational Resources Information Center

    Chou, Huey-Wen; Wang, Yu-Fang

    1999-01-01

    Compares the effects of two training methods on computer attitude and performance in a World Wide Web page design program in a field experiment with high school students in Taiwan. Discusses individual differences, Kolb's Experiential Learning Theory and Learning Style Inventory, Computer Attitude Scale, and results of statistical analyses.…

  4. An Investigation into Native and Non-Native Teachers' Judgments of Oral English Performance: A Mixed Methods Approach

    ERIC Educational Resources Information Center

    Kim, Youn-Hee

    2009-01-01

    This study used a mixed methods research approach to examine how native English-speaking (NS) and non-native English-speaking (NNS) teachers assess students' oral English performance. The evaluation behaviors of two groups of teachers (12 Canadian NS teachers and 12 Korean NNS teachers) were compared with regard to internal consistency, severity,…

  5. Comparison of computer-assisted instruction (CAI) versus traditional textbook methods for training in abdominal examination (Japanese experience).

    PubMed

    Qayumi, A K; Kurihara, Y; Imai, M; Pachev, G; Seo, H; Hoshino, Y; Cheifetz, R; Matsuura, K; Momoi, M; Saleem, M; Lara-Guerra, H; Miki, Y; Kariya, Y

    2004-10-01

    This study aimed to compare the effects of computer-assisted, text-based and computer-and-text learning conditions on the performances of 3 groups of medical students in the pre-clinical years of their programme, taking into account their academic achievement to date. A fourth group of students served as a control (no-study) group. Participants were recruited from the pre-clinical years of the training programmes in 2 medical schools in Japan, Jichi Medical School near Tokyo and Kochi Medical School near Osaka. Participants were randomly assigned to 4 learning conditions and tested before and after the study on their knowledge of and skill in performing an abdominal examination, in a multiple-choice test and an objective structured clinical examination (OSCE), respectively. Information about performance in the programme was collected from school records and students were classified as average, good or excellent. Student and faculty evaluations of their experience in the study were explored by means of a short evaluation survey. Compared to the control group, all 3 study groups exhibited significant gains in performance on knowledge and performance measures. For the knowledge measure, the gains of the computer-assisted and computer-assisted plus text-based learning groups were significantly greater than the gains of the text-based learning group. The performances of the 3 groups did not differ on the OSCE measure. Analyses of gains by performance level revealed that high achieving students' learning was independent of study method. Lower achieving students performed better after using computer-based learning methods. The results suggest that computer-assisted learning methods will be of greater help to students who do not find the traditional methods effective. Explorations of the factors behind this are a matter for future research.

  6. On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint

    PubMed Central

    Zhang, Chong; Liu, Yufeng; Wu, Yichao

    2015-01-01

    For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in an RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can have competitive prediction performance in certain situations, and comparable performance in other cases relative to the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint. PMID:27134575
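    The loss underlying quantile regression is the check (pinball) loss, ρ_τ(u) = u(τ − 1[u < 0]); minimizing its mean over a function class yields the conditional τ-quantile. A minimal sketch:

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Mean check (pinball) loss for quantile level tau in (0, 1).

    rho_tau(u) = u * (tau - 1[u < 0]); tau = 0.5 recovers (half) the
    mean absolute error, i.e., median regression."""
    u = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return np.mean(u * (tau - (u < 0)))
```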

  7. SLIC superpixels compared to state-of-the-art superpixel methods.

    PubMed

    Achanta, Radhakrishna; Shaji, Appu; Smith, Kevin; Lucchi, Aurelien; Fua, Pascal; Süsstrunk, Sabine

    2012-11-01

    Computer vision applications have come to rely increasingly on superpixels in recent years, but it is not always clear what constitutes a good superpixel algorithm. In an effort to understand the benefits and drawbacks of existing methods, we empirically compare five state-of-the-art superpixel algorithms for their ability to adhere to image boundaries, speed, memory efficiency, and their impact on segmentation performance. We then introduce a new superpixel algorithm, simple linear iterative clustering (SLIC), which adapts a k-means clustering approach to efficiently generate superpixels. Despite its simplicity, SLIC adheres to boundaries as well as or better than previous methods. At the same time, it is faster and more memory efficient, improves segmentation performance, and is straightforward to extend to supervoxel generation.
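    SLIC is available in common libraries; for example, scikit-image ships an implementation. A short usage sketch follows, with illustrative parameter values.

```python
from skimage import data
from skimage.segmentation import slic

image = data.astronaut()  # sample RGB image bundled with scikit-image
# SLIC runs a k-means-style clustering in combined color + (x, y) space;
# compactness trades color adherence against spatial regularity.
labels = slic(image, n_segments=250, compactness=10.0, start_label=1)
print(labels.shape, labels.max())  # per-pixel label map, ~250 superpixels
```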

  8. Reinforcement learning algorithms for robotic navigation in dynamic environments.

    PubMed

    Yen, Gary G; Hickey, Travis W

    2004-04-01

    The purpose of this study was to examine improvements to reinforcement learning (RL) algorithms in order to successfully interact within dynamic environments. The scope of the research was that of RL algorithms as applied to robotic navigation. Proposed improvements include: addition of a forgetting mechanism, use of feature based state inputs, and hierarchical structuring of an RL agent. Simulations were performed to evaluate the individual merits and flaws of each proposal, to compare proposed methods to prior established methods, and to compare proposed methods to theoretically optimal solutions. Incorporation of a forgetting mechanism did considerably improve the learning times of RL agents in a dynamic environment. However, direct implementation of a feature-based RL agent did not result in any performance enhancements, as pure feature-based navigation results in a lack of positional awareness, and the inability of the agent to determine the location of the goal state. Inclusion of a hierarchical structure in an RL agent resulted in significantly improved performance, specifically when one layer of the hierarchy included a feature-based agent for obstacle avoidance, and a standard RL agent for global navigation. In summary, the inclusion of a forgetting mechanism, and the use of a hierarchically structured RL agent offer substantially increased performance when compared to traditional RL agents navigating in a dynamic environment.
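    As an illustration of the forgetting idea in a tabular setting, the sketch below decays all Q-values toward their initial value each step so stale estimates fade in a changing environment. The decay rule is a generic assumption for illustration, not the paper's exact mechanism.

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95, phi=0.999, q0=0.0):
    """One tabular Q-learning step with a simple forgetting mechanism.

    Q: (n_states, n_actions) value table, updated in place.
    phi < 1 pulls every entry toward q0, so estimates of state-action
    pairs not revisited recently gradually lose their influence."""
    Q *= phi                                   # forget: decay toward q0
    Q += (1.0 - phi) * q0
    td_target = r + gamma * Q[s_next].max()    # standard Q-learning target
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q
```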

  9. Nested-PCR and a new ELISA-based NovaLisa test kit for malaria diagnosis in an endemic area of Thailand.

    PubMed

    Thongdee, Pimwan; Chaijaroenkul, Wanna; Kuesap, Jiraporn; Na-Bangchang, Kesara

    2014-08-01

    Microscopy is considered the gold standard for malaria diagnosis, although its wide application is limited by the requirement for highly experienced microscopists. PCR and serological tests provide efficient diagnostic performance and have been applied for malaria diagnosis and research. The aim of this study was to investigate the diagnostic performance of nested PCR and a recently developed ELISA-based rapid diagnostic test (RDT), the NovaLisa test kit, for diagnosis of malaria infection, using the microscopic method as the gold standard. The performance of nested PCR as a malaria diagnostic tool is excellent with respect to its high accuracy, sensitivity, specificity, and ability to discriminate Plasmodium species. The sensitivity and specificity of nested PCR compared with the microscopic method for detection of Plasmodium falciparum, Plasmodium vivax, and P. falciparum/P. vivax mixed infection were 71.4 vs 100%, 100 vs 98.7%, and 100 vs 95.0%, respectively. The sensitivity and specificity of the ELISA-based NovaLisa test kit compared with the microscopic method for detection of the Plasmodium genus were 89.0% and 91.6%, respectively. The NovaLisa test kit provided comparable diagnostic performance; its relatively low cost, simplicity, and rapidity enable large-scale field application.

  10. Cardiac arrest risk standardization using administrative data compared to registry data

    PubMed Central

    Gaieski, David F.; Donnino, Michael W.; Nelson, Joshua I. M.; Mutter, Eric L.; Carr, Brendan G.; Abella, Benjamin S.; Wiebe, Douglas J.

    2017-01-01

    Background Methods for comparing hospitals regarding cardiac arrest (CA) outcomes, vital for improving resuscitation performance, rely on data collected by cardiac arrest registries. However, most CA patients are treated at hospitals that do not participate in such registries. This study aimed to determine whether CA risk standardization modeling based on administrative data could perform as well as that based on registry data. Methods and results Two risk standardization logistic regression models were developed using 2453 patients treated from 2000–2015 at three hospitals in an academic health system. Registry and administrative data were accessed for all patients. The outcome was death at hospital discharge. The registry model was considered the “gold standard” against which to compare the administrative model, using metrics including areas under the curve, calibration curves, and Bland-Altman plots. The administrative risk standardization model had a c-statistic of 0.891 (95% CI: 0.876–0.905) compared to a registry c-statistic of 0.907 (95% CI: 0.895–0.919). When limited to only non-modifiable factors, the administrative model had a c-statistic of 0.818 (95% CI: 0.799–0.838) compared to a registry c-statistic of 0.810 (95% CI: 0.788–0.831). All models were well-calibrated. There was no significant difference between the c-statistics of the models, providing evidence that valid risk standardization can be performed using administrative data. Conclusions Risk standardization using administrative data performs comparably to standardization using registry data. This methodology represents a new tool that can enable opportunities to compare hospital performance in specific hospital systems or across the entire US in terms of survival after CA. PMID:28783754
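
    A hedged sketch of the comparison machinery, with a scikit-learn logistic regression standing in for the risk standardization models and a bootstrap confidence interval as an illustrative choice; X_admin and X_registry below are hypothetical predictor arrays, not the study's variables.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      def c_statistic(model, X, y, n_boot=500, seed=0):
          """ROC AUC (c-statistic) with a bootstrap 95% confidence interval."""
          p = model.fit(X, y).predict_proba(X)[:, 1]
          auc = roc_auc_score(y, p)
          rng = np.random.default_rng(seed)
          boots = []
          for _ in range(n_boot):
              idx = rng.integers(0, len(y), len(y))
              if np.unique(y[idx]).size == 2:          # resample must contain both classes
                  boots.append(roc_auc_score(y[idx], p[idx]))
          return auc, np.percentile(boots, [2.5, 97.5])

      # Hypothetical usage, one model per data source:
      # auc_a, ci_a = c_statistic(LogisticRegression(max_iter=1000), X_admin, y)
      # auc_r, ci_r = c_statistic(LogisticRegression(max_iter=1000), X_registry, y)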

  11. Comparison of the Calculations Results of Heat Exchange Between a Single-Family Building and the Ground Obtained with the Quasi-Stationary and 3-D Transient Models. Part 2: Intermittent and Reduced Heating Mode

    NASA Astrophysics Data System (ADS)

    Staszczuk, Anna

    2017-03-01

    The paper provides comparative results of calculations of heat exchange between the ground and typical residential buildings using simplified (quasi-stationary) and more accurate (transient, three-dimensional) methods. Characteristics such as the building's geometry, basement depth, and the construction of ground-contacting assemblies were considered, including intermittent and reduced heating modes. The calculations with simplified methods were conducted in accordance with the currently applicable standard PN-EN ISO 13370:2008, Thermal performance of buildings. Heat transfer via the ground. Calculation methods. Comparative estimates of transient, 3-D heat flow were performed with the computer software WUFI®plus. The analysis quantifies the differences in heat exchange obtained with the more exact and the simplified methods.

  12. Representing ductile damage with the dual domain material point method

    DOE PAGES

    Long, C. C.; Zhang, D. Z.; Bronkhorst, C. A.; ...

    2015-12-14

    In this study, we incorporate a ductile damage material model into a computational framework based on the Dual Domain Material Point (DDMP) method. As an example, simulations of a flyer plate experiment involving ductile void growth and material failure are performed. The results are compared with experiments performed on high purity tantalum. We also compare the numerical results obtained from the DDMP method with those obtained from the traditional Material Point Method (MPM). Effects of an overstress model, artificial viscosity, and physical viscosity are investigated. Our results show that a physical bulk viscosity and overstress model are important in this impact and failure problem, while physical shear viscosity and artificial shock viscosity have negligible effects. A simple numerical procedure with guaranteed convergence is introduced to solve for the equilibrium plastic state from the ductile damage model.

  13. Comparison of genetic algorithm methods for fuel management optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeChaine, M.D.; Feltus, M.A.

    1995-12-31

    The CIGARO system was developed for genetic algorithm fuel management optimization. Tests were performed to find the best fuel location swap mutation operator probability and to compare the genetic algorithm to a purely random search method. Tests showed the fuel swap probability should be between 0% and 10%; a probability of 50% clearly hampered the optimization. The genetic algorithm performed significantly better than the random search method, which did not even satisfy the peak normalized power constraint.
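
    A minimal sketch of the swap mutation operator under test, assuming the loading pattern is encoded as a list of assembly positions; the encoding is an illustrative assumption, not CIGARO's representation.

      import random

      def swap_mutation(pattern, p_swap=0.05):
          """With probability p_swap, exchange two randomly chosen fuel locations."""
          child = list(pattern)
          if random.random() < p_swap:
              i, j = random.sample(range(len(child)), 2)
              child[i], child[j] = child[j], child[i]
          return child

      # e.g. a loading pattern of 20 assembly IDs; 0-10% worked best per the study
      pattern = list(range(20))
      mutant = swap_mutation(pattern, p_swap=0.05)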

  14. Stochastic rainfall synthesis for urban applications using different regionalization methods

    NASA Astrophysics Data System (ADS)

    Callau Poduje, A. C.; Leimbach, S.; Haberlandt, U.

    2017-12-01

    The proper design and efficient operation of urban drainage systems require long and continuous rainfall series at a high temporal resolution. Unfortunately, such time series are usually available at only a few locations, and it is therefore useful to develop a stochastic precipitation model to generate rainfall at locations without observations. The model presented is based on an alternating renewal process and involves an external and an internal structure. The members of these structures are described by probability distributions which are site specific. Different regionalization methods based on site descriptors are presented, which are used for estimating the distributions for locations without observations. Regional frequency analysis, multiple linear regressions and a vine-copula method are applied for this purpose. An area located in the north-west of Germany is used to compare the different methods, involving a total of 81 stations with 5 min rainfall records. The site descriptors include information available for the whole region: position, topography and hydrometeorologic characteristics estimated from long term observations. The methods are compared directly by cross validation of different rainfall statistics. Given that the model is stochastic, the evaluation is performed on ensembles of many long synthetic time series which are compared with observed ones. The performance is also indirectly evaluated by setting up a fictional urban hydrological system to test the capability of the different methods regarding flooding and overflow characteristics. The results show a good representation of the seasonal variability and good performance in reproducing the sample statistics of the rainfall characteristics. The copula-based method proves to be the most robust of the three. Advantages and disadvantages of the different methods are presented and discussed.
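
    A minimal sketch of the alternating renewal idea, assuming exponential marginals for dry-spell duration, wet-spell duration and intensity; the paper fits site-specific distributions and regionalizes their parameters, so all values here are illustrative.

      import numpy as np

      rng = np.random.default_rng(1)
      mean_dry_h, mean_wet_h, mean_int = 40.0, 6.0, 1.2     # assumed parameters (hours, hours, mm/h)

      def synthesize(hours=24 * 365):
          t, events = 0.0, []
          while t < hours:
              t += rng.exponential(mean_dry_h)              # external structure: dry spell
              dur = rng.exponential(mean_wet_h)             # wet spell duration
              depth = dur * rng.exponential(mean_int)       # internal structure: event volume
              events.append((t, dur, depth))
              t += dur
          return events

      series = synthesize()                                 # one synthetic year of events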

  15. Binarization of Gray-Scaled Digital Images Via Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominquez, Jesus A.; Klinko, Steve; Voska, Ned (Technical Monitor)

    2002-01-01

    A new fast-computational technique based on a fuzzy entropy measure has been developed to find an optimal binary image threshold. In this method, the image pixel membership functions are dependent on the threshold value and reflect the distribution of pixel values in two classes; thus, this technique minimizes the classification error. The new method is compared with two of the best-known threshold selection techniques, Otsu and Huang-Wang. The performance of the proposed method surpasses that of the Huang-Wang and Otsu methods when the image consists of a textured background and poor printing quality. All three methods perform well but yield different binarization results when the background and foreground of the image have well-separated gray-level ranges.
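
    A hedged sketch of a fuzzy-entropy threshold search in the spirit of the technique, following the common Huang-Wang-style membership definition rather than necessarily the authors' exact formulation: each candidate threshold defines pixel memberships from distances to the two class means, and the threshold with minimum total fuzzy entropy is selected.

      import numpy as np

      def fuzzy_entropy_threshold(img):
          """Return the gray level minimizing Shannon fuzzy entropy (8-bit images)."""
          g = img.astype(float).ravel()
          spread = g.max() - g.min()
          best_t, best_h = None, np.inf
          for t in np.unique(g)[1:-1]:                      # candidate thresholds
              m0, m1 = g[g <= t].mean(), g[g > t].mean()    # class means
              d = np.where(g <= t, np.abs(g - m0), np.abs(g - m1))
              mu = np.clip(1.0 / (1.0 + d / spread), 1e-12, 1 - 1e-12)  # membership in own class
              h = -np.mean(mu * np.log(mu) + (1 - mu) * np.log(1 - mu)) # fuzzy entropy
              if h < best_h:
                  best_t, best_h = t, h
          return best_t

      # Usage on an 8-bit grayscale numpy array:
      # t = fuzzy_entropy_threshold(img); binary = img > t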

  16. Comparison of methods for identifying the parameters of an induction machine

    NASA Astrophysics Data System (ADS)

    Bellaaj-Mrabet, N.; Jelassi, K.

    1998-07-01

    Interest in genetic algorithms (GAs) is expanding rapidly. Genetic algorithms are adaptive methods increasingly used to solve certain optimization problems. This work first applies a GA to the identification of induction motor parameters, and then compares its performance with classical methods such as the maximum likelihood method and the classical electrotechnical method based on no-load and short-circuit tests. These methods are applied to three induction motors of different power ratings, and the results obtained are compared according to a set of criteria that allow conclusions to be drawn on the validity and performance of each method.

  17. Cost-effectiveness of peer role play and standardized patients in undergraduate communication training.

    PubMed

    Bosse, Hans Martin; Nickel, Martin; Huwendiek, Sören; Schultz, Jobst Hendrik; Nikendei, Christoph

    2015-10-24

    The few studies directly comparing the methodological approaches of peer role play (RP) and standardized patients (SP) for the delivery of communication skills all suggest that both methods are effective. In this study we calculated the costs of both methods (given comparable outcomes) and present the first differential cost-effectiveness analysis of the two. Medical students in their prefinal year were randomly assigned to one of two groups receiving communication training in Pediatrics either with RP (N = 34) or with 19 individually trained SP (N = 35). In an OSCE with standardized patients using the Calgary-Cambridge Referenced Observation Guide, both groups achieved comparably high scores (results published). In this study, the corresponding costs were assessed as man-hours of work by SPs and tutors, and a cost-effectiveness analysis was performed. The analysis revealed a major advantage for RP as compared to SP (112 vs. 172 man-hours; cost-effectiveness ratio 0.74 vs. 0.45) at comparable performance levels after training with both methods. While both peer role play and training with standardized patients have their value in medical curricula, RP has a major advantage in terms of cost-effectiveness. This could be taken into account in future decisions.

  18. Transformation From a Conventional Clinical Microbiology Laboratory to Full Automation.

    PubMed

    Moreno-Camacho, José L; Calva-Espinosa, Diana Y; Leal-Leyva, Yoseli Y; Elizalde-Olivas, Dolores C; Campos-Romero, Abraham; Alcántar-Fernández, Jonathan

    2017-12-22

    To validate the performance, reproducibility, and reliability of BD automated instruments in order to establish a fully automated clinical microbiology laboratory. We used control strains and clinical samples to assess the accuracy, reproducibility, and reliability of the BD Kiestra WCA, BD Phoenix, and BD Bruker MALDI-Biotyper instruments and compared them with previously established conventional methods. The following processes were evaluated: sample inoculation and spreading, colony counts, sorting of cultures, antibiotic susceptibility testing, and microbial identification. The BD Kiestra recovered single colonies in less time than conventional methods (e.g., E. coli, 7 h vs 10 h), and agreement between the two methodologies was excellent for colony counts (κ=0.824) and sorting of cultures (κ=0.821). Antibiotic susceptibility tests performed with the BD Phoenix and disk diffusion showed 96.3% agreement between the two methods. Finally, we compared microbial identification on the BD Phoenix and the Bruker MALDI-Biotyper and observed perfect agreement (κ=1), with identification at the species level for control strains. Together, these instruments allow us to process clinical urine samples in 36 h (effective time). The BD automated technologies have improved performance compared with conventional methods and are suitable for implementation in very busy microbiology laboratories.

  19. Mortality risk prediction in burn injury: Comparison of logistic regression with machine learning approaches.

    PubMed

    Stylianou, Neophytos; Akbarov, Artur; Kontopantelis, Evangelos; Buchan, Iain; Dunn, Ken W

    2015-08-01

    Predicting mortality from burn injury has traditionally employed logistic regression models. Alternative machine learning methods have been introduced in some areas of clinical prediction as the necessary software and computational facilities have become accessible. Here we compare logistic regression and machine learning predictions of mortality from burn. An established logistic mortality model was compared to machine learning methods (artificial neural network, support vector machine, random forests and naïve Bayes) using a population-based (England & Wales) case-cohort registry. Predictive evaluation used: area under the receiver operating characteristic curve; sensitivity; specificity; positive predictive value and Youden's index. All methods had comparable discriminatory abilities and similar sensitivities, specificities and positive predictive values. Although some machine learning methods performed marginally better than logistic regression, the differences were seldom statistically significant and clinically insubstantial. Random forests were marginally better for high positive predictive value and reasonable sensitivity. Neural networks yielded slightly better prediction overall. Logistic regression gives an optimal mix of performance and interpretability. The established logistic regression model of burn mortality performs well against more complex alternatives. Clinical prediction with a small set of strong, stable, independent predictors is unlikely to gain much from machine learning outside specialist research contexts.
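
    A hedged sketch of such a comparison on synthetic data, with scikit-learn models standing in for the study's models; the simulated class imbalance and the 0.5 decision threshold are illustrative choices. Youden's index is computed as J = sensitivity + specificity - 1.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import confusion_matrix, roc_auc_score
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=2000, weights=[0.9], random_state=0)
      Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

      for model in (LogisticRegression(max_iter=1000), RandomForestClassifier()):
          p = model.fit(Xtr, ytr).predict_proba(Xte)[:, 1]
          tn, fp, fn, tp = confusion_matrix(yte, p > 0.5).ravel()
          sens, spec = tp / (tp + fn), tn / (tn + fp)
          print(type(model).__name__,
                f"AUC={roc_auc_score(yte, p):.3f}",
                f"Youden J={sens + spec - 1:.3f}")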

  1. Three-dimensional compound comparison methods and their application in drug discovery.

    PubMed

    Shin, Woong-Hee; Zhu, Xiaolei; Bures, Mark Gregory; Kihara, Daisuke

    2015-07-16

    Virtual screening has been widely used in the drug discovery process. Ligand-based virtual screening (LBVS) methods compare a library of compounds with a known active ligand. Two notable advantages of LBVS methods are that they do not require structural information about a target receptor and that they are faster than structure-based methods. LBVS methods can be classified by the complexity of the ligand structure information utilized: one-dimensional (1D), two-dimensional (2D), and three-dimensional (3D). Unlike 1D and 2D methods, 3D methods can have enhanced performance since they treat the conformational flexibility of compounds. In this paper, a number of 3D methods are reviewed. In addition, four representative 3D methods were benchmarked to understand their performance in virtual screening. Specifically, we tested overall performance in key aspects including the ability to find dissimilar active compounds and computational speed.
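
    For contrast with the 3D methods reviewed, a sketch of a common 2D LBVS baseline, assuming RDKit Morgan fingerprints and Tanimoto similarity; the SMILES strings are illustrative.

      from rdkit import Chem, DataStructs
      from rdkit.Chem import AllChem

      query = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")        # aspirin as the known active
      library = [Chem.MolFromSmiles(s) for s in
                 ("c1ccccc1C(=O)O", "CC(=O)Nc1ccc(O)cc1")]       # tiny compound "library"

      fq = AllChem.GetMorganFingerprintAsBitVect(query, 2, nBits=2048)
      for mol in library:
          fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
          # rank library compounds by 2D similarity to the query
          print(Chem.MolToSmiles(mol), DataStructs.TanimotoSimilarity(fq, fp))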

  2. 37: COMPARISON OF TWO METHODS: TBL-BASED AND LECTURE-BASED LEARNING IN NURSING CARE OF PATIENTS WITH DIABETES IN NURSING STUDENTS

    PubMed Central

    Khodaveisi, Masoud; Qaderian, Khosro; Oshvandi, Khodayar; Soltanian, Ali Reza; Vardanjani, Mehdi molavi

    2017-01-01

    Background and aims Learning plays an important role in developing nursing skills and proper care-taking. The present study aims to evaluate two learning methods, team-based learning and lecture-based learning, for teaching the care of patients with diabetes to nursing students. Method In this quasi-experimental study, 64 fourth-term students in the nursing colleges of Bukan and Miandoab were included. A questionnaire comprising 15 knowledge questions and 5 performance questions on the care of patients with diabetes was used as the data collection tool; its reliability was confirmed by Cronbach's alpha (r=0.83). Paired t-tests were used to compare the mean scores of knowledge and performance within each group between the pre-test and post-test, and independent t-tests were used to compare means between the control and intervention groups. Results There was no statistically significant difference between the two groups in pre-test knowledge and performance scores (p=0.784). There was a significant difference in the mean post-test scores of diabetes knowledge and performance between the team-based learning group and the lecture-based learning group (p=0.001), and a significant difference between the pre-test and post-test mean scores of diabetes care knowledge within the learning groups (p=0.001). Conclusion Both team-based and lecture-based learning approaches improved student learning, but the gain was greater with the team-based approach, and it is recommended that this method be used in the education of students in higher education.

  3. Nuclear radiation environment analysis for thermoelectric outer planet spacecraft

    NASA Technical Reports Server (NTRS)

    Davis, H. S.; Koprowski, E. F.

    1972-01-01

    Neutron and gamma ray transport calculations were performed using Monte Carlo methods and a three-dimensional geometric model of the spacecraft. The results are compared with similar calculations performed for an earlier design.
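
    A toy sketch of the Monte Carlo idea behind such transport calculations, using a one-dimensional slab with an assumed attenuation coefficient rather than the spacecraft's three-dimensional model: exponentially sampled free paths reproduce Beer-Lambert attenuation.

      import numpy as np

      rng = np.random.default_rng(0)
      mu = 0.2             # total attenuation coefficient, 1/cm (assumed toy value)
      thickness = 10.0     # slab thickness, cm
      n = 100_000          # photon histories

      paths = rng.exponential(1.0 / mu, n)          # distance to first interaction
      transmitted = np.mean(paths > thickness)      # fraction crossing uncollided
      print(transmitted, np.exp(-mu * thickness))   # agrees with Beer-Lambert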

  4. Comparison of answer-until-correct and full-credit assessments in a team-based learning course.

    PubMed

    Farland, Michelle Z; Barlow, Patrick B; Levi Lancaster, T; Franks, Andrea S

    2015-03-25

    To assess the impact of awarding partial credit to team assessments on team performance and on quality of team interactions using an answer-until-correct method compared to traditional methods of grading (multiple-choice, full-credit). Subjects were students from 3 different offerings of an ambulatory care elective course, taught using team-based learning. The control group (full-credit) consisted of those enrolled in the course when traditional methods of assessment were used (2 course offerings). The intervention group consisted of those enrolled in the course when answer-until-correct method was used for team assessments (1 course offering). Study outcomes included student performance on individual and team readiness assurance tests (iRATs and tRATs), individual and team final examinations, and student assessment of quality of team interactions using the Team Performance Scale. Eighty-four students enrolled in the courses were included in the analysis (full-credit, n=54; answer-until-correct, n=30). Students who used traditional methods of assessment performed better on iRATs (full-credit mean 88.7 (5.9), answer-until-correct mean 82.8 (10.7), p<0.001). Students who used answer-until-correct method of assessment performed better on the team final examination (full-credit mean 45.8 (1.5), answer-until-correct 47.8 (1.4), p<0.001). There was no significant difference in performance on tRATs and the individual final examination. Students who used the answer-until-correct method had higher quality of team interaction ratings (full-credit 97.1 (9.1), answer-until-correct 103.0 (7.8), p=0.004). Answer-until-correct assessment method compared to traditional, full-credit methods resulted in significantly lower scores for iRATs, similar scores on tRATs and individual final examinations, improved scores on team final examinations, and improved perceptions of the quality of team interactions.

  5. Feature ranking and rank aggregation for automatic sleep stage classification: a comparative study.

    PubMed

    Najdi, Shirin; Gharbali, Ali Abdollahi; Fonseca, José Manuel

    2017-08-18

    Nowadays, sleep quality is one of the most important measures of healthy life, especially considering the huge number of sleep-related disorders. Identifying sleep stages using polysomnographic (PSG) signals is the traditional way of assessing sleep quality. However, the manual process of sleep stage classification is time-consuming, subjective and costly. Therefore, in order to improve the accuracy and efficiency of sleep stage classification, researchers have been trying to develop automatic classification algorithms. Automatic sleep stage classification mainly consists of three steps: pre-processing, feature extraction and classification. Since classification accuracy is deeply affected by the extracted features, a poor feature vector will adversely affect the classifier and eventually lead to low classification accuracy. Therefore, special attention should be given to the feature extraction and selection process. In this paper the performance of seven feature selection methods, as well as two feature rank aggregation methods, was compared. Pz-Oz EEG, horizontal EOG and submental chin EMG recordings of 22 healthy males and females were used. A comprehensive feature set including 49 features was extracted from these recordings. The extracted features are among the most common and effective features used in sleep stage classification, drawn from temporal, spectral, entropy-based and nonlinear categories. The feature selection methods were evaluated and compared using three criteria: classification accuracy, stability, and similarity. Simulation results show that MRMR-MID achieves the highest classification performance while the Fisher method provides the most stable ranking. In our simulations, the performance of the aggregation methods was average, although they are known to generate more stable results and better accuracy. The Borda and RRA rank aggregation methods could not significantly outperform the conventional feature ranking methods. Among the conventional methods, some performed slightly better than others, although the choice of a suitable technique depends on the computational complexity and accuracy requirements of the user.
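
    A minimal sketch of Borda-count rank aggregation, one of the schemes compared; the feature names are invented for illustration.

      rankings = [                              # best-to-worst orderings from different selectors
          ["spectral_edge", "entropy", "variance", "zero_cross"],
          ["entropy", "variance", "spectral_edge", "zero_cross"],
          ["entropy", "spectral_edge", "zero_cross", "variance"],
      ]

      features = rankings[0]
      n = len(features)
      score = {f: 0 for f in features}
      for r in rankings:
          for pos, f in enumerate(r):
              score[f] += n - pos               # Borda points: n-1 for first place, 0 for last

      aggregated = sorted(features, key=lambda f: -score[f])
      print(aggregated)                         # consensus ranking, e.g. entropy first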

  6. Comparative study on gene set and pathway topology-based enrichment methods.

    PubMed

    Bayerlová, Michaela; Jung, Klaus; Kramer, Frank; Klemm, Florian; Bleckmann, Annalen; Beißbarth, Tim

    2015-10-22

    Enrichment analysis is a popular approach to identify pathways or sets of genes which are significantly enriched in the context of differentially expressed genes. The traditional gene set enrichment approach considers a pathway as a simple gene list, disregarding any knowledge of gene or protein interactions. In contrast, the new group of so-called pathway topology-based methods integrates the topological structure of a pathway into the analysis. We comparatively investigated gene set and pathway topology-based enrichment approaches, considering three gene set and four topological methods. These methods were compared in two extensive simulation studies and on a benchmark of 36 real datasets, providing the same pathway input data for all methods. In the benchmark data analysis both types of methods showed a comparable ability to detect enriched pathways. The first simulation study was conducted with KEGG pathways, which showed considerable gene overlaps between each other. In this study with original KEGG pathways, none of the topology-based methods outperformed the gene set approach. Therefore, a second simulation study was performed on non-overlapping pathways created from unique gene IDs. Here, methods accounting for pathway topology reached higher accuracy than the gene set methods; however, their sensitivity was lower. We conducted one of the first comprehensive comparative works evaluating gene set against pathway topology-based enrichment methods. The topological methods showed better performance in the simulation scenarios with non-overlapping pathways; however, they were not conclusively better in the other scenarios. This suggests that the simple gene set approach might be sufficient to detect an enriched pathway under realistic circumstances. Nevertheless, more extensive studies and further benchmark data are needed to systematically evaluate these methods and to assess what gain and cost pathway topology information introduces into enrichment analysis. Both types of methods require further improvements in order to deal with the problem of pathway overlaps.
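
    A minimal sketch of the simple gene set (over-representation) baseline that the topology-aware methods are compared against: a hypergeometric test on the overlap between a pathway gene list and the differentially expressed genes. The counts are invented for illustration.

      from scipy.stats import hypergeom

      N = 20000      # genes in the background
      K = 150        # genes in the pathway
      n = 400        # differentially expressed (DE) genes
      k = 12         # DE genes that fall in the pathway

      # P(overlap >= k) when n genes are drawn at random from the background
      p_value = hypergeom.sf(k - 1, N, K, n)
      print(f"enrichment p = {p_value:.3g}")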

  7. Iterative-method performance evaluation for multiple vectors associated with a large-scale sparse matrix

    NASA Astrophysics Data System (ADS)

    Imamura, Seigo; Ono, Kenji; Yokokawa, Mitsuo

    2016-07-01

    Ensemble computing, which is an instance of capacity computing, is an effective computing scenario for exascale parallel supercomputers. In ensemble computing, there are multiple linear systems associated with a common coefficient matrix. We improve the performance of iterative solvers for multiple vectors by solving them at the same time, that is, by operating on the matrix of right-hand-side vectors as a whole. We implemented several iterative methods and compared their performance. The maximum performance on the SPARC64 VIIIfx was 7.6 times higher than that of a naïve implementation. Finally, to deal with the different convergence processes of the linear systems, we introduced a control method to eliminate the calculation of already converged vectors.
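
    A minimal sketch of the idea, with Jacobi iteration standing in for the paper's solvers: all right-hand sides advance together as one matrix-matrix product, and converged columns are removed from the active set, mirroring the convergence control described.

      import numpy as np

      def jacobi_multi(A, B, tol=1e-8, max_iter=5000):
          """Solve A X = B for many right-hand sides at once (A diagonally dominant)."""
          D = np.diag(A)
          R = A - np.diag(D)
          X = np.zeros_like(B)
          active = np.arange(B.shape[1])            # columns still iterating
          for _ in range(max_iter):
              Xa = (B[:, active] - R @ X[:, active]) / D[:, None]
              done = np.linalg.norm(Xa - X[:, active], axis=0) < tol
              X[:, active] = Xa
              active = active[~done]                # stop work on converged vectors
              if active.size == 0:
                  break
          return X

      A = np.array([[4.0, 1, 0], [1, 4, 1], [0, 1, 4]])    # diagonally dominant test matrix
      B = np.random.default_rng(0).normal(size=(3, 8))     # 8 right-hand sides
      X = jacobi_multi(A, B)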

  8. Automatic Keyword Identification by Artificial Neural Networks Compared to Manual Identification by Users of Filtering Systems.

    ERIC Educational Resources Information Center

    Boger, Zvi; Kuflik, Tsvi; Shoval, Peretz; Shapira, Bracha

    2001-01-01

    Discussion of information filtering (IF) and information retrieval focuses on the use of an artificial neural network (ANN) as an alternative method for both IF and term selection and compares its effectiveness to that of traditional methods. Results show that the ANN relevance prediction outperforms the prediction of an IF system. (Author/LRW)

  9. A comparison of models for estimating potential evapotranspiration for Florida land cover types

    USGS Publications Warehouse

    Douglas, Ellen M.; Jacobs, Jennifer M.; Sumner, David M.; Ray, Ram L.

    2013-01-01

    We analyzed observed daily evapotranspiration (DET) at 18 sites having measured DET and ancillary climate data and then used these data to compare the performance of three common methods for estimating potential evapotranspiration (PET): the Turc method (Tc), the Priestley-Taylor method (PT) and the Penman-Monteith method (PM). The sites were distributed throughout the State of Florida and represent a variety of land cover types: open water (3), marshland (4), grassland/pasture (4), citrus (2) and forest (5). Not surprisingly, the highest DET values occurred at the open water sites, ranging from an average of 3.3 mm d⁻¹ in the winter to 5.3 mm d⁻¹ in the spring. DET at the marsh sites was also high, ranging from 2.7 mm d⁻¹ in winter to 4.4 mm d⁻¹ in summer. The lowest DET occurred in the winter and fall seasons at the grass sites (1.3 mm d⁻¹ and 2.0 mm d⁻¹, respectively) and at the forested sites (1.8 mm d⁻¹ and 2.3 mm d⁻¹, respectively). The performance of the three methods when applied to conditions close to PET (Bowen ratio ≤ 1) was used to judge relative merit. Under such PET conditions, annually aggregated Tc and PT methods perform comparably and outperform the PM method, possibly due to the sensitivity of the PM method to the limited transferability of previously determined model parameters. At a daily scale, the PT performance appears to be superior to the other two methods for estimating PET for a variety of land covers in Florida.
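
    For reference, a sketch of the Priestley-Taylor estimate using the standard Tetens slope formula and typical constants; the authors' exact parameter choices may differ.

      import math

      def priestley_taylor_pet(t_air_c, rn, g=0.0, alpha=1.26, gamma=0.066):
          """PET in mm/day; t_air_c in deg C, rn and g in MJ m-2 day-1."""
          es = 0.6108 * math.exp(17.27 * t_air_c / (t_air_c + 237.3))   # saturation vapor pressure, kPa
          delta = 4098.0 * es / (t_air_c + 237.3) ** 2                  # slope of the es curve, kPa/degC
          lam = 2.45                                                    # latent heat of vaporization, MJ/kg
          return alpha * (delta / (delta + gamma)) * (rn - g) / lam

      print(priestley_taylor_pet(t_air_c=25.0, rn=15.0))   # ~5.7 mm/day for a warm, sunny day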

  10. Image enhancement for on-site X-ray nondestructive inspection of reinforced concrete structures.

    PubMed

    Pei, Cuixiang; Wu, Wenjing; Ueaska, Mitsuru

    2016-11-22

    The use of a portable, high-energy X-ray system provides a very promising approach for on-site nondestructive inspection of the inner steel reinforcement of concrete structures. However, the noise properties and contrast of radiographic images of thick concrete structures often do not meet the demands. To enhance the images, we present a simple and effective method for noise reduction based on a combined curvelet-wavelet transform, and local contrast enhancement based on neighborhood operations. To investigate the performance of this method for our X-ray system, we performed several experiments using simulated and experimental data. Compared with traditional methods, the proposed image enhancement method performs better and can significantly improve the inspection performance for reinforced concrete structures.
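
    A partial sketch covering only the wavelet half of the combined transform (the curvelet stage requires a dedicated library), assuming PyWavelets and a universal soft threshold estimated from the finest diagonal subband; all parameters are illustrative.

      import numpy as np
      import pywt

      def wavelet_denoise(img, wavelet="db2", level=2):
          coeffs = pywt.wavedec2(img, wavelet, level=level)
          # noise level estimated from the finest diagonal detail subband
          sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
          thr = sigma * np.sqrt(2 * np.log(img.size))            # universal threshold
          out = [coeffs[0]]                                      # keep the approximation untouched
          for detail in coeffs[1:]:
              out.append(tuple(pywt.threshold(d, thr, mode="soft") for d in detail))
          return pywt.waverec2(out, wavelet)

      noisy = np.random.default_rng(0).normal(size=(128, 128))   # stand-in radiograph
      clean = wavelet_denoise(noisy)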

  11. Measuring relative performance of an EDS detector using a NiO standard.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sugar, Joshua Daniel; Kotula, Paul Gabriel

    2013-09-01

    A method for measuring the relative performance of energy dispersive spectrometers (EDS) on a TEM is discussed. A NiO thin-film standard fabricated at Sandia CA is used. A performance parameter is measured and compared to values on several TEM systems.

  12. Automatic frequency and phase alignment of in vivo J-difference-edited MR spectra by frequency domain correlation.

    PubMed

    Wiegers, Evita C; Philips, Bart W J; Heerschap, Arend; van der Graaf, Marinette

    2017-12-01

    J-difference editing is often used to select resonances of compounds with coupled spins in ¹H-MR spectra. Accurate phase and frequency alignment prior to subtracting J-difference-edited MR spectra is important to avoid artefactual contributions to the edited resonance. In vivo J-difference-edited MR spectra were aligned by maximizing the normalized scalar product between two spectra (i.e., the correlation over a spectral region). The performance of our correlation method was compared with alignment by spectral registration and by alignment of the highest point in two spectra. The correlation method was tested at different SNR levels and for a broad range of phase and frequency shifts. In vivo application of the proposed correlation method showed reduced subtraction errors and increased fit reliability in difference spectra as compared with conventional peak alignment. The correlation method and the spectral registration method generally performed equally well. However, better alignment using the correlation method was obtained for spectra with a low SNR (down to ~2) and for relatively large frequency shifts. Our correlation method for simultaneous phase and frequency alignment is able to correct both small and large phase and frequency drifts and also performs well at low SNR levels.
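
    A minimal sketch of the core alignment idea, assuming magnitude spectra and integer-point shifts (phase correction is omitted for brevity): the shift that maximizes the normalized scalar product is selected.

      import numpy as np

      def best_shift(ref, spec, max_shift=50):
          """Return the shift of `spec` (in points) that best correlates with `ref`."""
          scores = []
          for s in range(-max_shift, max_shift + 1):
              shifted = np.roll(spec, s)
              num = np.dot(ref, shifted)
              den = np.linalg.norm(ref) * np.linalg.norm(shifted)
              scores.append(num / den)                  # normalized scalar product
          return np.argmax(scores) - max_shift

      rng = np.random.default_rng(0)
      ref = np.exp(-0.01 * (np.arange(1024) - 500) ** 2)        # synthetic peak
      spec = np.roll(ref, 7) + rng.normal(0, 0.05, 1024)        # shifted, noisy copy
      print(best_shift(ref, spec))                              # recovers a shift close to -7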

  13. Multi-feature machine learning model for automatic segmentation of green fractional vegetation cover for high-throughput field phenotyping.

    PubMed

    Sadeghi-Tehran, Pouria; Virlet, Nicolas; Sabermanesh, Kasra; Hawkesford, Malcolm J

    2017-01-01

    Accurately segmenting vegetation from the background within digital images is both a fundamental and a challenging task in phenotyping. The performance of traditional methods is satisfactory in homogeneous environments; however, performance decreases when they are applied to images acquired in dynamic field environments. In this paper, a multi-feature learning method is proposed to quantify vegetation growth in outdoor field conditions. The introduced technique is compared with state-of-the-art and other learning methods on digital images. All methods are compared and evaluated under different environmental conditions and with the following criteria: (1) comparison with ground-truth images, (2) variation along a day with changes in ambient illumination, (3) comparison with manual measurements, and (4) an estimation of performance along the full life cycle of a wheat canopy. The method described is capable of coping with the environmental challenges faced in field conditions, with high levels of adaptiveness and without the need to adjust a threshold for each digital image. The proposed method is also an ideal candidate for processing time series of phenotypic information acquired in the field throughout the crop growth cycle. Moreover, the introduced method is not limited to growth measurements but can be applied to other tasks, such as identifying weeds, diseases, or stress.

  14. Phenols removal using ozonation-adsorption with granular activated carbon (GAC) in rotating packed bed reactor

    NASA Astrophysics Data System (ADS)

    Karamah, E. F.; Leonita, S.; Bismo, S.

    2018-01-01

    Synthetic wastewater containing phenols was treated using a combined ozonation-adsorption method with GAC (granular activated carbon) in a rotating packed bed reactor. Ozone reacts quickly with phenol, and activated carbon enhances the oxidation process by producing hydroxyl radicals. The performance parameters evaluated were the phenol removal percentage, the quantity of hydroxyl radicals formed, changes in pH and ozone utilization, the dissolved ozone concentration, and the ozone concentration in the off-gas. The performance of the combined method was compared with single ozonation and single adsorption, and the influence of GAC dose and initial pH was evaluated for the ozonation-adsorption method. The results show that the ozonation-adsorption method generates more OH radicals than single ozonation, and that the quantity of OH radicals formed increases with increasing pH and GAC dose. The combined method performed better in removing phenols: at the same operating conditions, ozonation-adsorption removed 78.62% of phenols, compared with single ozonation (53.15%) and single adsorption (36.67%). The percentage of phenol removal in the ozonation-adsorption method increases with GAC dose, solution pH, and the rotational speed of the packed bed. Maximum phenol removal was obtained under alkaline conditions (pH 10) with 125 g of GAC.

  15. Comparison of three commercially available fit-test methods.

    PubMed

    Janssen, Larry L; Luinenburg, D Michael; Mullins, Haskell E; Nelson, Thomas J

    2002-01-01

    American National Standards Institute (ANSI) standard Z88.10, Respirator Fit Testing Methods, includes criteria to evaluate new fit-tests. The standard allows generated aerosol, particle counting, or controlled negative pressure quantitative fit-tests to be used as the reference method to determine acceptability of a new test. This study examined (1) comparability of three Occupational Safety and Health Administration-accepted fit-test methods, all of which were validated using generated aerosol as the reference method; and (2) the effect of the reference method on the apparent performance of a fit-test method under evaluation. Sequential fit-tests were performed using the controlled negative pressure and particle counting quantitative fit-tests and the bitter aerosol qualitative fit-test. Of 75 fit-tests conducted with each method, the controlled negative pressure method identified 24 failures; bitter aerosol identified 22 failures; and the particle counting method identified 15 failures. The sensitivity of each method, that is, agreement with the reference method in identifying unacceptable fits, was calculated using each of the other two methods as the reference. None of the test methods met the ANSI sensitivity criterion of 0.95 or greater when compared with either of the other two methods. These results demonstrate that (1) the apparent performance of any fit-test depends on the reference method used, and (2) the fit-tests evaluated use different criteria to identify inadequately fitting respirators. Although "acceptable fit" cannot be defined in absolute terms at this time, the ability of existing fit-test methods to reject poor fits can be inferred from workplace protection factor studies.

  16. Classifying four-category visual objects using multiple ERP components in single-trial ERP.

    PubMed

    Qin, Yu; Zhan, Yu; Wang, Changming; Zhang, Jiacai; Yao, Li; Guo, Xiaojuan; Wu, Xia; Hu, Bin

    2016-08-01

    Object categorization using single-trial electroencephalography (EEG) data measured while participants view images has been studied intensively. In previous studies, multiple event-related potential (ERP) components (e.g., P1, N1, P2, and P3) were used to improve the performance of object categorization of visual stimuli. In this study, we introduce a novel method that uses a multiple-kernel support vector machine to fuse multiple ERP component features. We investigate whether fusing the potential complementary information of different ERP components (e.g., P1, N1, P2a, and P2b) can improve the performance of four-category visual object classification in single-trial EEGs. We also compare the classification accuracy of different ERP component fusion methods. Our experimental results indicate that the classification accuracy increases through multiple ERP fusion. Additional comparative analyses indicate that the multiple-kernel fusion method can achieve a mean classification accuracy higher than 72%, which is substantially better than that achieved with any single ERP component feature (55.07% for the best single ERP component, N1). We compare the classification results with those of other fusion methods and determine that the accuracy of the multiple-kernel fusion method is 5.47, 4.06, and 16.90% higher than those of feature concatenation, feature extraction, and decision fusion, respectively. Our study shows that our multiple-kernel fusion method outperforms other fusion methods and thus provides a means to improve the classification performance of single-trial ERPs in brain-computer interface research.
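
    A hedged sketch of kernel fusion with scikit-learn, using fixed equal weights in place of the learned weights of a full multiple-kernel method; the two random blocks stand in for features of different ERP components, and the in-sample score is shown only to illustrate the mechanics.

      import numpy as np
      from sklearn.metrics.pairwise import rbf_kernel
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      n = 200
      X_p1 = rng.normal(size=(n, 10))                      # features from one ERP component
      X_n1 = rng.normal(size=(n, 12))                      # features from another component
      y = rng.integers(0, 2, n)

      K = 0.5 * rbf_kernel(X_p1) + 0.5 * rbf_kernel(X_n1)  # fused (weighted-sum) kernel
      clf = SVC(kernel="precomputed").fit(K, y)
      print(clf.score(K, y))                               # training accuracy on the fused kernel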

  17. Comparing Methods for UAV-Based Autonomous Surveillance

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Harris, Robert; Shafto, Michael

    2004-01-01

    We describe an approach to evaluating algorithmic and human performance in directing UAV-based surveillance. Its key elements are a decision-theoretic framework for measuring the utility of a surveillance schedule and an evaluation testbed consisting of 243 scenarios covering a well-defined space of possible missions. We apply this approach to two example UAV-based surveillance methods, a TSP-based algorithm and a human-directed approach, then compare them to identify general strengths and weaknesses of each method.

  18. Comparative evaluation of gel column agglutination and erythrocyte magnetized technology for red blood cell alloantibody titration.

    PubMed

    Dubey, Anju; Sonker, Atul; Chaudhary, Rajendra K

    2015-01-01

    Antibody titration is traditionally performed using a conventional test tube (CTT) method, which is subject to interlaboratory variation because of a lack of standardization and reproducibility. The aim of this study was to compare newer methods such as gel column technology (GCT) and erythrocyte magnetized technology (EMT) for antibody titration in terms of accuracy and precision. Patient serum samples that contained immunoglobulin G (IgG) red blood cell (RBC) alloantibodies of a single specificity for Rh or K antigens were identified during routine transfusion service testing and stored. Titration and scoring were performed separately by different laboratory personnel on CTT, GCT, and EMT. Testing was performed a total of three times on each sample, and results were analyzed for accuracy and precision. A total of 50 samples were tested. Only 20 percent of samples tested with GCT showed titers identical to CTT, whereas 48 percent of samples tested with EMT showed titers identical to CTT. Overall, the mean titer difference from CTT was higher using GCT (+0.31) than using EMT (+0.13). Precision on repeat testing was 30 percent for CTT, 76 percent for EMT, and 92 percent for GCT. GCT showed higher titer values in comparison with CTT but was found to be the most precise. EMT titers were comparable to CTT, and its precision was intermediate. Further studies to validate these methods are required.

  19. Denoising of Raman spectroscopy for biological samples based on empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    León-Bejarano, Fabiola; Ramírez-Elías, Miguel; Mendez, Martin O.; Dorantes-Méndez, Guadalupe; Rodríguez-Aranda, Ma. Del Carmen; Alba, Alfonso

    Raman spectroscopy of biological samples presents undesirable noise and fluorescence generated by biomolecular excitation. Reducing these types of noise is a fundamental task in recovering the valuable information in the sample under analysis. This paper proposes the application of empirical mode decomposition (EMD) for noise elimination. EMD is a parameter-free, adaptive signal processing method useful for the analysis of nonstationary signals. EMD performance was compared with the commonly used Vancouver algorithm (VRA) on artificial (Teflon), synthetic (vitamin E and paracetamol) and biological (mouse brain and human nails) Raman spectra. The correlation coefficient (ρ) was used as the performance measure. Results on artificial data showed better performance of EMD (ρ=0.52) at high noise levels compared with VRA (ρ=0.19). When simulated fluorescence was added to the artificial material, both methods recovered a similar fluorescence shape (ρ=0.95 for VRA and ρ=0.93 for EMD). For synthetic data, Raman spectra of vitamin E were used, and both methods performed well (ρ=0.95 for EMD and ρ=0.99 for VRA). Finally, on biological data, EMD and VRA displayed similar behavior (ρ=0.85 for EMD and ρ=0.96 for VRA), but with the advantage that EMD preserves small-amplitude Raman peaks. The results suggest that EMD could be an effective method for denoising biological Raman spectra: it retains information and correctly eliminates fluorescence without parameter tuning.
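
    A hedged sketch using the third-party PyEMD package (an assumption; the paper describes its own EMD implementation): the first IMF is discarded as high-frequency noise and the slowest component as the fluorescence baseline, assuming the decomposition yields several IMFs.

      import numpy as np
      from PyEMD import EMD

      x = np.linspace(0, 1, 1000)
      raman = np.exp(-((x - 0.5) / 0.01) ** 2)                   # synthetic Raman peak
      signal = raman + 2 * x + np.random.default_rng(0).normal(0, 0.05, x.size)

      imfs = EMD().emd(signal)                                    # IMFs ordered fastest to slowest
      denoised = imfs[1:-1].sum(axis=0)                           # drop the noise IMF and the trend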

  20. Comparing methods suitable for monitoring marine mammals in low visibility conditions during seismic surveys.

    PubMed

    Verfuss, Ursula K; Gillespie, Douglas; Gordon, Jonathan; Marques, Tiago A; Miller, Brianne; Plunkett, Rachael; Theriault, James A; Tollit, Dominic J; Zitterbart, Daniel P; Hubert, Philippe; Thomas, Len

    2018-01-01

    Loud sound emitted during offshore industrial activities can impact marine mammals. Regulations typically prescribe marine mammal monitoring before and/or during these activities to implement mitigation measures that minimise potential acoustic impacts. Using seismic surveys under low visibility conditions as a case study, we review which monitoring methods are suitable and compare their relative strengths and weaknesses. Passive acoustic monitoring has been implemented as either a complementary or alternative method to visual monitoring in low visibility conditions. Other methods such as RADAR, active sonar and thermal infrared have also been tested, but are rarely recommended by regulatory bodies. The efficiency of the monitoring method(s) will depend on animal behaviour and environmental conditions; however, using a combination of complementary systems generally improves the overall detection performance. We recommend that the performance of monitoring systems, over a range of conditions, be explored in a modelling framework for a variety of species.

  1. The BUME method: a new rapid and simple chloroform-free method for total lipid extraction of animal tissue

    NASA Astrophysics Data System (ADS)

    Löfgren, Lars; Forsberg, Gun-Britt; Ståhlman, Marcus

    2016-06-01

    In this study we present a simple and rapid method for tissue lipid extraction. Snap-frozen tissue (15-150 mg) is collected in 2 ml homogenization tubes. 500 μl of BUME mixture (butanol:methanol [3:1]) is added, and automated homogenization of up to 24 frozen samples at a time is performed in less than 60 seconds, followed by a 5-minute single-phase extraction. After the addition of 500 μl heptane:ethyl acetate (3:1) and 500 μl 1% acetic acid, a 5-minute two-phase extraction is performed. Lipids are recovered from the upper phase by automated liquid handling using a standard 96-tip robot. A second two-phase extraction is performed using 500 μl heptane:ethyl acetate (3:1). Validation of the method showed that the extraction recoveries for the investigated lipids, which included sterols, glycerolipids, glycerophospholipids and sphingolipids, were similar to or better than those of the Folch method. We also applied the method to lipid extraction of liver and heart and compared the lipid species profiles with profiles generated after Folch and MTBE extraction. We conclude that the BUME method is superior to the Folch method in terms of simplicity, throughput, automation, solvent consumption, economy, health and environmental impact, while delivering lipid recoveries fully comparable to or better than those of the Folch method.

  2. An optimized computational method for determining the beta dose distribution using a multiple-element thermoluminescent dosimeter system.

    PubMed

    Shen, L; Levine, S H; Catchen, G L

    1987-07-01

    This paper describes an optimization method for determining the beta dose distribution in tissue, along with the associated testing and verification. The method uses electron transport theory and optimization techniques to analyze the responses of a three-element thermoluminescent dosimeter (TLD) system. Specifically, the method determines the effective beta energy distribution incident on the dosimeter system, so that the system performs as a beta spectrometer. Electron transport theory provides the mathematical model for performing the optimization calculation. In this calculation, parameters are determined that produce calculated doses for each of the chip/absorber components in the three-element TLD system. The resulting optimized parameters describe an effective incident beta distribution. This method can be used to determine the beta dose specifically at 7 mg cm⁻² or at any depth of interest. The doses at 7 mg cm⁻² in tissue determined by this method are compared to those experimentally determined using an extrapolation chamber. For a great variety of pure beta sources having different incident beta energy distributions, good agreement is found. The results are also compared to those produced by a commonly used empirical algorithm. Although the optimization method produces somewhat better results, its advantage is that its performance is not sensitive to the specific method of calibration.

  3. Comparative analysis of methods for detecting interacting loci

    PubMed Central

    2011-01-01

    Background Interactions among genetic loci are believed to play an important role in disease risk. While many methods have been proposed for detecting such interactions, their relative performance remains largely unclear, mainly because different data sources, detection performance criteria, and experimental protocols were used in the papers introducing these methods and in subsequent studies. Moreover, there have been very few studies strictly focused on comparison of existing methods. Given the importance of detecting gene-gene and gene-environment interactions, a rigorous, comprehensive comparison of performance and limitations of available interaction detection methods is warranted. Results We report a comparison of eight representative methods, of which seven were specifically designed to detect interactions among single nucleotide polymorphisms (SNPs), with the last a popular main-effect testing method used as a baseline for performance evaluation. The selected methods, multifactor dimensionality reduction (MDR), full interaction model (FIM), information gain (IG), Bayesian epistasis association mapping (BEAM), SNP harvester (SH), maximum entropy conditional probability modeling (MECPM), logistic regression with an interaction term (LRIT), and logistic regression (LR) were compared on a large number of simulated data sets, each, consistent with complex disease models, embedding multiple sets of interacting SNPs, under different interaction models. The assessment criteria included several relevant detection power measures, family-wise type I error rate, and computational complexity. There are several important results from this study. First, while some SNPs in interactions with strong effects are successfully detected, most of the methods miss many interacting SNPs at an acceptable rate of false positives. In this study, the best-performing method was MECPM. Second, the statistical significance assessment criteria, used by some of the methods to control the type I error rate, are quite conservative, thereby limiting their power and making it difficult to fairly compare them. Third, as expected, power varies for different models and as a function of penetrance, minor allele frequency, linkage disequilibrium and marginal effects. Fourth, the analytical relationships between power and these factors are derived, aiding in the interpretation of the study results. Fifth, for these methods the magnitude of the main effect influences the power of the tests. Sixth, most methods can detect some ground-truth SNPs but have modest power to detect the whole set of interacting SNPs. Conclusion This comparison study provides new insights into the strengths and limitations of current methods for detecting interacting loci. This study, along with freely available simulation tools we provide, should help support development of improved methods. The simulation tools are available at: http://code.google.com/p/simulation-tool-bmc-ms9169818735220977/downloads/list. PMID:21729295

  4. Comparative analysis of methods for detecting interacting loci.

    PubMed

    Chen, Li; Yu, Guoqiang; Langefeld, Carl D; Miller, David J; Guy, Richard T; Raghuram, Jayaram; Yuan, Xiguo; Herrington, David M; Wang, Yue

    2011-07-05

    Interactions among genetic loci are believed to play an important role in disease risk. While many methods have been proposed for detecting such interactions, their relative performance remains largely unclear, mainly because different data sources, detection performance criteria, and experimental protocols were used in the papers introducing these methods and in subsequent studies. Moreover, there have been very few studies strictly focused on comparison of existing methods. Given the importance of detecting gene-gene and gene-environment interactions, a rigorous, comprehensive comparison of performance and limitations of available interaction detection methods is warranted. We report a comparison of eight representative methods, of which seven were specifically designed to detect interactions among single nucleotide polymorphisms (SNPs), with the last a popular main-effect testing method used as a baseline for performance evaluation. The selected methods, multifactor dimensionality reduction (MDR), full interaction model (FIM), information gain (IG), Bayesian epistasis association mapping (BEAM), SNP harvester (SH), maximum entropy conditional probability modeling (MECPM), logistic regression with an interaction term (LRIT), and logistic regression (LR) were compared on a large number of simulated data sets, each, consistent with complex disease models, embedding multiple sets of interacting SNPs, under different interaction models. The assessment criteria included several relevant detection power measures, family-wise type I error rate, and computational complexity. There are several important results from this study. First, while some SNPs in interactions with strong effects are successfully detected, most of the methods miss many interacting SNPs at an acceptable rate of false positives. In this study, the best-performing method was MECPM. Second, the statistical significance assessment criteria, used by some of the methods to control the type I error rate, are quite conservative, thereby limiting their power and making it difficult to fairly compare them. Third, as expected, power varies for different models and as a function of penetrance, minor allele frequency, linkage disequilibrium and marginal effects. Fourth, the analytical relationships between power and these factors are derived, aiding in the interpretation of the study results. Fifth, for these methods the magnitude of the main effect influences the power of the tests. Sixth, most methods can detect some ground-truth SNPs but have modest power to detect the whole set of interacting SNPs. This comparison study provides new insights into the strengths and limitations of current methods for detecting interacting loci. This study, along with freely available simulation tools we provide, should help support development of improved methods. The simulation tools are available at: http://code.google.com/p/simulation-tool-bmc-ms9169818735220977/downloads/list.
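
    A minimal sketch of the LRIT baseline from the comparison, assuming statsmodels and simulated 0/1/2 minor-allele counts; the data-generating model is illustrative, not the study's simulation design.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n = 1000
      df = pd.DataFrame({"snp1": rng.integers(0, 3, n), "snp2": rng.integers(0, 3, n)})
      logit = -1.0 + 0.1 * df.snp1 + 0.1 * df.snp2 + 0.5 * df.snp1 * df.snp2
      df["case"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

      # 'snp1 * snp2' expands to both main effects plus the interaction snp1:snp2
      fit = smf.logit("case ~ snp1 * snp2", data=df).fit(disp=0)
      print(fit.pvalues["snp1:snp2"])          # significance of the interaction term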

  5. The Effect of the Math Emporium Instructional Method on Students' Performance in College Algebra

    ERIC Educational Resources Information Center

    Cousins-Cooper, Kathy; Staley, Katrina N.; Kim, Seongtae; Luke, Nicholas S.

    2017-01-01

    This study aims to investigate the effectiveness of the Emporium instructional method in a course of college algebra and trigonometry by comparing to the traditional lecture method. The math emporium method is a nontraditional instructional method of learning math that has been implemented at several universities with much success and has been…

  6. Evaluation of analytical performance of a new high-sensitivity immunoassay for cardiac troponin I.

    PubMed

    Masotti, Silvia; Prontera, Concetta; Musetti, Veronica; Storti, Simona; Ndreu, Rudina; Zucchelli, Gian Carlo; Passino, Claudio; Clerico, Aldo

    2018-02-23

    The study aim was to evaluate and compare the analytical performance of the new chemiluminescent immunoassay for cardiac troponin I (cTnI), called Access hs-TnI and run on the DxI platform, with those of the Access AccuTnI+3 method and the high-sensitivity (hs) cTnI method for the ARCHITECT platform. The limits of blank (LoB), detection (LoD) and quantitation (LoQ) at 10% and 20% CV were evaluated according to international standardized protocols. For the evaluation of analytical performance and comparison of cTnI results, both heparinized plasma samples, collected from healthy subjects and patients with cardiac diseases, and quality control samples distributed in external quality assessment programs were used. The LoB, LoD, and LoQ at 20% and 10% CV values of the Access hs-cTnI method were 0.6, 1.3, 2.1 and 5.3 ng/L, respectively. The Access hs-cTnI method showed analytical performance significantly better than that of the Access AccuTnI+3 method and similar to that of the hs ARCHITECT cTnI method. Moreover, the cTnI concentrations measured with the Access hs-cTnI method showed close linear regressions with both the Access AccuTnI+3 and ARCHITECT hs-cTnI methods, although there were systematic differences between these methods. There was no difference between cTnI values measured by Access hs-cTnI in heparinized plasma and serum samples, whereas there was a significant difference between cTnI values measured in EDTA and heparin plasma samples, respectively. In conclusion, the Access hs-cTnI method has significantly improved analytical sensitivity compared with the Access AccuTnI+3 method, similar to that of the high-sensitivity method on the ARCHITECT platform.
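
    The LoB/LoD figures quoted above come from standardized protocols of the CLSI EP17 type, which reduce to simple formulas on replicate measurements (LoB = mean of blanks + 1.645 x SD of blanks; LoD = LoB + 1.645 x SD of low-level samples), with LoQ read off a precision profile. A minimal sketch, with invented replicate values standing in for the many runs and reagent lots a real protocol requires:

```python
# Hedged sketch of LoB/LoD/LoQ estimation in the spirit of CLSI EP17.
# The replicate values below are hypothetical.
import numpy as np

blank = np.array([0.2, 0.4, 0.3, 0.5, 0.1, 0.3, 0.4, 0.2])   # ng/L, blank replicates
low   = np.array([1.9, 2.4, 2.1, 1.7, 2.3, 2.0, 2.2, 1.8])   # ng/L, low-level replicates

lob = blank.mean() + 1.645 * blank.std(ddof=1)
lod = lob + 1.645 * low.std(ddof=1)
print(f"LoB = {lob:.2f} ng/L, LoD = {lod:.2f} ng/L")

# LoQ at a target CV (e.g. 10% or 20%) is the lowest concentration on a
# precision profile whose replicate CV stays at or below the target.
def loq_from_profile(concs, cvs, target_cv):
    for c, cv in sorted(zip(concs, cvs)):
        if cv <= target_cv:
            return c
    return None

print("LoQ(10% CV) =", loq_from_profile([1, 2, 5, 10], [0.35, 0.22, 0.10, 0.06], 0.10))
```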

  7. Teaching Business Simulation Games: Comparing Achievements Frontal Teaching vs. eLearning

    NASA Astrophysics Data System (ADS)

    Bregman, David; Keinan, Gila; Korman, Arik; Raanan, Yossi

    This paper addresses the issue of comparing results achieved by students taught the same course in two drastically different ways - a regular, frontal method and an eLearning method. The subject taught required intensive communication among the students, thus making the eLearning students, a priori, less likely to do well in it. The research, comparing the achievements of students in a business simulation game over three semesters, shows that the use of the eLearning method did not result in any differences in performance, grades or cooperation, thus strengthening the case for using eLearning in this type of course.

  8. The effectiveness of digital microscopy as a teaching tool in medical laboratory science curriculum.

    PubMed

    Castillo, Demetra

    2012-01-01

    A fundamental component of the practice of Medical Laboratory Science (MLS) is the microscope. While traditional microscopy (TM) is the gold standard, the high cost of maintenance has led to an increased demand for alternative methods, such as digital microscopy (DM). Slides embedded with blood specimens are converted into a digital form that can be examined with computer-driven software. The aim of this study was to investigate the effectiveness of digital microscopy as a teaching tool in the field of Medical Laboratory Science. Participants reviewed known study slides using both traditional and digital microscopy methods and were assessed using both methods. Participants were randomly divided into two groups. Group 1 performed TM as the primary method and DM as the alternate. Group 2 performed DM as the primary and TM as the alternate. Participants performed differentials with their primary method, were assessed with both methods, and then performed differentials with their alternate method. A detailed assessment rubric was created to determine the accuracy of student responses through comparison with clinical laboratory and instructor results. Student scores were reported as the percentage correct under each method. The assessment was done over two different classes. When comparing results between methods for each group, independent of the primary method used, results were not statistically different. However, when comparing methods between groups, Group 1 (n = 11) (TM = 73.79% +/- 9.19, DM = 81.43% +/- 8.30; paired t10 = 0.182, p < 0.001) showed a significant difference from Group 2 (n = 14) (TM = 85.64% +/- 5.30, DM = 85.91% +/- 7.62; paired t13 = 3.647, p = 0.860). In the subsequent class, results between both groups (n = 13, n = 16, respectively) did not show any significant difference between groups (Group 1 TM = 86.38% +/- 8.17, Group 1 DM = 88.69% +/- 3.86; paired t12 = 1.253, p = 0.234; Group 2 TM = 86.75% +/- 5.37, Group 2 DM = 86.25% +/- 7.01, paired t15 = 0.280, p = 0.784). The data suggest that DM is comparable to TM. DM could be used as an enhancement model after foundational information has been provided using TM.
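
    The within-group comparisons above are paired t-tests on each participant's score under the two methods. A minimal sketch with invented scores (not the study's data):

```python
# Paired t-test on hypothetical TM vs DM scores for the same participants.
import numpy as np
from scipy import stats

tm = np.array([73.5, 80.2, 68.9, 77.1, 82.3, 75.0, 70.4, 79.8, 74.2, 76.5, 72.0])
dm = np.array([81.0, 84.5, 76.2, 83.3, 88.1, 80.7, 78.9, 85.4, 79.6, 82.2, 77.5])

t, p = stats.ttest_rel(dm, tm)   # paired t-test, df = n - 1
print(f"t({len(tm) - 1}) = {t:.3f}, p = {p:.4f}")
```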

  9. Hypothesis Testing Using Factor Score Regression: A Comparison of Four Methods

    ERIC Educational Resources Information Center

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2016-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and…

  10. Evaluation of hierarchical agglomerative cluster analysis methods for discrimination of primary biological aerosol

    NASA Astrophysics Data System (ADS)

    Crawford, I.; Ruske, S.; Topping, D. O.; Gallagher, M. W.

    2015-11-01

    In this paper we present improved methods for discriminating and quantifying primary biological aerosol particles (PBAPs) by applying hierarchical agglomerative cluster analysis to multi-parameter ultraviolet-light-induced fluorescence (UV-LIF) spectrometer data. The methods employed in this study can be applied to data sets in excess of 1 × 10^6 points on a desktop computer, allowing for each fluorescent particle in a data set to be explicitly clustered. This reduces the potential for misattribution found in subsampling and comparative attribution methods used in previous approaches, improving our capacity to discriminate and quantify PBAP meta-classes. We evaluate the performance of several hierarchical agglomerative cluster analysis linkages and data normalisation methods using laboratory samples of known particle types and an ambient data set. Fluorescent and non-fluorescent polystyrene latex spheres were sampled with a Wideband Integrated Bioaerosol Spectrometer (WIBS-4) where the optical size, asymmetry factor and fluorescent measurements were used as inputs to the analysis package. It was found that the Ward linkage with z-score or range normalisation performed best, correctly attributing 98% and 98.1% of the data points, respectively. The best-performing methods were applied to the BEACHON-RoMBAS (Bio-hydro-atmosphere interactions of Energy, Aerosols, Carbon, H2O, Organics and Nitrogen-Rocky Mountain Biogenic Aerosol Study) ambient data set, where it was found that the z-score and range normalisation methods yield similar results, with each method producing clusters representative of fungal spores and bacterial aerosol, consistent with previous results. The z-score result was compared to clusters generated with previous approaches (WIBS AnalysiS Program, WASP), where we observe that the subsampling and comparative attribution method employed by WASP results in the overestimation of the fungal spore concentration by a factor of 1.5 and the underestimation of bacterial aerosol concentration by a factor of 5. We suggest that this is likely due to errors arising from misattribution due to poor centroid definition and failure to assign particles to a cluster as a result of the subsampling and comparative attribution method employed by WASP. The methods used here allow for the entire fluorescent population of particles to be analysed, yielding an explicit cluster attribution for each particle and improving cluster centroid definition and our capacity to discriminate and quantify PBAP meta-classes compared to previous approaches.
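
    The best-performing configuration reported above (z-score normalisation followed by Ward-linkage agglomerative clustering) is straightforward to reproduce in outline. In this sketch the synthetic array merely stands in for WIBS-style features (optical size, asymmetry factor, fluorescence channels):

```python
# Hedged sketch: z-score normalisation + Ward-linkage hierarchical clustering.
# Synthetic two-population data stand in for UV-LIF spectrometer features.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(4, 1, (80, 5))])

Xz = (X - X.mean(axis=0)) / X.std(axis=0)    # z-score normalisation per feature
Z = linkage(Xz, method="ward")               # Ward linkage on Euclidean distances
labels = fcluster(Z, t=2, criterion="maxclust")
print(np.bincount(labels))                   # cluster sizes
```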

  11. Performance comparison of a new hybrid conjugate gradient method under exact and inexact line searches

    NASA Astrophysics Data System (ADS)

    Ghani, N. H. A.; Mohamed, N. S.; Zull, N.; Shoid, S.; Rivaie, M.; Mamat, M.

    2017-09-01

    The conjugate gradient (CG) method is one of the iterative techniques prominently used for solving unconstrained optimization problems due to its simplicity, low memory storage, and good convergence analysis. This paper presents a new hybrid conjugate gradient method, named the NRM1 method. The method is analyzed under exact and inexact line searches under given conditions. Theoretically, proofs show that the NRM1 method satisfies the sufficient descent condition with both line searches. The computational results indicate that the NRM1 method is capable of solving the standard unconstrained optimization problems used. Moreover, the NRM1 method performs better under the inexact line search than under the exact line search.
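
    The abstract does not give the NRM1 coefficient formula, so the sketch below uses a standard hybrid coefficient, beta = max(0, min(beta_PRP, beta_FR)), purely as a stand-in, with Armijo backtracking playing the role of an inexact line search. It shows the generic shape of a hybrid CG iteration, not the NRM1 method itself:

```python
# Generic nonlinear CG sketch with a hybrid beta (assumption: PRP/FR clamp,
# NOT the paper's NRM1 formula) and an Armijo backtracking line search.
import numpy as np

def rosenbrock(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def rosenbrock_grad(x):
    return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                     200 * (x[1] - x[0]**2)])

def hybrid_cg(f, grad, x0, tol=1e-6, max_iter=5000):
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        t, c = 1.0, 1e-4                      # Armijo backtracking (inexact search)
        while f(x + t * d) > f(x) + c * t * g.dot(d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta_fr = g_new.dot(g_new) / g.dot(g)
        beta_prp = g_new.dot(g_new - g) / g.dot(g)
        beta = max(0.0, min(beta_prp, beta_fr))  # hybrid coefficient (stand-in)
        d = -g_new + beta * d
        if g_new.dot(d) >= 0:                 # safeguard: restart if not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x

print(hybrid_cg(rosenbrock, rosenbrock_grad, np.array([-1.2, 1.0])))
```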

  12. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction

    PubMed Central

    Puton, Tomasz; Kozlowski, Lukasz P.; Rother, Kristian M.; Bujnicki, Janusz M.

    2013-01-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks. PMID:23435231

  13. [Nonparametric method of estimating survival functions containing right-censored and interval-censored data].

    PubMed

    Xu, Yonghong; Gao, Xiaohuan; Wang, Zhengxi

    2014-04-01

    Missing data represent a general problem in many scientific fields, especially in medical survival analysis. Interpolation is one of the important approaches to dealing with censored data. However, most interpolation methods replace the censored data with exact data, which distorts the real distribution of the censored data and reduces the probability of the real data falling within the interpolation data. In order to solve this problem, we propose a nonparametric method of estimating the survival function for right-censored and interval-censored data and compare its performance to the self-consistent (SC) algorithm. Compared with the average interpolation and nearest-neighbor interpolation methods, the proposed method replaces right-censored data with interval-censored data, greatly improving the probability of the real data falling within the imputation interval. It then uses empirical distribution theory to estimate the survival function for right-censored and interval-censored data. The results of numerical examples and a real breast cancer data set demonstrated that the proposed method had higher accuracy and better robustness across different proportions of censored data. This paper thus provides a good method for comparing the performance of clinical treatments based on estimates of patient survival, offering support for medical survival data analysis.
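
    For orientation, the standard nonparametric baseline that such estimators build on is the Kaplan-Meier product-limit estimator for right-censored data. The sketch below is that baseline only, with hypothetical data; it is not the paper's interval-censored method:

```python
# Kaplan-Meier product-limit estimator for right-censored data (baseline
# sketch, not the paper's interval-censored estimator). Data are invented;
# event = 1 means observed failure, event = 0 means right-censored.
import numpy as np

def kaplan_meier(times, events):
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    s, out = 1.0, []
    for t in np.unique(times):
        at_risk = np.sum(times >= t)                    # still under observation
        deaths = np.sum((times == t) & (events == 1))   # failures at time t
        if deaths:
            s *= 1.0 - deaths / at_risk
            out.append((t, s))
    return out

data_t = [3, 5, 5, 8, 10, 12, 15, 18]
data_e = [1, 1, 0, 1, 0, 1, 1, 0]
for t, s in kaplan_meier(data_t, data_e):
    print(f"t = {t:4.1f}  S(t) = {s:.3f}")
```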

  14. Fusion of PAN and multispectral remote sensing images in shearlet domain by considering regional metrics

    NASA Astrophysics Data System (ADS)

    Poobalasubramanian, Mangalraj; Agrawal, Anupam

    2016-10-01

    The presented work proposes fusion of panchromatic and multispectral images in the shearlet domain. The proposed fusion rules rely on regional considerations, which make the system efficient in terms of spatial enhancement. A luminance-hue-saturation-based color conversion system is utilized to avoid spectral distortions. The proposed fusion method is tested on Worldview2 and Ikonos datasets and compared against other methodologies, performing well against the compared methods in terms of both subjective and objective evaluations.

  15. Key frame extraction based on spatiotemporal motion trajectory

    NASA Astrophysics Data System (ADS)

    Zhang, Yunzuo; Tao, Ran; Zhang, Feng

    2015-05-01

    Spatiotemporal motion trajectory can accurately reflect the changes of motion state. Motivated by this observation, this letter proposes a method for key frame extraction based on the motion trajectory on the spatiotemporal slice. Different from the well-known motion-related methods, the proposed method utilizes the inflexions of the motion trajectory on the spatiotemporal slice of all the moving objects. Experimental results show that, compared with state-of-the-art methods based on motion energy or acceleration, the proposed method achieves similar performance on single-object scenes and better performance on multi-object videos.

  16. On the performance of exponential integrators for problems in magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Einkemmer, Lukas; Tokman, Mayya; Loffeld, John

    2017-02-01

    Exponential integrators have been introduced as an efficient alternative to explicit and implicit methods for integrating large stiff systems of differential equations. Over the past decades these methods have been studied theoretically and their performance was evaluated using a range of test problems. While the results of these investigations showed that exponential integrators can provide significant computational savings, the research on validating this hypothesis for large scale systems and understanding what classes of problems can particularly benefit from the use of the new techniques is in its initial stages. Resistive magnetohydrodynamic (MHD) modeling is widely used in studying large scale behavior of laboratory and astrophysical plasmas. In many problems numerical solution of MHD equations is a challenging task due to the temporal stiffness of this system in the parameter regimes of interest. In this paper we evaluate the performance of exponential integrators on large MHD problems and compare them to a state-of-the-art implicit time integrator. Both the variable and constant time step exponential methods of EPIRK-type are used to simulate magnetic reconnection and the Kelvin-Helmholtz instability in plasma. Performance of these methods, which are part of the EPIC software package, is compared to the variable time step variable order BDF scheme included in the CVODE (part of SUNDIALS) library. We study performance of the methods on parallel architectures and with respect to magnitudes of important parameters such as Reynolds, Lundquist, and Prandtl numbers. We find that the exponential integrators provide superior or equal performance in most circumstances and conclude that further development of exponential methods for MHD problems is warranted and can lead to significant computational advantages for large scale stiff systems of differential equations such as MHD.
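
    To make the idea concrete, the simplest member of this family is the exponential Euler scheme for a semilinear system y' = Ay + g(y): y_{n+1} = e^{hA} y_n + h phi1(hA) g(y_n), with phi1(z) = (e^z - 1)/z. The EPIRK methods above are higher-order relatives. A toy sketch follows, forming phi1 densely, which assumes a small system and a nonsingular A (large-scale codes use Krylov or adaptive approximations instead):

```python
# Exponential Euler sketch for y' = A y + g(y). Dense phi1(hA) computation;
# suitable only for small systems with nonsingular A. Test problem invented.
import numpy as np
from scipy.linalg import expm, solve

def exp_euler(A, g, y0, h, steps):
    E = expm(h * A)
    phi1 = solve(h * A, E - np.eye(len(y0)))   # phi1(hA) = (hA)^{-1}(e^{hA} - I)
    y = y0.astype(float)
    for _ in range(steps):
        y = E @ y + h * phi1 @ g(y)
    return y

# Stiff toy problem: fast linear decay plus a mild nonlinearity.
A = np.array([[-100.0, 1.0], [0.0, -2.0]])
g = lambda y: np.array([0.0, np.sin(y[0])])
print(exp_euler(A, g, np.array([1.0, 1.0]), h=0.1, steps=50))
```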

  17. Performance of human fecal anaerobe-associated PCR-based assays in a multi-laboratory method evaluation study

    EPA Science Inventory

    A number of PCR-based methods for detecting human fecal material in environmental waters have been developed over the past decade, but these methods have rarely received independent comparative testing. Here, we evaluated ten of these methods (BacH, BacHum-UCD, B. thetaiotaomic...

  18. Detection of fatigue cracks by nondestructive testing methods

    NASA Technical Reports Server (NTRS)

    Anderson, R. T.; Delacy, T. J.; Stewart, R. C.

    1973-01-01

    The effectiveness of various NDT methods in detecting small, tight cracks was assessed by randomly introducing fatigue cracks into aluminum sheets. The study included optimizing NDT methods, calibrating NDT equipment with fatigue-cracked standards, and evaluating a number of cracked specimens by the optimized NDT methods. The evaluations were conducted by highly trained personnel, provided with detailed procedures, in order to minimize the effects of human variability. These personnel performed the NDT on the test specimens without knowledge of the flaw locations and reported on the flaws detected. The performance of these tests was measured by comparing the flaws detected against the flaws present. The principal NDT methods utilized were radiographic, ultrasonic, penetrant, and eddy current. Holographic interferometry, acoustic emission monitoring, and replication methods were also applied on a reduced number of specimens. Generally, the best performance was shown by the eddy current, ultrasonic, penetrant, and holographic tests. Etching provided no measurable improvement, while proof loading improved flaw detectability. Data are shown that quantify the performance of the NDT methods applied.

  19. Extraction of Maltol from Fraser Fir: A Comparison of Microwave-Assisted Extraction and Conventional Heating Protocols for the Organic Chemistry Laboratory

    ERIC Educational Resources Information Center

    Koch, Andrew S.; Chimento, Clio A.; Berg, Allison N.; Mughal, Farah D.; Spencer, Jean-Paul; Hovland, Douglas E.; Mbadugha, Bessie; Hovland, Allan K.; Eller, Leah R.

    2015-01-01

    Two methods for the extraction of maltol from Fraser fir needles are performed and compared in this two-week experiment. A traditional benchtop extraction using dichloromethane is compared to a microwave-assisted extraction using aqueous ethanol. Students perform both procedures and weigh the merits of each technique. In doing so, students see a…

  20. A ground truth based comparative study on clustering of gene expression data.

    PubMed

    Zhu, Yitan; Wang, Zuyi; Miller, David J; Clarke, Robert; Xuan, Jianhua; Hoffman, Eric P; Wang, Yue

    2008-05-01

    Given the variety of available clustering methods for gene expression data analysis, it is important to develop an appropriate and rigorous validation scheme to assess the performance and limitations of the most widely used clustering algorithms. In this paper, we present a ground truth based comparative study on the functionality, accuracy, and stability of five data clustering methods, namely hierarchical clustering, K-means clustering, self-organizing maps, standard finite normal mixture fitting, and a caBIG toolkit (VIsual Statistical Data Analyzer--VISDA), tested on sample clustering of seven published microarray gene expression datasets and one synthetic dataset. We examined the performance of these algorithms in both data-sufficient and data-insufficient cases using quantitative performance measures, including cluster number detection accuracy and mean and standard deviation of partition accuracy. The experimental results showed that VISDA, an interactive coarse-to-fine maximum likelihood fitting algorithm, is a solid performer on most of the datasets, while K-means clustering and self-organizing maps optimized by the mean squared compactness criterion generally produce more stable solutions than the other methods.

  1. Limb Position Tolerant Pattern Recognition for Myoelectric Prosthesis Control with Adaptive Sparse Representations From Extreme Learning.

    PubMed

    Betthauser, Joseph L; Hunt, Christopher L; Osborn, Luke E; Masters, Matthew R; Levay, Gyorgy; Kaliki, Rahul R; Thakor, Nitish V

    2018-04-01

    Myoelectric signals can be used to predict the intended movements of an amputee for prosthesis control. However, untrained effects like limb position changes influence myoelectric signal characteristics, hindering the ability of pattern recognition algorithms to discriminate among motion classes. Despite frequent and long training sessions, these deleterious conditional influences may result in poor performance and device abandonment. We present a robust sparsity-based adaptive classification method that is significantly less sensitive to signal deviations resulting from untrained conditions. We compare this approach in the offline and online contexts of untrained upper-limb positions for amputee and able-bodied subjects to demonstrate its robustness compared with other myoelectric classification methods. We report significant performance improvements in untrained limb positions across all subject groups. The robustness of our suggested approach helps to ensure better untrained condition performance from fewer training conditions. This method of prosthesis control has the potential to deliver real-world clinical benefits to amputees: better condition-tolerant performance, reduced training burden in terms of frequency and duration, and increased adoption of myoelectric prostheses.

  2. The Effect of Initial Knee Angle on Concentric-Only Squat Jump Performance

    ERIC Educational Resources Information Center

    Mitchell, Lachlan J.; Argus, Christos K.; Taylor, Kristie-Lee; Sheppard, Jeremy M.; Chapman, Dale W.

    2017-01-01

    Purpose: There is uncertainty as to which knee angle during a squat jump (SJ) produces maximal jump performance. Importantly, understanding this information will aid in determining appropriate ratios for assessment and monitoring of the explosive characteristics of athletes. Method: This study compared SJ performance across different knee…

  3. Intubation methods by novice intubators in a manikin model.

    PubMed

    O'Carroll, Darragh C; Barnes, Robert L; Aratani, Ashley K; Lee, Dane C; Lau, Christopher A; Morton, Paul N; Yamamoto, Loren G; Berg, Benjamin W

    2013-10-01

    Tracheal intubation is an important yet difficult skill to learn, with many possible methods and techniques. Direct laryngoscopy is the standard method of tracheal intubation, but several instruments have been shown to be less difficult and have better performance characteristics than the traditional direct method. We compared 4 different intubation methods performed by novice intubators on manikins: conventional direct laryngoscopy, video laryngoscopy, Airtraq® laryngoscopy, and fiberoptic laryngoscopy. In addition, we attempted to find a correlation between playing videogames and intubation times in novice intubators. Video laryngoscopy had the best results for both our normal and difficult airway (cervical spine immobilization) manikin scenarios. When video was compared to direct laryngoscopy in the normal airway scenario, it had a significantly higher success rate (100% vs 83%, P=.02) and shorter intubation times (29.1 ± 27.4 sec vs 45.9 ± 39.5 sec, P=.03). In the difficult airway scenario, video laryngoscopy maintained a significantly higher success rate (91% vs 71%, P=.04) and likelihood of success (3.2 ± 1.0, 95%CI [2.9-3.5] vs 2.4 ± 0.9, 95%CI [2.1-2.7]) compared to direct laryngoscopy. Participants also reported significantly higher self-confidence (3.5 ± 0.6, 95%CI [3.3-3.7]) and ease of use (1.5 ± 0.7, 95%CI [1.3-1.8]) with video laryngoscopy compared to all other methods. We found no correlation between videogame playing and intubation times.

  4. A Unified Fisher's Ratio Learning Method for Spatial Filter Optimization.

    PubMed

    Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Ang, Kai Keng

    To detect the mental task of interest, spatial filtering has been widely used to enhance the spatial resolution of electroencephalography (EEG). However, the effectiveness of spatial filtering is undermined due to the significant nonstationarity of EEG. Based on regularization, most of the conventional stationary spatial filter design methods address the nonstationarity at the cost of the interclass discrimination. Moreover, spatial filter optimization is inconsistent with feature extraction when EEG covariance matrices could not be jointly diagonalized due to the regularization. In this paper, we propose a novel framework for a spatial filter design. With Fisher's ratio in feature space directly used as the objective function, the spatial filter optimization is unified with feature extraction. Given its ratio form, the selection of the regularization parameter could be avoided. We evaluate the proposed method on a binary motor imagery data set of 16 subjects, who performed the calibration and test sessions on different days. The experimental results show that the proposed method yields improvement in classification performance for both single broadband and filter bank settings compared with conventional nonunified methods. We also provide a systematic attempt to compare different objective functions in modeling data nonstationarity with simulation studies.
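
    The core idea (optimizing a spatial filter w so that Fisher's ratio of the resulting log-variance feature, (m1 - m2)^2 / (s1^2 + s2^2), is maximized directly) can be sketched with a generic optimizer. This is a toy illustration on synthetic two-class "EEG" trials, not the paper's full framework:

```python
# Hedged sketch: directly maximize Fisher's ratio of the log-variance feature
# of a spatial filter w. Synthetic trials stand in for real EEG.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n_ch, n_t = 8, 200
mix1, mix2 = np.eye(n_ch), np.eye(n_ch)
mix1[0, 0], mix2[0, 0] = 3.0, 0.5            # class difference in channel-0 power
trials1 = [mix1 @ rng.normal(size=(n_ch, n_t)) for _ in range(30)]
trials2 = [mix2 @ rng.normal(size=(n_ch, n_t)) for _ in range(30)]

def features(w, trials):
    # log-variance of the spatially filtered signal, one value per trial
    return np.array([np.log(w @ (X @ X.T / n_t) @ w) for X in trials])

def neg_fisher_ratio(w):
    f1, f2 = features(w, trials1), features(w, trials2)
    return -((f1.mean() - f2.mean())**2) / (f1.var() + f2.var())

res = minimize(neg_fisher_ratio, x0=rng.normal(size=n_ch), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8})
w = res.x / np.linalg.norm(res.x)
print("Fisher ratio:", -res.fun)
print("weight on the discriminative channel:", abs(w[0]))
```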

  5. Method of assessing heterogeneity in images

    DOEpatents

    Jacob, Richard E.; Carson, James P.

    2016-08-23

    A method of assessing heterogeneity in images is disclosed. 3D images of an object are acquired. The acquired images may be filtered and masked. Iterative decomposition is performed on the masked images to obtain image subdivisions that are relatively homogeneous. Comparative analysis, such as variogram analysis or correlogram analysis, is performed on the decomposed images to determine spatial relationships between regions of the images that are relatively homogeneous.
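
    Of the comparative analyses the patent names, the empirical (semi)variogram is the standard one: gamma(h) is the mean of (z_i - z_j)^2 / 2 over point pairs whose separation falls in distance bin h. A minimal sketch on synthetic 2-D samples (not the patent's implementation):

```python
# Empirical semivariogram sketch: gamma(h) = mean of 0.5*(z_i - z_j)^2 over
# pairs whose separation distance falls in bin h. Data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
pts = rng.uniform(0, 10, (200, 2))
z = np.sin(pts[:, 0]) + 0.1 * rng.normal(size=200)   # spatially correlated field

def empirical_variogram(pts, z, bins):
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    sq = 0.5 * (z[:, None] - z[None, :])**2
    iu = np.triu_indices(len(z), k=1)                # unique pairs only
    d, sq = d[iu], sq[iu]
    return [sq[(d >= lo) & (d < hi)].mean() for lo, hi in zip(bins[:-1], bins[1:])]

print(empirical_variogram(pts, z, bins=np.linspace(0, 5, 6)))
```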

  6. Performance of the LightCycler SeptiFast Test Mgrade in Detecting Microbial Pathogens in Purulent Fluids

    PubMed Central

    Sancho-Tello, Silvia; Bravo, Dayana; Borrás, Rafael; Costa, Elisa; Muñoz-Cobo, Beatriz; Navarro, David

    2011-01-01

    The performance of the LightCycler SeptiFast (SF) assay was compared to that of culture methods in the detection of microorganisms in 43 purulent fluids from patients with pyogenic infections. The SF assay was more sensitive than the culture methods (86% versus 61%, respectively), irrespective of whether the infections were mono- or polymicrobial. PMID:21715593

  7. Item Selection for the Development of Parallel Forms from an IRT-Based Seed Test Using a Sampling and Classification Approach

    ERIC Educational Resources Information Center

    Chen, Pei-Hua; Chang, Hua-Hua; Wu, Haiyan

    2012-01-01

    Two sampling-and-classification-based procedures were developed for automated test assembly: the Cell Only and the Cell and Cube methods. A simulation study based on a 540-item bank was conducted to compare the performance of the procedures with the performance of a mixed-integer programming (MIP) method for assembling multiple parallel test…

  8. Transient multivariable sensor evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilim, Richard B.; Heifetz, Alexander

    A method and system for performing transient multivariable sensor evaluation. The method and system include a computer system for identifying a model form, providing training measurement data, generating a basis vector, monitoring system data from a sensor, loading the system data into non-transient memory, performing an estimation to provide desired data, comparing the system data to the desired data, and outputting an alarm for a defective sensor.

  9. NAA For Human Serum Analysis: Comparison With Conventional Analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliveira, Laura C.; Zamboni, Cibele B.; Medeiros, Jose A. G.

    2010-08-04

    The instrumental and comparator methods of Neutron Activation Analysis (NAA) were applied to determine elements of clinical relevance in serum samples from an adult population (Sao Paulo city, Brazil). A comparison with the conventional analyses (colorimetric for calcium, titrimetric for chlorine, and ion-specific electrode for sodium and potassium) was also performed, permitting a discussion of the performance of NAA methods for clinical chemistry research.

  10. Patient-specific 3D models created by 3D imaging system or bi-planar imaging coupled with Moiré-Fringe projections: a comparative study of accuracy and reliability on spinal curvatures and vertebral rotation data.

    PubMed

    Hocquelet, Arnaud; Cornelis, François; Jirot, Anna; Castaings, Laurent; de Sèze, Mathieu; Hauger, Olivier

    2016-10-01

    The aim of this study is to compare the accuracy and reliability of spinal curvature and vertebral rotation data based on patient-specific 3D models created by a 3D imaging system or by bi-planar imaging coupled with Moiré-fringe projections. Sixty-two consecutive patients from a single institution were prospectively included. For each patient, frontal and sagittal calibrated low-dose bi-planar X-rays were performed and coupled simultaneously with an optical Moiré back-surface technology. The 3D reconstructions of spine and pelvis were performed independently by one radiologist and one technician in radiology using two different semi-automatic methods: a 3D radio-imaging system (method 1) or bi-planar imaging coupled with Moiré projections (method 2). The methods were compared for accuracy using Bland-Altman analysis and for reliability using the intraclass correlation coefficient (ICC). ICC showed good to very good agreement. Between the two techniques, the maximum 95% prediction limit was -4.9° for measurements of spinal coronal curves and less than 5° for the other parameters. Inter-rater reliability was excellent for all parameters across both methods, except for axial rotation with method 2, for which the ICC was fair. Method 1 was faster in reconstruction time than method 2 for both readers (13.4 vs. 20.7 min and 10.6 vs. 13.9 min; p = 0.0001). While lower accuracy was observed for the evaluation of axial rotation, bi-planar imaging coupled with Moiré-fringe projections may be an accurate and reliable tool for performing 3D reconstructions of the spine and pelvis.

  11. A simulation study of nonparametric total deviation index as a measure of agreement based on quantile regression.

    PubMed

    Lin, Lawrence; Pan, Yi; Hedayat, A S; Barnhart, Huiman X; Haber, Michael

    2016-01-01

    Total deviation index (TDI) captures a prespecified quantile of the absolute deviation of paired observations from raters, observers, methods, assays, instruments, etc. We compare the performance of TDI using nonparametric quantile regression to the TDI assuming normality (Lin, 2000). This simulation study considers three distributions: normal, Poisson, and uniform at quantile levels of 0.8 and 0.9 for cases with and without contamination. Study endpoints include the bias of TDI estimates (compared with their respective theoretical values), the standard error of TDI estimates (compared with their true simulated standard errors), test size (compared with 0.05), and power. Nonparametric TDI using quantile regression, although it slightly underestimates and delivers slightly less power for data without contamination, works satisfactorily in all simulated cases even for moderate (say, ≥40) sample sizes. The performance of the TDI based on a quantile of 0.8 is in general superior to that based on 0.9. The performances of the nonparametric and parametric TDI methods are compared using a real data example. Nonparametric TDI can be very useful when the underlying distribution of the difference is not normal, especially when it has a heavy tail.
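
    The two estimators being compared reduce, in their simplest unadjusted forms, to an empirical quantile of the absolute paired differences (nonparametric) versus solving P(|D| <= t) = p for D ~ N(mu, sigma^2) (normal-theory, in the spirit of Lin, 2000). A sketch with invented differences:

```python
# Sketch of nonparametric vs normal-theory TDI at quantile level p.
# The paired differences below are hypothetical.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

rng = np.random.default_rng(4)
d = rng.normal(0.5, 1.0, 500)          # paired differences (method A - method B)

p = 0.9
tdi_nonpar = np.quantile(np.abs(d), p)  # empirical quantile of |difference|

mu, sigma = d.mean(), d.std(ddof=1)
# Solve P(|D| <= t) = Phi((t-mu)/sigma) - Phi((-t-mu)/sigma) = p for t.
tdi_normal = brentq(
    lambda t: norm.cdf((t - mu) / sigma) - norm.cdf((-t - mu) / sigma) - p,
    1e-9, 10 * (abs(mu) + sigma))
print(f"nonparametric TDI_{p} = {tdi_nonpar:.3f}, normal-theory TDI_{p} = {tdi_normal:.3f}")
```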

  12. Effects of eye artifact removal methods on single trial P300 detection, a comparative study.

    PubMed

    Ghaderi, Foad; Kim, Su Kyoung; Kirchner, Elsa Andrea

    2014-01-15

    Electroencephalographic signals are commonly contaminated by eye artifacts, even if recorded under controlled conditions. The objective of this work was to quantitatively compare standard artifact removal methods (regression, filtered regression, Infomax, and second order blind identification (SOBI)) and two artifact identification approaches for independent component analysis (ICA) methods, i.e. ADJUST and correlation. To this end, eye artifacts were removed and the cleaned datasets were used for single trial classification of P300 (a type of event related potentials elicited using the oddball paradigm). Statistical analysis of the results confirms that the combination of Infomax and ADJUST provides relatively better performance (0.6% improvement on average across all subjects), while the combination of SOBI and correlation performs the worst. Low-pass filtering the data at lower cutoffs (here 4 Hz) can also improve the classification accuracy. Without requiring any artifact reference channel, the combination of Infomax and ADJUST improves the classification performance more than the other methods for both examined filtering cutoffs, i.e., 4 Hz and 25 Hz.
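
    The "correlation" identification approach mentioned above decomposes the EEG, flags components whose time courses correlate strongly with an EOG reference, zeroes them, and reconstructs. A toy sketch on synthetic data follows; FastICA stands in for Infomax/SOBI (neither ships with scikit-learn), and the 0.7 threshold is an arbitrary choice:

```python
# Sketch of correlation-based ocular-artifact rejection with ICA.
# FastICA is a stand-in for Infomax/SOBI; data and threshold are invented.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)
n_samples = 2000
blink = (rng.random(n_samples) < 0.005).astype(float)     # sparse blink events
blink = np.convolve(blink, np.hanning(50), mode="same")   # smooth blink shape

sources = np.vstack([blink, rng.normal(size=(3, n_samples))])  # 4 sources
mixing = rng.normal(size=(4, 4))
mixing[:, 0] = [2.0, 1.5, 0.5, 0.2]        # blink projects into all channels
eeg = mixing @ sources                      # 4 synthetic "EEG" channels
eog = blink + 0.1 * rng.normal(size=n_samples)   # EOG reference channel

ica = FastICA(n_components=4, random_state=0)
comps = ica.fit_transform(eeg.T)            # (n_samples, n_components)

# Flag components correlated with the EOG reference, zero them, reconstruct.
r = np.array([abs(np.corrcoef(comps[:, k], eog)[0, 1]) for k in range(4)])
comps[:, r > 0.7] = 0.0
cleaned = ica.inverse_transform(comps).T
print("component |r| with EOG:", np.round(r, 2))
```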

  13. Assessing the effect of administering probiotics in water or as a feed supplement on broiler performance and immune response.

    PubMed

    Karimi Torshizi, M A; Moghaddam, A R; Rahimi, Sh; Mojgani, N

    2010-04-01

    1. Two routes of probiotic administration in broiler farms, in water and in feed, were compared using 360 one-day-old male broiler chickens. Controls received no probiotics or antimicrobials. The water group received a probiotic preparation at a rate of 0.5 g/l, and the feed group received it at an inclusion rate of 1 g/kg. 2. Performance of broilers in terms of body weight gain (BWG), feed intake (FI) and feed conversion ratio (FCR) improved when the probiotic was provided via drinking water, compared to the control and feed groups. Probiotic administration reduced plasma cholesterol and triglyceride concentrations. 3. Spleen (28 and 42 d) and bursa (42 d) relative weights were influenced by the method of probiotic administration, which also improved the T-cell dependent skin thickness response to phytohaemagglutinin (PHA) injection. The effect of challenge by dinitrochlorobenzene (DNCB) depended on the method of probiotic administration. 4. The method of probiotic administration can influence the performance and immune competence of birds, and administration via drinking water appears to be superior to the more conventional in-feed supplementation method.

  14. Multi-Laboratory Study of Five Methods for the Determination of Brevetoxins in Shellfish Tissue Extracts.

    PubMed

    Dickey, Robert W; Plakas, Steven M; Jester, Edward L E; El Said, Kathleen R; Johannessen, Jan N; Flewelling, Leanne J; Scott, Paula; Hammond, Dan G; Van Dolah, Frances M; Leighfield, Tod A; Bottein Dachraoui, Marie-Yasmine; Ramsdell, John S; Pierce, Richard H; Henry, Mike S; Poli, Mark A; Walker, Calvin; Kurtz, Jan; Naar, Jerome; Baden, Daniel G; Musser, Steve M; White, Kevin D; Truman, Penelope; Miller, Aaron; Hawryluk, Timothy P; Wekell, Marleen M; Stirling, David; Quilliam, Michael A; Lee, Jung K

    A thirteen-laboratory comparative study tested the performance of four methods as alternatives to mouse bioassay for the determination of brevetoxins in shellfish. The methods were N2a neuroblastoma cell assay, two variations of the sodium channel receptor binding assay, competitive ELISA, and LC/MS. Three to five laboratories independently performed each method using centrally prepared spiked and naturally incurred test samples. Competitive ELISA and receptor binding (96-well format) compared most favorably with mouse bioassay. Between-laboratory relative standard deviations (RSDR) ranged from 10 to 20% for ELISA and 14 to 31% for receptor binding. Within-laboratory (RSDr) ranged from 6 to 15% for ELISA, and 5 to 31% for receptor binding. Cell assay was extremely sensitive but data variation rendered it unsuitable for statistical treatment. LC/MS performed as well as ELISA on spiked test samples but was inordinately affected by lack of toxin-metabolite standards, uniform instrumental parameters, or both, on incurred test samples. The ELISA and receptor binding assay are good alternatives to mouse bioassay for the determination of brevetoxins in shellfish.

  15. Discrimination of edible oils and fats by combination of multivariate pattern recognition and FT-IR spectroscopy: a comparative study between different modeling methods.

    PubMed

    Javidnia, Katayoun; Parish, Maryam; Karimi, Sadegh; Hemmateenejad, Bahram

    2013-03-01

    By using FT-IR spectroscopy, many researchers from different disciplines enrich the experimental scope of their research to obtain more precise information, and chemometrics techniques have boosted the use of IR instruments. In the present study we aimed to emphasize the power of FT-IR spectroscopy for discrimination between different oil samples (especially fat versus vegetable oils). Our data were also used to compare the performance of different classification methods. FT-IR transmittance spectra of oil samples (Corn, Canola, Sunflower, Soya, Olive, and Butter) were measured in the wave-number interval of 450-4000 cm(-1). Classification analysis was performed utilizing PLS-DA, interval PLS-DA, extended canonical variate analysis (ECVA) and interval ECVA (iECVA) methods. The effect of data preprocessing by extended multiplicative signal correction was investigated. While all of the employed methods could distinguish butter from the vegetable oils, iECVA gave the best performance for both calibration and the external test set, with 100% sensitivity and specificity.

  16. Techniques for cesarean section.

    PubMed

    Hofmeyr, Justus G; Novikova, Natalia; Mathai, Matthews; Shah, Archana

    2009-11-01

    The effects of different methods of cesarean section (CS) were compared. A meta-analysis of randomized controlled trials of intention to perform CS using different techniques was carried out. Joel-Cohen-based CS compared with Pfannenstiel CS was associated with reduced blood loss, operating time, time to oral intake, fever, duration of postoperative pain, analgesic injections, and time from skin incision to birth of the baby. The Misgav-Ladach method compared with the traditional method was associated with reduced blood loss, operating time, time to mobilization, and length of postoperative stay for the mother. Joel-Cohen-based methods have advantages compared with Pfannenstiel and traditional (lower midline) CS techniques. However, these trials do not provide information on serious and long-term outcomes.

  17. Inference Control Mechanism for Statistical Database: Frequency-Imposed Data Distortions.

    ERIC Educational Resources Information Center

    Liew, Chong K.; And Others

    1985-01-01

    Introduces two data distortion methods (Frequency-Imposed Distortion, Frequency-Imposed Probability Distortion) and uses a Monte Carlo study to compare their performance with that of other distortion methods (Point Distortion, Probability Distortion). Indications that data generated by these two methods produce accurate statistics and protect…

  18. Prospective performance evaluation of selected common virtual screening tools. Case study: Cyclooxygenase (COX) 1 and 2.

    PubMed

    Kaserer, Teresa; Temml, Veronika; Kutil, Zsofia; Vanek, Tomas; Landa, Premysl; Schuster, Daniela

    2015-01-01

    Computational methods can be applied in drug development for the identification of novel lead candidates, but also for the prediction of pharmacokinetic properties and potential adverse effects, thereby helping to prioritize and identify the most promising compounds. In principle, several techniques are available for this purpose; however, which one is the most suitable for a specific research objective still requires further investigation. Within this study, the performance of several programs, representing common virtual screening methods, was compared in a prospective manner. First, we selected top-ranked virtual screening hits from three methods: pharmacophore modeling, shape-based modeling, and docking. For comparison, these hits were then additionally predicted by external pharmacophore- and 2D similarity-based bioactivity profiling tools. Subsequently, the biological activities of the selected hits were assessed in vitro, which allowed for evaluating and comparing the prospective performance of the applied tools. Although all methods performed well, considerable differences were observed concerning hit rates, true positive and true negative hits, and hitlist composition. Our results suggest that rational selection of the applied method, tightly linked to the aims of a project, is a powerful strategy to maximize its success. We employed cyclooxygenase as an application example; however, the focus of this study lay in highlighting the differences in virtual screening tool performance, not in the identification of novel COX inhibitors.

  19. [A comparative evaluation of the methods for determining nitrogen dioxide in an industrial environment].

    PubMed

    Panev, T

    1991-01-01

    The present work aims to make a comparative evaluation of different types of detector tubes (analysis, long-term, and passive) for the determination of NO2, and to compare the results with those obtained by the spectrophotometric method using Saltzman reagent. Measurements were performed in the hall of a repair garage for diesel buses during one working shift. The results indicate that the analysis tubes for NO2 agree well with the spectrophotometric method. The average-shift NO2 concentrations measured by the long-term and passive tubes are comparable with the averaged values obtained with the analysis tubes and with the analytical method.

  20. Medical image segmentation by combining graph cuts and oriented active appearance models.

    PubMed

    Chen, Xinjian; Udupa, Jayaram K; Bagci, Ulas; Zhuge, Ying; Yao, Jianhua

    2012-04-01

    In this paper, we propose a novel method based on a strategic combination of the active appearance model (AAM), live wire (LW), and graph cuts (GCs) for abdominal 3-D organ segmentation. The proposed method consists of three main parts: model building, object recognition, and delineation. In the model building part, we construct the AAM and train the LW cost function and GC parameters. In the recognition part, a novel algorithm is proposed for improving the conventional AAM matching method, which effectively combines the AAM and LW methods, resulting in the oriented AAM (OAAM). A multiobject strategy is utilized to help in object initialization. We employ a pseudo-3-D initialization strategy and segment the organs slice by slice via a multiobject OAAM method. For the object delineation part, a 3-D shape-constrained GC method is proposed. The object shape generated from the initialization step is integrated into the GC cost computation, and an iterative GC-OAAM method is used for object delineation. The proposed method was tested in segmenting the liver, kidneys, and spleen on a clinical CT data set and also on the MICCAI 2007 Grand Challenge liver data set. The results show the following: 1) an overall segmentation accuracy of true positive volume fraction (TPVF) > 94.3% with a low false positive volume fraction can be achieved; 2) the initialization performance can be improved by combining the AAM and LW; 3) the multiobject strategy greatly facilitates initialization; 4) compared with the traditional 3-D AAM method, the pseudo-3-D OAAM method achieves comparable performance while running 12 times faster; and 5) the performance of the proposed method is comparable to state-of-the-art liver segmentation algorithms. The executable version of the 3-D shape-constrained GC method with a user interface can be downloaded from http://xinjianchen.wordpress.com/research/.

  1. U.S. Geological Survey experience with the residual absolutes method

    USGS Publications Warehouse

    Worthington, E. William; Matzka, Jurgen

    2017-01-01

    The U.S. Geological Survey (USGS) Geomagnetism Program has developed and tested the residual method of absolutes, with the assistance of the Danish Technical University's (DTU) Geomagnetism Program. Three years of testing were performed at College Magnetic Observatory (CMO), Fairbanks, Alaska, to compare the residual method with the null method. Results show that the two methods compare very well with each other and both sets of baseline data were used to process the 2015 definitive data. The residual method will be implemented at the other USGS high-latitude geomagnetic observatories in the summer of 2017 and 2018.

  2. Performance assessment of static lead-lag feedforward controllers for disturbance rejection in PID control loops.

    PubMed

    Yu, Zhenpeng; Wang, Jiandong

    2016-09-01

    This paper assesses the performance of feedforward controllers for disturbance rejection in univariate feedback plus feedforward control loops. The structures of the feedback and feedforward controllers are confined to proportional-integral-derivative and static lead-lag forms, respectively, and the effects of feedback controllers are not considered. The integral squared error (ISE) and total squared variation (TSV) are used as performance metrics. A performance index is formulated by comparing the current ISE and TSV metrics to their own lower bounds as performance benchmarks. A controller performance assessment (CPA) method is proposed to calculate the performance index from measurements. The proposed CPA method resolves two critical limitations in the existing CPA methods, in order to be consistent with industrial scenarios. Numerical and experimental examples illustrate the effectiveness of the obtained results.
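
    For intuition, the two metrics named above are simple functionals of a sampled disturbance response: ISE integrates the squared error, and TSV sums squared increments of a signal. The benchmark-over-actual ratio shown below is a generic form of such an index (1.0 being ideal); the paper's actual lower bounds come from its own estimation procedure, so the values here are stand-ins:

```python
# Sketch of ISE / TSV metrics and a generic benchmark-ratio performance index.
# The "lower bound" values are stand-ins, not the paper's benchmarks.
import numpy as np

def ise(e, dt):
    return np.sum(e**2) * dt          # integral of squared error (rectangle rule)

def tsv(x):
    return np.sum(np.diff(x)**2)      # total squared variation of a signal

# Hypothetical closed-loop disturbance response e(t) and controller output u(t).
dt = 0.01
t = np.arange(0, 10, dt)
e = np.exp(-t) * np.sin(3 * t)
u = 1 - np.exp(-t)

ise_act, tsv_act = ise(e, dt), tsv(u)
ise_min, tsv_min = 0.5 * ise_act, 0.8 * tsv_act    # stand-in lower bounds
eta = 0.5 * (ise_min / ise_act) + 0.5 * (tsv_min / tsv_act)
print(f"ISE = {ise_act:.4f}, TSV = {tsv_act:.4f}, index = {eta:.2f}")
```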

  3. Comparative analysis of dynamic pricing strategies for managed lanes.

    DOT National Transportation Integrated Search

    2015-06-01

    The objective of this research is to investigate and compare the performances of different : dynamic pricing strategies for managed lanes facilities. These pricing strategies include real-time : traffic responsive methods, as well as refund options a...

  4. A method for performance comparison of polycentric knees and its application to the design of a knee for developing countries.

    PubMed

    Anand, T S; Sujatha, S

    2017-08-01

    Polycentric knees for transfemoral prostheses have a variety of geometries, but a survey of the literature shows that there are few ways of comparing their performance. Our objective was to present a method for performance comparison of polycentric knee geometries and to design a new geometry. In this work, we define parameters to compare various commercially available prosthetic knees in terms of their stability, toe clearance, maximum flexion, and so on, and optimize the parameters to obtain a new knee design. We use the defined parameters and optimization to design a new knee geometry that provides the greater stability and toe clearance necessary to navigate uneven terrain, which is typically encountered in developing countries. Several commercial knees were compared based on the defined parameters to determine their suitability for uneven terrain. A new knee was designed based on optimization of these parameters. Preliminary user testing indicates that the new knee is very stable and easy to use. The methodology can be used for better knee selection and design of more customized knee geometries. Clinical relevance: The method provides a tool to aid in the selection and design of polycentric knees for transfemoral prostheses.

  5. Comparison of radiofrequency and transoral robotic surgery in obstructive sleep apnea syndrome treatment.

    PubMed

    Aynacı, Engin; Karaman, Murat; Kerşin, Burak; Fındık, Mahmut Ozan

    2018-05-01

    Radiofrequency tissue ablation (RFTA) and transoral robotic surgery (TORS) are methods used in OSAS surgery. We aimed to compare the advantages and disadvantages of RFTA and TORS as treatment methods in OSAS patients across many parameters, especially the apnea-hypopnea index (AHI). Patients were classified by performing a detailed examination and evaluation before surgery. 20 patients treated with anterior palatoplasty and uvulectomy -/+ tonsillectomy + RFTA (17 males, 3 females) and 20 patients treated with anterior palatoplasty and uvulectomy -/+ tonsillectomy + TORS (16 males, 4 females) were included in the study. PSG was performed preoperatively and postoperatively in all patients, and the Epworth sleepiness questionnaire was applied. All operations were performed by the same surgeon, and the two surgical methods (RFTA and TORS) were compared across many parameters. When patients treated with RFTA and TORS were compared on operation time, length of hospitalization, and time to return to oral feeding, all of these parameters were significantly greater in the patients treated with TORS. The TORS technique was found to be more successful than RFTA in reducing the AHI, improving the minimum arterial oxygen saturation, and decreasing the Epworth Sleepiness Scale score.

  6. Evaluation of Asphalt Mixture Low-Temperature Performance in Bending Beam Creep Test.

    PubMed

    Pszczola, Marek; Jaczewski, Mariusz; Rys, Dawid; Jaskula, Piotr; Szydlowski, Cezary

    2018-01-10

    Low-temperature cracking is one of the most common road pavement distress types in Poland. While bitumen performance can be evaluated in detail using bending beam rheometer (BBR) or dynamic shear rheometer (DSR) tests, none of the normalized test methods gives a comprehensive representation of low-temperature performance of the asphalt mixtures. This article presents the Bending Beam Creep test performed at temperatures from -20 °C to +10 °C in order to evaluate the low-temperature performance of asphalt mixtures. Both validation of the method and its utilization for the assessment of eight types of wearing courses commonly used in Poland were described. The performed test indicated that the source of bitumen and its production process (and not necessarily only bitumen penetration) had a significant impact on the low-temperature performance of the asphalt mixtures, comparable to the impact of binder modification (neat, polymer-modified, highly modified) and the aggregate skeleton used in the mixture (Stone Mastic Asphalt (SMA) vs. Asphalt Concrete (AC)). Obtained Bending Beam Creep test results were compared with the BBR bitumen test. Regression analysis confirmed that performing solely bitumen tests is insufficient for comprehensive low-temperature performance analysis.

  7. Evaluation of Asphalt Mixture Low-Temperature Performance in Bending Beam Creep Test

    PubMed Central

    Rys, Dawid; Jaskula, Piotr; Szydlowski, Cezary

    2018-01-01

    Low-temperature cracking is one of the most common road pavement distress types in Poland. While bitumen performance can be evaluated in detail using bending beam rheometer (BBR) or dynamic shear rheometer (DSR) tests, none of the normalized test methods gives a comprehensive representation of low-temperature performance of the asphalt mixtures. This article presents the Bending Beam Creep test performed at temperatures from −20 °C to +10 °C in order to evaluate the low-temperature performance of asphalt mixtures. Both validation of the method and its utilization for the assessment of eight types of wearing courses commonly used in Poland were described. The performed test indicated that the source of bitumen and its production process (and not necessarily only bitumen penetration) had a significant impact on the low-temperature performance of the asphalt mixtures, comparable to the impact of binder modification (neat, polymer-modified, highly modified) and the aggregate skeleton used in the mixture (Stone Mastic Asphalt (SMA) vs. Asphalt Concrete (AC)). Obtained Bending Beam Creep test results were compared with the BBR bitumen test. Regression analysis confirmed that performing solely bitumen tests is insufficient for comprehensive low-temperature performance analysis. PMID:29320443

  8. Mapping Health Data: Improved Privacy Protection With Donut Method Geomasking

    PubMed Central

    Hampton, Kristen H.; Fitch, Molly K.; Allshouse, William B.; Doherty, Irene A.; Gesink, Dionne C.; Leone, Peter A.; Serre, Marc L.; Miller, William C.

    2010-01-01

    A major challenge in mapping health data is protecting patient privacy while maintaining the spatial resolution necessary for spatial surveillance and outbreak identification. A new adaptive geomasking technique, referred to as the donut method, extends current methods of random displacement by ensuring a user-defined minimum level of geoprivacy. In donut method geomasking, each geocoded address is relocated in a random direction by at least a minimum distance, but less than a maximum distance. The authors compared the donut method with current methods of random perturbation and aggregation regarding measures of privacy protection and cluster detection performance by masking multiple disease field simulations under a range of parameters. Both the donut method and random perturbation performed better than aggregation in cluster detection measures. The performance of the donut method in geoprivacy measures was at least 42.7% higher and in cluster detection measures was less than 4.8% lower than that of random perturbation. Results show that the donut method provides a consistently higher level of privacy protection with a minimal decrease in cluster detection performance, especially in areas where the risk to individual geoprivacy is greatest. PMID:20817785

  9. Mapping health data: improved privacy protection with donut method geomasking.

    PubMed

    Hampton, Kristen H; Fitch, Molly K; Allshouse, William B; Doherty, Irene A; Gesink, Dionne C; Leone, Peter A; Serre, Marc L; Miller, William C

    2010-11-01

    A major challenge in mapping health data is protecting patient privacy while maintaining the spatial resolution necessary for spatial surveillance and outbreak identification. A new adaptive geomasking technique, referred to as the donut method, extends current methods of random displacement by ensuring a user-defined minimum level of geoprivacy. In donut method geomasking, each geocoded address is relocated in a random direction by at least a minimum distance, but less than a maximum distance. The authors compared the donut method with current methods of random perturbation and aggregation regarding measures of privacy protection and cluster detection performance by masking multiple disease field simulations under a range of parameters. Both the donut method and random perturbation performed better than aggregation in cluster detection measures. The performance of the donut method in geoprivacy measures was at least 42.7% higher and in cluster detection measures was less than 4.8% lower than that of random perturbation. Results show that the donut method provides a consistently higher level of privacy protection with a minimal decrease in cluster detection performance, especially in areas where the risk to individual geoprivacy is greatest.
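
    Because the donut method is fully specified by its two radii (each point is displaced in a random direction by a distance between r_min and r_max), it is compact to implement. A sketch under stated assumptions: coordinates are projected (metres), and the displacement distance is drawn with a square-root transform so relocated points are uniform over the annulus, which is one implementation choice rather than necessarily the authors':

```python
# Sketch of donut geomasking: random direction, displacement distance bounded
# by [r_min, r_max]. Area-uniform distance sampling is an assumption here.
import numpy as np

def donut_mask(xy, r_min, r_max, rng):
    n = len(xy)
    theta = rng.uniform(0, 2 * np.pi, n)                # random direction
    r = np.sqrt(rng.uniform(r_min**2, r_max**2, n))     # area-uniform distance
    return xy + np.column_stack([r * np.cos(theta), r * np.sin(theta)])

rng = np.random.default_rng(6)
addresses = rng.uniform(0, 1000, (5, 2))                # projected coords, metres
masked = donut_mask(addresses, r_min=100, r_max=500, rng=rng)
print(np.linalg.norm(masked - addresses, axis=1))       # all within [100, 500]
```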

  10. Credit allocation for research institutes

    NASA Astrophysics Data System (ADS)

    Wang, J.-P.; Guo, Q.; Yang, K.; Han, J.-T.; Liu, J.-G.

    2017-05-01

    Assessing the research performance of multiple institutes is a challenging task. Considering that it is unfair to split a paper's credit evenly among institutes that appear in different positions in its author list, we present a credit allocation method (CAM) with a weighted order coefficient for multiple institutes. The results for the APS dataset with 18,987 institutes show that the top-ranked institutes obtained by the CAM method correspond to well-known universities or research labs with high reputations in physics. Moreover, we evaluate the performance of the CAM method when citation links are added or rewired randomly, quantified by Kendall's tau and the Jaccard index. The experimental results indicate that the CAM method is more robust than the total number of citations (TC) method and Shen's method. Finally, we give the top 20 Chinese universities in physics obtained by the CAM method; the method is equally valid for any other branch of science, not just physics. The proposed method also provides universities and policy makers with an effective tool to quantify and balance the academic performance of universities.

  11. The importance of quality control in validating concentrations of contaminants of emerging concern in source and treated drinking water samples.

    PubMed

    Batt, Angela L; Furlong, Edward T; Mash, Heath E; Glassmeyer, Susan T; Kolpin, Dana W

    2017-02-01

    A national-scale survey of 247 contaminants of emerging concern (CECs), including organic and inorganic chemical compounds and microbial contaminants, was conducted in source and treated drinking water samples from 25 treatment plants across the United States. Multiple methods were used to determine these CECs, including six analytical methods to measure 174 pharmaceuticals, personal care products, and pesticides. A three-component quality assurance/quality control (QA/QC) program was designed for the subset of 174 CECs, which allowed us to assess and compare the performance of the methods used. The three components included: 1) a common field QA/QC protocol and sample design, 2) individual investigator-developed method-specific QA/QC protocols, and 3) a suite of 46 method comparison analytes that were determined in two or more analytical methods. Overall method performance for the 174 organic chemical CECs was assessed by comparing spiked recoveries in reagent, source, and treated water over a two-year period. In addition to the 247 CECs reported in the larger drinking water study, another 48 pharmaceutical compounds measured did not consistently meet predetermined quality standards. Methodologies that proved unsuitable for these analytes are reviewed. The need to exclude analytes based on method performance demonstrates the importance of additional QA/QC protocols. Published by Elsevier B.V.

  12. Impact of Hybrid Delivery of Education on Student Academic Performance and the Student Experience

    PubMed Central

    Nutter, Douglas A.; Charneski, Lisa; Butko, Peter

    2009-01-01

    Objectives To compare student academic performance and the student experience in the first-year doctor of pharmacy (PharmD) program between the main and newly opened satellite campuses of the University of Maryland. Methods Student performance indicators including graded assessments, course averages, cumulative first-year grade point average (GPA), and introductory pharmacy practice experience (IPPE) evaluations were analyzed retrospectively. Student experience indicators were obtained via an online survey instrument and included involvement in student organizations; time-budgeting practices; and stress levels and their perceived effect on performance. Results Graded assessments, course averages, GPA, and IPPE evaluations were indistinguishable between campuses. Students' time allocation was not different between campuses, except for time spent attending class and watching lecture videos. There was no difference between students' stress levels at each campus. Conclusions The implementation of a satellite campus to expand pharmacy education yielded academic performance and student engagement comparable to those from traditional delivery methods. PMID:19960080

  13. Multiple-instance ensemble learning for hyperspectral images

    NASA Astrophysics Data System (ADS)

    Ergul, Ugur; Bilgin, Gokhan

    2017-10-01

    An ensemble framework for multiple-instance (MI) learning (MIL) is introduced for use with hyperspectral images (HSIs), inspired by the bagging (bootstrap aggregation) method in ensemble learning. Ensemble-based bagging is performed with a small percentage of the training samples, and MI bags are formed by a local windowing process with variable window sizes on the selected instances. In addition to bootstrap aggregation, random subspace is used as a second method to diversify the base classifiers. The proposed method is implemented using four MIL classification algorithms. The classifier model learning phase is carried out with MI bags, and the estimation phase is performed over single test instances. In the experimental part of the study, two different HSIs that have ground-truth information are used, and comparative results are demonstrated against state-of-the-art classification methods. In general, the MI ensemble approach produces more compact results in terms of both diversity and error compared to equivalent non-MIL algorithms.

  14. Pulse compression of harmonic chirp signals using the fractional fourier transform.

    PubMed

    Arif, M; Cowell, D M J; Freear, S

    2010-06-01

    In ultrasound harmonic imaging with chirp-coded excitation, a harmonic matched filter (HMF) is typically used on the received signal to perform pulse compression of the second harmonic component (SHC) to recover signal axial resolution. Designing the HMF for the compression of the SHC is a problematic issue because it requires optimal window selection. In the compressed second harmonic signal, the sidelobe level may increase and the mainlobe width (MLW) widen under a mismatched condition, resulting in loss of axial resolution. We propose the use of the fractional Fourier transform (FrFT) as an alternative tool to perform compression of the chirp-coded SHC generated as a result of the nonlinear propagation of an ultrasound signal. Two methods are used to experimentally assess the performance benefits of the FrFT technique over the HMF techniques. The first method uses chirp excitation with central frequency of 2.25 MHz and bandwidth of 1 MHz. The second method uses chirp excitation with pulse inversion to increase the bandwidth to 2 MHz. In this study, experiments were performed in a water tank with a single-element transducer mounted coaxially with a hydrophone in a pitch-catch configuration. Results are presented that indicate that the FrFT can perform pulse compression of the second harmonic chirp component, with a 14% reduction in the MLW of the compressed signal when compared with the HMF. Also, the FrFT provides at least 23% reduction in the MLW of the compressed signal when compared with the harmonic mismatched filter (HMMF). The FrFT maintains comparable peak and integrated sidelobe levels when compared with the HMF and HMMF techniques. Copyright 2010 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
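
    A quick way to see what the harmonic matched filter above does is to correlate an idealized second-harmonic chirp with a windowed, frequency-doubled replica and measure the mainlobe of the compressed output. The following is a minimal sketch, assuming a linear chirp with the study's 2.25 MHz centre frequency and 1 MHz bandwidth and ideal second-harmonic generation; the sampling rate and window choice are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.signal import chirp, get_window, hilbert

    fs = 50e6                         # sampling rate (Hz), illustrative
    T = 10e-6                         # chirp duration (s)
    t = np.arange(0, T, 1 / fs)

    # Fundamental excitation: 2.25 MHz centre frequency, 1 MHz bandwidth
    f0, f1 = 1.75e6, 2.75e6
    # Idealized second-harmonic component: the frequency sweep is doubled
    shc = chirp(t, f0=2 * f0, t1=T, f1=2 * f1)

    # Harmonic matched filter: time-reversed, windowed second-harmonic replica;
    # the window choice is the critical design trade-off discussed in the abstract
    w = get_window("hann", len(shc))
    hmf = (shc * w)[::-1]

    compressed = np.convolve(shc, hmf, mode="same")
    envelope = np.abs(hilbert(compressed))
    envelope /= envelope.max()

    # Mainlobe width at -6 dB as a simple axial-resolution proxy
    above = np.where(envelope >= 0.5)[0]
    print(f"-6 dB mainlobe width: {(above[-1] - above[0]) / fs * 1e6:.3f} us")
    ```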

  15. Validating fatty acid intake as estimated by an FFQ: how does the 24 h recall perform as reference method compared with the duplicate portion?

    PubMed

    Trijsburg, Laura; de Vries, Jeanne Hm; Hollman, Peter Ch; Hulshof, Paul Jm; van 't Veer, Pieter; Boshuizen, Hendriek C; Geelen, Anouk

    2018-05-08

    To compare the performance of the commonly used 24 h recall (24hR) with the more distinct duplicate portion (DP) as reference method for validation of fatty acid intake estimated with an FFQ. Intakes of SFA, MUFA, n-3 fatty acids and linoleic acid (LA) were estimated by chemical analysis of two DP and by on average five 24hR and two FFQ. Plasma n-3 fatty acids and LA were used to objectively compare ranking of individuals based on DP and 24hR. Multivariate measurement error models were used to estimate validity coefficients and attenuation factors for the FFQ with the DP and 24hR as reference methods. Wageningen, the Netherlands. Ninety-two men and 106 women (aged 20-70 years). Validity coefficients for the fatty acid estimates by the FFQ tended to be lower when using the DP as reference method compared with the 24hR. Attenuation factors for the FFQ tended to be slightly higher based on the DP than those based on the 24hR as reference method. Furthermore, when using plasma fatty acids as reference, the DP showed comparable to slightly better ranking of participants according to their intake of n-3 fatty acids (0·33) and n-3:LA (0·34) than the 24hR (0·22 and 0·24, respectively). The 24hR gives only slightly different results compared with the distinctive but less feasible DP, therefore use of the 24hR seems appropriate as the reference method for FFQ validation of fatty acid intake.

  16. Training Feedforward Neural Networks Using Symbiotic Organisms Search Algorithm.

    PubMed

    Wu, Haizhou; Zhou, Yongquan; Luo, Qifang; Basset, Mohamed Abdel

    2016-01-01

    Symbiotic organisms search (SOS) is a new robust and powerful metaheuristic algorithm, which simulates the symbiotic interaction strategies adopted by organisms to survive and propagate in the ecosystem. In the supervised learning area, it is a challenging task to present a satisfactory and efficient training algorithm for feedforward neural networks (FNNs). In this paper, SOS is employed as a new method for training FNNs. To investigate the performance of the aforementioned method, eight different datasets selected from the UCI machine learning repository are employed for the experiments, and the results are compared across seven metaheuristic algorithms. The results show that SOS performs better than the other algorithms for training FNNs in terms of converging speed. It is also shown that an FNN trained by SOS has better accuracy than most of the compared algorithms.
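
    Because the abstract describes SOS only at a high level, a compact sketch of its three phases (mutualism, commensalism, parasitism) may help; it minimizes a simple benchmark function, whereas for FNN training the decision vector would hold the network weights and the fitness would be the training error. Population size, benefit factors, and the test function are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np

    def sos_minimize(f, dim, bounds, n_org=30, iters=200, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        X = rng.uniform(lo, hi, (n_org, dim))          # ecosystem of candidate solutions
        fit = np.apply_along_axis(f, 1, X)
        for _ in range(iters):
            best = X[fit.argmin()]
            for i in range(n_org):
                j = rng.choice([k for k in range(n_org) if k != i])
                # Mutualism: i and j both move toward the best, guided by their mutual vector
                mv = (X[i] + X[j]) / 2
                bf1, bf2 = rng.integers(1, 3, size=2)  # benefit factors in {1, 2}
                for idx, bf in ((i, bf1), (j, bf2)):
                    cand = np.clip(X[idx] + rng.random(dim) * (best - mv * bf), lo, hi)
                    fc = f(cand)
                    if fc < fit[idx]:
                        X[idx], fit[idx] = cand, fc
                # Commensalism: i benefits from interacting with j; j is unaffected
                cand = np.clip(X[i] + rng.uniform(-1, 1, dim) * (best - X[j]), lo, hi)
                fc = f(cand)
                if fc < fit[i]:
                    X[i], fit[i] = cand, fc
                # Parasitism: a mutated copy of i tries to replace j
                para = X[i].copy()
                mask = rng.random(dim) < 0.5
                para[mask] = rng.uniform(lo, hi, mask.sum())
                fp = f(para)
                if fp < fit[j]:
                    X[j], fit[j] = para, fp
        return X[fit.argmin()], fit.min()

    # Example: minimize the sphere function as a stand-in for an FNN training loss
    sol, val = sos_minimize(lambda w: np.sum(w**2), dim=10, bounds=(-5, 5))
    print(val)
    ```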

  17. Comparative study of organic transistors with different graphene electrodes fabricated using a simple patterning method

    NASA Astrophysics Data System (ADS)

    Kang, Narae; Smith, Christian W.; Ishigami, Masa; Khondaker, Saiful I.

    2017-12-01

    The performance of organic field-effect transistors (OFETs) can be greatly limited due to the inefficient charge injection caused by the large interfacial barrier at the metal/organic semiconductor interface. To improve this, two-dimensional graphene films have been suggested as alternative electrode materials; however, a comparative study of OFET performances using different types of graphene electrodes has not been systematically investigated. Here, we present a comparative study on the performance of pentacene OFETs using chemical vapor deposition (CVD) grown graphene and reduced graphene oxide (RGO) as electrodes. The large area electrodes were patterned using a simple and environmentally benign patterning technique. Although both the CVD graphene and RGO electrodes showed enhanced device performance compared to metal electrodes, we found the maximum performance enhancement from CVD grown graphene electrodes. Our study suggests that, in addition to the strong π-π interaction at the graphene/organic interface, the higher conductivity of the electrodes also plays an important role in the performance of OFETs.

  18. A SOFTWARE TOOL TO COMPARE MEASURED AND SIMULATED BUILDING ENERGY PERFORMANCE DATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maile, Tobias; Bazjanac, Vladimir; O'Donnell, James

    2011-11-01

    Building energy performance is often inadequate when compared to design goals. To link design goals to actual operation one can compare measured with simulated energy performance data. Our previously developed comparison approach is the Energy Performance Comparison Methodology (EPCM), which enables the identification of performance problems based on a comparison of measured and simulated performance data. In the context of this method, we developed a software tool that provides graphing and data processing capabilities for the two performance data sets. The software tool called SEE IT (Stanford Energy Efficiency Information Tool) eliminates the need for manual generation of data plots and data reformatting. SEE IT makes the generation of time series, scatter and carpet plots independent of the source of data (measured or simulated) and provides a valuable tool for comparing measurements with simulation results. SEE IT also allows assigning data points on a predefined building object hierarchy and supports different versions of simulated performance data. This paper briefly introduces the EPCM, describes the SEE IT tool and illustrates its use in the context of a building case study.

  19. Multiple Criteria and Multiple Periods Performance Analysis: The Comparison of North African Railways

    NASA Astrophysics Data System (ADS)

    Sabri, Karim; Colson, Gérard E.; Mbangala, Augustin M.

    2008-10-01

    Multi-period differences in technical and financial performance are analysed by comparing five North African railways over the period 1990-2004. A first approach is based on the Malmquist DEA TFP index for measuring total factor productivity change, decomposed into technical efficiency change and technological change. A multiple criteria analysis is also performed using the PROMETHEE II method and the software ARGOS. These methods provide complementary detailed information: Malmquist discriminates between technological and management progress, while PROMETHEE captures two dimensions of performance that are often in conflict, namely service to the community and enterprise performance.

  20. Comparison of traditional advanced cardiac life support (ACLS) course instruction vs. a scenario-based, performance oriented team instruction (SPOTI) method for Korean paramedic students.

    PubMed

    Lee, Christopher C; Im, Mark; Kim, Tae Min; Stapleton, Edward R; Kim, Kyuseok; Suh, Gil Joon; Singer, Adam J; Henry, Mark C

    2010-01-01

    Current Advanced Cardiac Life Support (ACLS) course instruction involves a 2-day course with traditional lectures and limited team interaction. We wished to explore the advantages of a scenario-based, performance-oriented team instruction (SPOTI) method to implement core ACLS skills for non-English-speaking international paramedic students. The objective of this study was to determine if SPOTI improves educational outcomes for the ACLS instruction of Korean paramedic students. Thirty Korean paramedic students were randomly selected into two groups. One group of 15 students was taught the traditional ACLS course. The other 15 students were instructed using the SPOTI method. Each group was tested using ACLS megacode examinations endorsed by the American Heart Association. All 30 students passed the ACLS megacode examination. In the traditional ACLS study group an average of 85% of the core skills were met. In the SPOTI study group an average of 93% of the core skills were met. In particular, the SPOTI study group excelled at physical examination skills such as airway opening, assessment of breathing, signs of circulation, and compression rates. In addition, the SPOTI group scored higher on rhythm recognition than the traditional group, whereas the traditional group scored higher on providing proper drug dosages. Overall, students instructed with the SPOTI method achieved higher megacode core-compliance scores than students trained with traditional ACLS course instruction, although these differences did not achieve statistical significance due to the small sample size. Copyright 2010 Elsevier Inc. All rights reserved.

  1. Radioanalytical Chemistry for Automated Nuclear Waste Process Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devol, Timothy A.

    2005-06-01

    Comparison of different pulse shape discrimination methods was performed under two different experimental conditions and the best method was identified. Beta/gamma discrimination of 90Sr/90Y and 137Cs was performed using a phoswich detector made of BC400 (2.5 cm OD x 1.2 cm) and BGO (2.5 cm OD x 2.5 cm) scintillators. Alpha/gamma discrimination of 210Po and 137Cs was performed using a CsI:Tl (2.8 x 1.4 x 1.4 cm3) scintillation crystal. The pulse waveforms were digitized with a DGF-4c (X-Ray Instrumentation Associates) and analyzed offline with IGOR Pro software (Wavemetrics, Inc.). The four pulse shape discrimination methods that were compared include: rise time discrimination, digital constant fraction discrimination, charge ratio, and constant time discrimination (CTD) methods. The CTD method takes the ratio of the pulse height at a particular time after the beginning of the pulse to the maximum pulse height. The charge comparison method resulted in a Figure of Merit (FoM) of 3.3 (9.9% spillover) and 3.7 (0.033% spillover) for the phoswich and the CsI:Tl scintillator setups, respectively. The CTD method resulted in a FoM of 3.9 (9.2% spillover) and 3.2 (0.25% spillover), respectively. Inverting the pulse shape data typically resulted in a significantly higher FoM than conventional methods, but there was no reduction in the % spillover values. This outcome illustrates that the FoM may not be a good metric for quantifying a system's ability to perform pulse shape discrimination. Comparison of several pulse shape discrimination (PSD) methods was performed as a means to compare traditional analog and digital PSD methods on the same scintillation pulses. The X-Ray Instrumentation Associates DGF-4C (40 Msps, 14-bit) was used to digitize waveforms from a CsI:Tl crystal and a BC400/BGO phoswich detector.
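
    The charge-ratio method mentioned in the list above is simple to prototype on digitized pulses: integrate the tail of each pulse, divide by the total integral, and compute a figure of merit from the separation of the two particle populations. A minimal sketch on synthetic pulses; the decay constants, gate position, and the Gaussian FWHM-based FoM definition are stated assumptions rather than the study's exact settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(0, 400, 1.0)  # sample index (e.g., 25 ns/sample at 40 Msps)

    def pulses(tau_slow, frac_slow, n=2000):
        """Synthetic scintillation pulses: fast + slow exponential components plus noise."""
        shape = (1 - frac_slow) * np.exp(-t / 20.0) + frac_slow * np.exp(-t / tau_slow)
        return shape + rng.normal(0, 0.01, (n, t.size))

    gammas = pulses(tau_slow=150.0, frac_slow=0.10)   # small slow component
    alphas = pulses(tau_slow=150.0, frac_slow=0.35)   # larger slow component

    def charge_ratio(p, tail_start=60):
        """Tail integral over total integral for each pulse."""
        return p[:, tail_start:].sum(axis=1) / p.sum(axis=1)

    def fom(a, b):
        """Figure of merit: peak separation over the sum of the FWHMs (Gaussian assumption)."""
        fwhm = lambda x: 2.355 * x.std()
        return abs(a.mean() - b.mean()) / (fwhm(a) + fwhm(b))

    print("FoM:", fom(charge_ratio(gammas), charge_ratio(alphas)))
    ```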

  2. The diagnostic performance of leak-plugging automated segmentation versus manual tracing of breast lesions on ultrasound images.

    PubMed

    Xiong, Hui; Sultan, Laith R; Cary, Theodore W; Schultz, Susan M; Bouzghar, Ghizlane; Sehgal, Chandra M

    2017-05-01

    To assess the diagnostic performance of a leak-plugging segmentation method that we have developed for delineating breast masses on ultrasound images. Fifty-two biopsy-proven breast lesion images were analyzed by three observers using the leak-plugging and manual segmentation methods. From each segmentation method, grayscale and morphological features were extracted and classified as malignant or benign by logistic regression analysis. The performance of leak-plugging and manual segmentations was compared by: size of the lesion, overlap area (Oa) between the margins, and area under the ROC curves (Az). The lesion size from leak-plugging segmentation correlated closely with that from manual tracing (R2 of 0.91). Oa was higher for leak plugging, 0.92 ± 0.01 and 0.86 ± 0.06 for benign and malignant masses, respectively, compared to 0.80 ± 0.04 and 0.73 ± 0.02 for manual tracings. Overall Oa between leak-plugging and manual segmentations was 0.79 ± 0.14 for benign and 0.73 ± 0.14 for malignant lesions. Az for leak plugging was consistently higher (0.910 ± 0.003) compared to 0.888 ± 0.012 for manual tracings. The coefficient of variation of Az between three observers was 0.29% for leak plugging compared to 1.3% for manual tracings. The diagnostic performance, size measurements, and observer variability for automated leak-plugging segmentations were either comparable to or better than those of manual tracings.
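
    The overlap area Oa between two delineations can be computed directly from binary masks. The abstract does not give its exact definition, so the intersection-over-union form below is an assumption (the Dice coefficient would be an equally common choice).

    ```python
    import numpy as np

    def overlap_area(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
        """Intersection-over-union of two boolean segmentation masks."""
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union else 1.0

    # Toy example: two slightly shifted elliptical lesion contours
    yy, xx = np.mgrid[0:200, 0:200]
    auto = ((xx - 100)**2 / 60**2 + (yy - 100)**2 / 40**2) <= 1   # automated contour
    manual = ((xx - 105)**2 / 58**2 + (yy - 98)**2 / 42**2) <= 1  # manual tracing
    print(f"Oa = {overlap_area(auto, manual):.2f}")
    ```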

  3. Nonlinear least squares regression for single image scanning electron microscope signal-to-noise ratio estimation.

    PubMed

    Sim, K S; Norhisham, S

    2016-11-01

    A new method based on nonlinear least squares regression (NLLSR) is formulated to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. The estimation of the SNR value based on the NLLSR method is compared with three existing methods: nearest neighbourhood, first-order interpolation, and the combination of both nearest neighbourhood and first-order interpolation. Samples of SEM images with different textures, contrasts, and edges were used to test the performance of the NLLSR method in estimating the SNR values of the SEM images. It is shown that the NLLSR method is able to produce better estimation accuracy than the other three existing methods. According to the SNR results obtained from the experiment, the NLLSR method produces an SNR error difference of less than approximately 1% compared with the other three existing methods. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
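
    Autocorrelation-based single-image SNR estimators of this family share one idea: the zero-lag autocorrelation contains signal plus noise power, while neighbouring lags are nearly noise-free, so extrapolating a fitted model back to zero lag separates the two. A minimal sketch in the spirit of NLLSR, assuming a Gaussian autocorrelation model; it is not the authors' exact formulation.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def estimate_snr(img: np.ndarray, max_lag: int = 10) -> float:
        """Single-image SNR estimate: extrapolate the noise-free autocorrelation to zero lag."""
        x = img.astype(float) - img.mean()
        # Average autocorrelation along rows for lags 0..max_lag
        acf = np.array([(x[:, :x.shape[1] - k] * x[:, k:]).mean()
                        for k in range(max_lag + 1)])
        lags = np.arange(1, max_lag + 1)
        gauss = lambda k, a, s: a * np.exp(-k**2 / (2 * s**2))
        (a, s), _ = curve_fit(gauss, lags, acf[1:], p0=(acf[1], 2.0))
        signal_power = gauss(0, a, s)          # extrapolated noise-free ACF at lag 0
        noise_power = acf[0] - signal_power    # zero-lag excess is the noise variance
        return 10 * np.log10(signal_power / noise_power)

    # Toy test: smooth structure plus white Gaussian noise
    rng = np.random.default_rng(2)
    yy, xx = np.mgrid[0:256, 0:256]
    clean = np.sin(xx / 12.0) * np.cos(yy / 17.0)
    noisy = clean + rng.normal(0, 0.3, clean.shape)
    print(f"estimated SNR: {estimate_snr(noisy):.1f} dB")
    ```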

  4. Corrosion performance tests for reinforcing steel in concrete : test procedures.

    DOT National Transportation Integrated Search

    2009-09-01

    The existing test method to assess the corrosion performance of reinforcing steel embedded in concrete, mainly ASTM G109, is labor intensive, time consuming, slow to provide comparative results, and often expensive. However, corrosion of reinforc...

  5. Corrosion performance tests for reinforcing steel in concrete : technical report.

    DOT National Transportation Integrated Search

    2009-10-01

    The existing test method used to assess the corrosion performance of reinforcing steel embedded in concrete, mainly ASTM G 109, is labor intensive, time consuming, slow to provide comparative results, and can be expensive. However, with corrosion...

  6. Simulated Annealing in the Variable Landscape

    NASA Astrophysics Data System (ADS)

    Hasegawa, Manabu; Kim, Chang Ju

    An experimental analysis is conducted to test whether the appropriate introduction of a smoothness-temperature schedule enhances the optimizing ability of the MASSS method, the combination of the Metropolis algorithm (MA) and the search-space smoothing (SSS) method. The test is performed on two types of random traveling salesman problems. The results show that the optimization performance of the MA is substantially improved by a single smoothing alone, and slightly more so by a single smoothing with cooling and by a de-smoothing process with heating. The performance is compared to that of the parallel tempering method, and a clear advantage of the idea of smoothing is observed, depending on the problem.

  7. A field study of selected U.S. Geological Survey analytical methods for measuring pesticides in filtered stream water, June - September 2012

    USGS Publications Warehouse

    Martin, Jeffrey D.; Norman, Julia E.; Sandstrom, Mark W.; Rose, Claire E.

    2017-09-06

    U.S. Geological Survey monitoring programs extensively used two analytical methods, gas chromatography/mass spectrometry and liquid chromatography/mass spectrometry, to measure pesticides in filtered water samples during 1992–2012. In October 2012, the monitoring programs began using direct aqueous-injection liquid chromatography tandem mass spectrometry as a new analytical method for pesticides. The change in analytical methods, however, has the potential to inadvertently introduce bias in analysis of datasets that span the change. A field study was designed to document performance of the new method in a variety of stream-water matrices and to quantify any potential changes in measurement bias or variability that could be attributed to changes in analytical methods. The goals of the field study were to (1) summarize performance (bias and variability of pesticide recovery) of the new method in a variety of stream-water matrices; (2) compare performance of the new method in laboratory blank water (laboratory reagent spikes) to that in a variety of stream-water matrices; (3) compare performance (analytical recovery) of the new method to that of the old methods in a variety of stream-water matrices; (4) compare pesticide detections and concentrations measured by the new method to those of the old methods in a variety of stream-water matrices; (5) compare contamination measured by field blank water samples in old and new methods; (6) summarize the variability of pesticide detections and concentrations measured by the new method in field duplicate water samples; and (7) identify matrix characteristics of environmental water samples that adversely influence the performance of the new method. Stream-water samples and a variety of field quality-control samples were collected at 48 sites in the U.S. Geological Survey monitoring networks during June–September 2012. Stream sites were located across the United States and included sites in agricultural and urban land-use settings, as well as sites on major rivers. The results of the field study identified several challenges for the analysis and interpretation of data analyzed by both old and new methods, particularly when data span the change in methods and are combined for analysis of temporal trends in water quality. The main challenges identified are large (greater than 30 percent), statistically significant differences in analytical recovery, detection capability, and (or) measured concentrations for selected pesticides. These challenges are documented and discussed, but specific guidance or statistical methods to resolve these differences in methods are beyond the scope of the report. The results of the field study indicate that the implications of the change in analytical methods must be assessed individually for each pesticide and method. Understanding the possible causes of the systematic differences in concentrations between methods that remain after recovery adjustment might be necessary to determine how to account for the differences in data analysis. Because recoveries for each method are independently determined from separate reference standards and spiking solutions, the differences might be due to an error in one of the reference standards or solutions or some other basic aspect of standard procedure in the analytical process. Further investigation of the possible causes is needed, which will lead to specific decisions on how to compensate for these differences in concentrations in data analysis.
In the event that further investigations do not provide insight into the causes of systematic differences in concentrations between methods, the authors recommend continuing to collect and analyze paired environmental water samples by both old and new methods. This effort should be targeted to seasons, sites, and expected concentrations to supplement those concentrations already assessed and to compare the ongoing analytical recovery of old and new methods to those observed in the summer and fall of 2012.

  8. Sea Ice Detection Based on an Improved Similarity Measurement Method Using Hyperspectral Data.

    PubMed

    Han, Yanling; Li, Jue; Zhang, Yun; Hong, Zhonghua; Wang, Jing

    2017-05-15

    Hyperspectral remote sensing technology can acquire nearly continuous spectrum information and rich sea ice image information, thus providing an important means of sea ice detection. However, the correlation and redundancy among hyperspectral bands reduce the accuracy of traditional sea ice detection methods. Based on the spectral characteristics of sea ice, this study presents an improved similarity measurement method based on linear prediction (ISMLP) to detect sea ice. First, the first original band with a large amount of information is determined based on mutual information theory. Subsequently, a second original band with the least similarity is chosen by the spectral correlation measuring method. Finally, subsequent bands are selected through the linear prediction method, and a support vector machine classifier model is applied to classify sea ice. In experiments performed on images of Baffin Bay and Bohai Bay, comparative analyses were conducted to compare the proposed method and traditional sea ice detection methods. Our proposed ISMLP method achieved the highest classification accuracies (91.18% and 94.22%) in both experiments. These results indicate that the ISMLP method exhibits better overall performance than the other methods and can be effectively applied to hyperspectral sea ice detection.

  9. Sea Ice Detection Based on an Improved Similarity Measurement Method Using Hyperspectral Data

    PubMed Central

    Han, Yanling; Li, Jue; Zhang, Yun; Hong, Zhonghua; Wang, Jing

    2017-01-01

    Hyperspectral remote sensing technology can acquire nearly continuous spectrum information and rich sea ice image information, thus providing an important means of sea ice detection. However, the correlation and redundancy among hyperspectral bands reduce the accuracy of traditional sea ice detection methods. Based on the spectral characteristics of sea ice, this study presents an improved similarity measurement method based on linear prediction (ISMLP) to detect sea ice. First, the first original band with a large amount of information is determined based on mutual information theory. Subsequently, a second original band with the least similarity is chosen by the spectral correlation measuring method. Finally, subsequent bands are selected through the linear prediction method, and a support vector machine classifier model is applied to classify sea ice. In experiments performed on images of Baffin Bay and Bohai Bay, comparative analyses were conducted to compare the proposed method and traditional sea ice detection methods. Our proposed ISMLP method achieved the highest classification accuracies (91.18% and 94.22%) in both experiments. These results indicate that the ISMLP method exhibits better overall performance than the other methods and can be effectively applied to hyperspectral sea ice detection. PMID:28505135
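
    The band-selection pipeline described above can be sketched compactly: pick an informative first band, a maximally dissimilar second band, and then grow the set by linear prediction. In the sketch below, band variance stands in for the mutual-information criterion, and each new band is the one worst predicted from the bands already selected; these details are assumptions, not the authors' exact implementation.

    ```python
    import numpy as np

    def select_bands(cube: np.ndarray, n_select: int) -> list:
        """cube: (pixels, bands). Greedy band selection via linear prediction error."""
        X = cube - cube.mean(axis=0)
        first = int(np.argmax(X.var(axis=0)))              # proxy for the information criterion
        corr = np.abs(np.corrcoef(X, rowvar=False)[first])
        second = int(np.argmin(corr))                      # least similar to the first band
        selected = [first, second]
        while len(selected) < n_select:
            A = X[:, selected]
            # Least-squares prediction of every band from the selected ones
            coef, *_ = np.linalg.lstsq(A, X, rcond=None)
            err = ((X - A @ coef) ** 2).sum(axis=0)
            err[selected] = -np.inf                        # never re-select a chosen band
            selected.append(int(np.argmax(err)))
        return selected

    # Toy cube: 5000 pixels, 40 correlated bands
    rng = np.random.default_rng(3)
    base = rng.normal(size=(5000, 5))
    cube = base @ rng.normal(size=(5, 40)) + 0.05 * rng.normal(size=(5000, 40))
    print(select_bands(cube, 6))
    ```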

  10. Advancing hypoxic training in team sports: from intermittent hypoxic training to repeated sprint training in hypoxia.

    PubMed

    Faiss, Raphaël; Girard, Olivier; Millet, Grégoire P

    2013-12-01

    Over the past two decades, intermittent hypoxic training (IHT), that is, a method where athletes live at or near sea level but train under hypoxic conditions, has gained unprecedented popularity. By adding the stress of hypoxia during 'aerobic' or 'anaerobic' interval training, it is believed that IHT would potentiate greater performance improvements compared to similar training at sea level. A thorough analysis of studies including IHT, however, leads to strikingly poor benefits for sea-level performance improvement, compared to the same training method performed in normoxia. Despite the positive molecular adaptations observed after various IHT modalities, the characteristics of optimal training stimulus in hypoxia are still unclear and their functional translation in terms of whole-body performance enhancement is minimal. To overcome some of the inherent limitations of IHT (lower training stimulus due to hypoxia), recent studies have successfully investigated a new training method based on the repetition of short (<30 s) 'all-out' sprints with incomplete recoveries in hypoxia, the so-called repeated sprint training in hypoxia (RSH). The aims of the present review are therefore threefold: first, to summarise the main mechanisms for interval training and repeated sprint training in normoxia. Second, to critically analyse the results of the studies involving high-intensity exercises performed in hypoxia for sea-level performance enhancement by differentiating IHT and RSH. Third, to discuss the potential mechanisms underpinning the effectiveness of those methods, and their inherent limitations, along with the new research avenues surrounding this topic.

  11. Recombinant Temporal Aberration Detection Algorithms for Enhanced Biosurveillance

    PubMed Central

    Murphy, Sean Patrick; Burkom, Howard

    2008-01-01

    Objective Broadly, this research aims to improve the outbreak detection performance and, therefore, the cost effectiveness of automated syndromic surveillance systems by building novel, recombinant temporal aberration detection algorithms from components of previously developed detectors. Methods This study decomposes existing temporal aberration detection algorithms into two sequential stages and investigates the individual impact of each stage on outbreak detection performance. The data forecasting stage (Stage 1) generates predictions of time series values a certain number of time steps in the future based on historical data. The anomaly measure stage (Stage 2) compares features of this prediction to corresponding features of the actual time series to compute a statistical anomaly measure. A Monte Carlo simulation procedure is then used to examine the recombinant algorithms’ ability to detect synthetic aberrations injected into authentic syndromic time series. Results New methods obtained with procedural components of published, sometimes widely used, algorithms were compared to the known methods using authentic datasets with plausible stochastic injected signals. Performance improvements were found for some of the recombinant methods, and these improvements were consistent over a range of data types, outbreak types, and outbreak sizes. For gradual outbreaks, the WEWD MovAvg7+WEWD Z-Score recombinant algorithm performed best; for sudden outbreaks, the HW+WEWD Z-Score performed best. Conclusion This decomposition was found not only to yield valuable insight into the effects of the aberration detection algorithms but also to produce novel combinations of data forecasters and anomaly measures with enhanced detection performance. PMID:17947614
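
    The two-stage decomposition is straightforward to prototype, since any Stage 1 forecaster can be paired with any Stage 2 anomaly measure. A minimal sketch pairing a 7-day moving-average forecaster with a residual Z-score measure, analogous in spirit to the MovAvg7 + Z-Score recombinant above; the window lengths, baseline period, and alarm threshold are illustrative assumptions.

    ```python
    import numpy as np

    def movavg7_forecast(series: np.ndarray) -> np.ndarray:
        """Stage 1: predict each day as the mean of the preceding 7 days."""
        pred = np.full(len(series), np.nan)
        for t in range(7, len(series)):
            pred[t] = series[t - 7:t].mean()
        return pred

    def zscore_anomaly(series, pred, baseline=28):
        """Stage 2: scale each residual by the std of recent residuals."""
        resid = series - pred
        z = np.full(len(series), np.nan)
        for t in range(7 + baseline, len(series)):
            sd = np.nanstd(resid[t - baseline:t])
            z[t] = resid[t] / sd if sd > 0 else 0.0
        return z

    # Synthetic syndromic counts with an outbreak injected on days 150-156
    rng = np.random.default_rng(4)
    counts = rng.poisson(50, 200).astype(float)
    counts[150:157] += np.array([5, 10, 20, 30, 20, 10, 5])

    z = zscore_anomaly(counts, movavg7_forecast(counts))
    print("alarm days:", np.where(z > 3)[0])
    ```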

  12. Efficient reliability analysis of structures with the rotational quasi-symmetric point- and the maximum entropy methods

    NASA Astrophysics Data System (ADS)

    Xu, Jun; Dang, Chao; Kong, Fan

    2017-10-01

    This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.
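
    The maximum-entropy step can be sketched through its convex dual: choose Lagrange multipliers so that the density exp(-Σk λk y^αk) reproduces the prescribed fractional moments. In the sketch below the moments come from samples of a positive stand-in performance function rather than from RQ-SPM, and the fractional orders, grid quadrature, and failure threshold are simplifying assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(5)
    y = rng.lognormal(mean=0.5, sigma=0.4, size=20000)    # stand-in performance function samples
    alphas = np.array([0.5, 1.0, 1.5])                    # fixed fractional orders (assumption)
    mu = np.array([np.mean(y**a) for a in alphas])        # target fractional moments

    grid = np.linspace(1e-6, y.max() * 1.5, 4000)         # integration grid on the support
    powers = grid[None, :] ** alphas[:, None]

    def dual(lam):
        # MaxEnt dual: log Z(lam) + lam . mu ; its minimizer matches the moments
        logf = -(lam @ powers)
        logZ = np.log(np.trapz(np.exp(logf - logf.max()), grid)) + logf.max()
        return logZ + lam @ mu

    lam = minimize(dual, x0=np.zeros(len(alphas)), method="Nelder-Mead").x
    pdf = np.exp(-(lam @ powers))
    pdf /= np.trapz(pdf, grid)

    # Failure-type probability, e.g. P(Y > y*), as a simple integral over the PDF
    y_star = 4.0
    print("P(Y > y*):", np.trapz(pdf[grid > y_star], grid[grid > y_star]))
    print("empirical:", (y > y_star).mean())
    ```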

  13. Optimal Tikhonov regularization for DEER spectroscopy

    NASA Astrophysics Data System (ADS)

    Edwards, Thomas H.; Stoll, Stefan

    2018-03-01

    Tikhonov regularization is the most commonly used method for extracting distance distributions from experimental double electron-electron resonance (DEER) spectroscopy data. This method requires the selection of a regularization parameter, α , and a regularization operator, L. We analyze the performance of a large set of α selection methods and several regularization operators, using a test set of over half a million synthetic noisy DEER traces. These are generated from distance distributions obtained from in silico double labeling of a protein crystal structure of T4 lysozyme with the spin label MTSSL. We compare the methods and operators based on their ability to recover the model distance distributions from the noisy time traces. The results indicate that several α selection methods perform quite well, among them the Akaike information criterion and the generalized cross validation method with either the first- or second-derivative operator. They perform significantly better than currently utilized L-curve methods.
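
    The ingredients being compared here, a regularization parameter α, a derivative operator L, and a data-driven selection rule, can be shown on a generic ill-posed problem. A minimal sketch with a second-derivative operator and generalized cross validation (one of the well-performing selection methods above); the smoothing kernel stands in for the DEER kernel, and all sizes are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 100
    # Ill-posed forward model: smoothing kernel K, smooth truth x, noisy data yobs
    s = np.linspace(0, 1, n)
    K = np.exp(-((s[:, None] - s[None, :]) ** 2) / (2 * 0.05**2))
    x_true = np.exp(-((s - 0.4) ** 2) / 0.01) + 0.5 * np.exp(-((s - 0.7) ** 2) / 0.005)
    yobs = K @ x_true + rng.normal(0, 0.05, n)

    # Second-derivative regularization operator L
    L = np.diff(np.eye(n), n=2, axis=0)

    def tikhonov(alpha):
        """Solve min ||Kx - y||^2 + alpha^2 ||Lx||^2 via a stacked least-squares system."""
        A = np.vstack([K, alpha * L])
        b = np.concatenate([yobs, np.zeros(L.shape[0])])
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x

    def gcv(alpha):
        """Generalized cross validation: residual norm over squared effective dof."""
        H = K @ np.linalg.solve(K.T @ K + alpha**2 * (L.T @ L), K.T)
        r = yobs - H @ yobs
        return (r @ r) / (n - np.trace(H)) ** 2

    alphas = np.logspace(-4, 1, 40)
    best = alphas[np.argmin([gcv(a) for a in alphas])]
    x_hat = tikhonov(best)
    print("GCV-selected alpha:", best)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    ```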

  14. [Biometric identification method for ECG based on the piecewise linear representation (PLR) and dynamic time warping (DTW)].

    PubMed

    Yang, Licai; Shen, Jun; Bao, Shudi; Wei, Shoushui

    2013-10-01

    To address the problem of balancing identification performance against algorithmic complexity, we proposed a piecewise linear representation and dynamic time warping (PLR-DTW) method for ECG biometric identification. First, after denoising preprocessing, R peaks were detected to extract the heartbeats. Then the PLR method was used to retain the important information of an ECG signal segment while reducing the data dimension. The improved DTW method was used for similarity measurements between the test data and the templates. The performance evaluation was carried out on two ECG databases: PTB and MIT-BIH. The analysis results showed that, compared with the discrete wavelet transform method, the proposed PLR-DTW method achieved an accuracy rate nearly 8% higher and saved about 30% of the operation time, demonstrating that the proposed method provides better performance.
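
    Dynamic time warping, the second ingredient of PLR-DTW, aligns two sequences of unequal length by minimizing the cumulative pointwise distance over monotone warping paths. A minimal dynamic-programming sketch of the classic algorithm, without the paper's improvements:

    ```python
    import numpy as np

    def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
        """Classic O(len(a)*len(b)) dynamic time warping distance."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # Two heartbeats sampled at slightly different rates still match closely
    t1 = np.linspace(0, 1, 80)
    t2 = np.linspace(0, 1, 100)
    beat1 = np.exp(-((t1 - 0.5) ** 2) / 0.002)          # template heartbeat
    beat2 = np.exp(-((t2 - 0.52) ** 2) / 0.002)         # test beat, slightly shifted
    print(f"DTW distance: {dtw_distance(beat1, beat2):.3f}")
    ```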

  15. Performance of toxicity probability interval based designs in contrast to the continual reassessment method

    PubMed Central

    Horton, Bethany Jablonski; Wages, Nolan A.; Conaway, Mark R.

    2016-01-01

    Toxicity probability interval designs have received increasing attention as a dose-finding method in recent years. In this study, we compared the two-stage, likelihood-based continual reassessment method (CRM), modified toxicity probability interval (mTPI), and the Bayesian optimal interval design (BOIN) in order to evaluate each method's performance in dose selection for Phase I trials. We use several summary measures to compare the performance of these methods, including percentage of correct selection (PCS) of the true maximum tolerable dose (MTD), allocation of patients to doses at and around the true MTD, and an accuracy index. This index is an efficiency measure that describes the entire distribution of MTD selection and patient allocation by taking into account the distance between the true probability of toxicity at each dose level and the target toxicity rate. The simulation study considered a broad range of toxicity curves and various sample sizes. When considering PCS, we found that CRM outperformed the two competing methods in most scenarios, followed by BOIN, then mTPI. We observed a similar trend when considering the accuracy index for dose allocation, where CRM most often outperformed both the mTPI and BOIN. These trends were more pronounced with increasing number of dose levels. PMID:27435150

  16. Evaluating the statistical performance of less applied algorithms in classification of worldview-3 imagery data in an urbanized landscape

    NASA Astrophysics Data System (ADS)

    Ranaie, Mehrdad; Soffianian, Alireza; Pourmanafi, Saeid; Mirghaffari, Noorollah; Tarkesh, Mostafa

    2018-03-01

    In the last decade, analyzing remotely sensed imagery has become one of the most common and widely used procedures in environmental studies, and supervised image classification techniques play a central role in it. Hence, using a high-resolution Worldview-3 image over a mixed urbanized landscape in Iran, three less applied image classification methods, including Bagged CART, the stochastic gradient boosting model, and a neural network with feature extraction, were tested and compared with two prevalent methods: random forest and the support vector machine with a linear kernel. To do so, each method was run ten times, and three validation techniques were used to estimate the accuracy statistics: cross validation, independent validation, and validation with the complete training data. Moreover, the statistical significance of differences between the classification methods was assessed using ANOVA and Tukey's test. In general, the results showed that random forest was the best-performing method, by a marginal difference over Bagged CART and the stochastic gradient boosting model, whilst based on independent validation there was no significant difference between the performances of the classification methods. It should finally be noted that the neural network with feature extraction and the linear support vector machine had better processing speed than the others.

  17. A closer look at cross-validation for assessing the accuracy of gene regulatory networks and models.

    PubMed

    Tabe-Bordbar, Shayan; Emad, Amin; Zhao, Sihai Dave; Sinha, Saurabh

    2018-04-26

    Cross-validation (CV) is a technique to assess the generalizability of a model to unseen data. This technique relies on assumptions that may not be satisfied when studying genomics datasets. For example, random CV (RCV) assumes that a randomly selected set of samples, the test set, well represents unseen data. This assumption doesn't hold true where samples are obtained from different experimental conditions, and the goal is to learn regulatory relationships among the genes that generalize beyond the observed conditions. In this study, we investigated how the CV procedure affects the assessment of supervised learning methods used to learn gene regulatory networks (or in other applications). We compared the performance of a regression-based method for gene expression prediction estimated using RCV with that estimated using a clustering-based CV (CCV) procedure. Our analysis illustrates that RCV can produce over-optimistic estimates of the model's generalizability compared to CCV. Next, we defined the 'distinctness' of test set from training set and showed that this measure is predictive of performance of the regression method. Finally, we introduced a simulated annealing method to construct partitions with gradually increasing distinctness and showed that performance of different gene expression prediction methods can be better evaluated using this method.
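
    Clustering-based CV can be implemented by clustering the samples and holding out whole clusters, so each test fold is distinct from the training conditions. A minimal sketch with scikit-learn, assuming KMeans clusters define the folds; the synthetic data and ridge model are illustrative stand-ins for the paper's expression data and regression method.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import GroupKFold, KFold, cross_val_score

    rng = np.random.default_rng(7)
    X = rng.normal(size=(300, 50))          # e.g., regulator expression levels
    X[:100] += 2.0                          # samples from a distinct condition
    y = X[:, :5].sum(axis=1) + rng.normal(0, 0.5, 300)   # target gene expression

    model = Ridge(alpha=1.0)

    # Random CV: folds mix conditions, so test sets resemble training data
    rcv = cross_val_score(model, X, y, cv=KFold(5, shuffle=True, random_state=0))

    # Clustering-based CV: each fold holds out entire clusters of similar samples
    clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
    ccv = cross_val_score(model, X, y, cv=GroupKFold(5), groups=clusters)

    print(f"RCV R^2: {rcv.mean():.2f}  CCV R^2: {ccv.mean():.2f}")  # RCV is typically higher
    ```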

  18. A comparative study of the effects of Ag2S films prepared by MPD and HRTD methods on the performance of polymer solar cells

    NASA Astrophysics Data System (ADS)

    Zhai, Yong; Li, Fumin; Ling, Lanyun; Chen, Chong

    2016-10-01

    In this work, Ag2S nanocrystalline thin films are deposited on ITO glass via the molecular precursor decomposition (MPD) method and the newly developed HRTD method for organic solar cells (ITO/Ag2S/P3HT:PCBM/MoO3/Au), serving as both an electron selective layer and a light absorption material. The surface morphology, structural characterization, and optical properties of the Ag2S films prepared by these two methods were compared, and the effect of the prepared Ag2S film on device performance was investigated. It is found that the Ag2S films prepared by the HRTD method have lower roughness and better uniformity than the corresponding films prepared by the MPD method. In addition, more effective and rapid transport of electrons and holes in the ITO/Ag2S(HRTD, n)/P3HT:PCBM/MoO3/Au cells is found, which reduces charge recombination and thus improves device performance. The highest efficiency of 3.21%, achieved for the ITO/Ag2S(HRTD, 50)/P3HT:PCBM/MoO3/Au cell, is 93% higher than that of the ITO/Ag2S(MPD, 2)/P3HT:PCBM/MoO3/Au cell.

  19. LUNGx Challenge for computerized lung nodule classification

    DOE PAGES

    Armato, Samuel G.; Drukker, Karen; Li, Feng; ...

    2016-12-19

    The purpose of this work is to describe the LUNGx Challenge for the computerized classification of lung nodules on diagnostic computed tomography (CT) scans as benign or malignant and report the performance of participants’ computerized methods along with that of six radiologists who participated in an observer study performing the same Challenge task on the same dataset. The Challenge provided sets of calibration and testing scans, established a performance assessment process, and created an infrastructure for case dissemination and result submission. We present ten groups that applied their own methods to 73 lung nodules (37 benign and 36 malignant) that were selected to achieve approximate size matching between the two cohorts. Area under the receiver operating characteristic curve (AUC) values for these methods ranged from 0.50 to 0.68; only three methods performed statistically better than random guessing. The radiologists’ AUC values ranged from 0.70 to 0.85; three radiologists performed statistically better than the best-performing computer method. The LUNGx Challenge compared the performance of computerized methods in the task of differentiating benign from malignant lung nodules on CT scans, placed in the context of the performance of radiologists on the same task. Lastly, the continued public availability of the Challenge cases will provide a valuable resource for the medical imaging research community.

  20. LUNGx Challenge for computerized lung nodule classification

    PubMed Central

    Armato, Samuel G.; Drukker, Karen; Li, Feng; Hadjiiski, Lubomir; Tourassi, Georgia D.; Engelmann, Roger M.; Giger, Maryellen L.; Redmond, George; Farahani, Keyvan; Kirby, Justin S.; Clarke, Laurence P.

    2016-01-01

    The purpose of this work is to describe the LUNGx Challenge for the computerized classification of lung nodules on diagnostic computed tomography (CT) scans as benign or malignant and report the performance of participants’ computerized methods along with that of six radiologists who participated in an observer study performing the same Challenge task on the same dataset. The Challenge provided sets of calibration and testing scans, established a performance assessment process, and created an infrastructure for case dissemination and result submission. Ten groups applied their own methods to 73 lung nodules (37 benign and 36 malignant) that were selected to achieve approximate size matching between the two cohorts. Area under the receiver operating characteristic curve (AUC) values for these methods ranged from 0.50 to 0.68; only three methods performed statistically better than random guessing. The radiologists’ AUC values ranged from 0.70 to 0.85; three radiologists performed statistically better than the best-performing computer method. The LUNGx Challenge compared the performance of computerized methods in the task of differentiating benign from malignant lung nodules on CT scans, placed in the context of the performance of radiologists on the same task. The continued public availability of the Challenge cases will provide a valuable resource for the medical imaging research community. PMID:28018939

  1. LUNGx Challenge for computerized lung nodule classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armato, Samuel G.; Drukker, Karen; Li, Feng

    The purpose of this work is to describe the LUNGx Challenge for the computerized classification of lung nodules on diagnostic computed tomography (CT) scans as benign or malignant and report the performance of participants’ computerized methods along with that of six radiologists who participated in an observer study performing the same Challenge task on the same dataset. The Challenge provided sets of calibration and testing scans, established a performance assessment process, and created an infrastructure for case dissemination and result submission. We present ten groups that applied their own methods to 73 lung nodules (37 benign and 36 malignant) that were selected to achieve approximate size matching between the two cohorts. Area under the receiver operating characteristic curve (AUC) values for these methods ranged from 0.50 to 0.68; only three methods performed statistically better than random guessing. The radiologists’ AUC values ranged from 0.70 to 0.85; three radiologists performed statistically better than the best-performing computer method. The LUNGx Challenge compared the performance of computerized methods in the task of differentiating benign from malignant lung nodules on CT scans, placed in the context of the performance of radiologists on the same task. Lastly, the continued public availability of the Challenge cases will provide a valuable resource for the medical imaging research community.

  2. Experimental evaluation of the Continuous Risk Profile (CRP) approach to the current Caltrans methodology for high collision concentration location identification

    DOT National Transportation Integrated Search

    2012-03-31

    This report evaluates the performance of the Continuous Risk Profile (CRP) compared with the Sliding Window Method (SWM) and Peak Searching (PS) methods. These three network screening methods all require the same inputs: traffic collision data and Sa...

  3. Experimental evaluation of the Continuous Risk Profile (CRP) approach to the current Caltrans methodology for high collision concentration location identification.

    DOT National Transportation Integrated Search

    2012-03-01

    This report evaluates the performance of the Continuous Risk Profile (CRP) compared with the Sliding Window Method (SWM) and Peak Searching (PS) methods. These three network screening methods all require the same inputs: traffic collision data and Sa...

  4. A Comparison of Performance versus Presentation Based Methods of Instructing Pre-service Teachers in Media Competencies.

    ERIC Educational Resources Information Center

    Mattox, Daniel V., Jr.

    Research compared conventional and experimental methods of instruction in a teacher education media course. The conventional method relied upon factual presentations to heterogeneous groups, while the experimental utilized homogeneous clusters of students and stressed individualized instruction. A pretest-posttest, experimental-control group…

  5. The method of educational assessment affects children's neural processing and performance: behavioural and fMRI Evidence

    NASA Astrophysics Data System (ADS)

    Howard, Steven J.; Burianová, Hana; Calleia, Alysha; Fynes-Clinton, Samuel; Kervin, Lisa; Bokosmaty, Sahar

    2017-08-01

    Standardised educational assessments are now widespread, yet their development has given comparatively more consideration to what to assess than how to optimally assess students' competencies. Existing evidence from behavioural studies with children and neuroscience studies with adults suggests that the method of assessment may affect neural processing and performance, but current evidence remains limited. To investigate the impact of assessment methods on neural processing and performance in young children, we used functional magnetic resonance imaging to identify and quantify the neural correlates during performance across a range of current approaches to standardised spelling assessment. Results indicated that children's test performance declined as the cognitive load of the assessment method increased. Activation of neural nodes associated with working memory further suggests that this performance decline may be a consequence of a higher cognitive load, rather than the complexity of the content. These findings provide insights into principles of assessment (re)design, to ensure assessment results are an accurate reflection of students' true levels of competency.

  6. A new multi-spectral feature level image fusion method for human interpretation

    NASA Astrophysics Data System (ADS)

    Leviner, Marom; Maltz, Masha

    2009-03-01

    Various methods to perform multi-spectral image fusion have been suggested, mostly at the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in a three-task experiment using MSSF against two established methods: averaging and principal components analysis (PCA), and against its two source bands, visible and infrared. The three tasks that we studied were: (1) simple target detection, (2) spatial orientation, and (3) camouflaged target detection. MSSF proved superior to the other fusion methods in all three tests; MSSF also outperformed the source images in the spatial orientation and camouflaged target detection tasks. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.
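
    Of the comparison methods above, PCA fusion is the easiest to reproduce: stack the registered source bands as variables, project each pixel onto the first principal component, and rescale for display. A minimal sketch for two registered images, with synthetic stand-ins for the visible and infrared bands:

    ```python
    import numpy as np

    def pca_fuse(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
        """Fuse two registered images by projecting onto the first principal component."""
        X = np.stack([img_a.ravel(), img_b.ravel()], axis=1).astype(float)
        X -= X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        fused = (X @ Vt[0]).reshape(img_a.shape)
        # Rescale to [0, 1] for display
        return (fused - fused.min()) / (fused.max() - fused.min())

    rng = np.random.default_rng(8)
    visible = rng.random((128, 128))
    infrared = 0.6 * visible + 0.4 * rng.random((128, 128))   # partially correlated band
    print(pca_fuse(visible, infrared).shape)
    ```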

  7. Performance recovery of a class of uncertain non-affine systems with unmodelled dynamics: an indirect dynamic inversion method

    NASA Astrophysics Data System (ADS)

    Yi, Bowen; Lin, Shuyi; Yang, Bo; Zhang, Weidong

    2018-02-01

    This paper presents an output feedback indirect dynamic inversion (IDI) approach for a class of uncertain non-affine systems with input unmodelled dynamics. Compared with previous approaches to achieving performance recovery, the proposed method deals with a broader class of nonaffine-in-control systems with triangular structure. An IDI state feedback law is designed first, in which less knowledge of the model plant is needed compared to earlier approximate dynamic inversion methods, thus yielding more robust performance. After that, an extended high-gain observer is designed to accomplish the task with output feedback. Finally, we prove that the designed IDI controller is equivalent to an adaptive proportional-integral (PI) controller, with respect to both time-response equivalence and robustness equivalence. The conclusion implies that for the studied strict-feedback non-affine systems with unmodelled dynamics, there always exists a PI controller that stabilises the system. The effectiveness and benefits of the designed approach are verified by three examples.

  8. High-performance parallel approaches for three-dimensional light detection and ranging point clouds gridding

    NASA Astrophysics Data System (ADS)

    Rizki, Permata Nur Miftahur; Lee, Heezin; Lee, Minsu; Oh, Sangyoon

    2017-01-01

    With the rapid advance of remote sensing technology, the amount of three-dimensional point-cloud data has increased extraordinarily, requiring faster processing in the construction of digital elevation models. There have been several attempts to accelerate the computation using parallel methods; however, little attention has been given to investigating different approaches for selecting the parallel programming model best suited to a given computing environment. We present our findings and insights identified by implementing three popular high-performance parallel approaches (message passing interface, MapReduce, and GPGPU) on time-demanding but accurate kriging interpolation. The performances of the approaches are compared by varying the size of the grid and input data. In our empirical experiment, we demonstrate the significant acceleration by all three approaches compared to a C-implemented sequential-processing method. In addition, we also discuss the pros and cons of each method in terms of usability, infrastructure complexity, and platform limitations to give readers a better understanding of utilizing those parallel approaches for gridding purposes.
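
    The tile-parallel idea behind all three approaches can be shown with the standard library alone. A minimal sketch that grids scattered points tile by tile with a process pool, using inverse-distance weighting as a simple stand-in for the kriging interpolator used in the paper; the tile size, resolution, and neighbour count are illustrative assumptions.

    ```python
    import numpy as np
    from multiprocessing import Pool

    rng = np.random.default_rng(9)
    pts = rng.uniform(0, 1000, (20000, 2))                   # LiDAR point locations
    z = np.sin(pts[:, 0] / 100) + 0.01 * pts[:, 1] + rng.normal(0, 0.05, len(pts))

    def grid_tile(args):
        """Interpolate one tile of the DEM with inverse-distance weighting."""
        x0, y0, size, res = args
        xs = np.arange(x0, x0 + size, res)
        ys = np.arange(y0, y0 + size, res)
        tile = np.empty((len(ys), len(xs)))
        for i, y_ in enumerate(ys):
            for j, x_ in enumerate(xs):
                d2 = (pts[:, 0] - x_) ** 2 + (pts[:, 1] - y_) ** 2
                near = np.argpartition(d2, 12)[:12]          # 12 nearest neighbours
                w = 1.0 / (d2[near] + 1e-9)
                tile[i, j] = (w * z[near]).sum() / w.sum()
        return (x0, y0, tile)

    if __name__ == "__main__":
        tiles = [(x0, y0, 250, 5.0) for x0 in range(0, 1000, 250)
                                    for y0 in range(0, 1000, 250)]
        with Pool(4) as pool:                                # one tile per worker task
            results = pool.map(grid_tile, tiles)
        print(len(results), "tiles gridded")
    ```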

  9. Effects of pre-education combined with a simulation for caring for children with croup on senior nursing students.

    PubMed

    Lee, Myung-Nam; Kang, Kyung-Ah; Park, Sun-Jung; Kim, Shin-Jeong

    2017-06-01

    Educational outcomes, such as knowledge, confidence in performance, ability in nursing practice, and satisfaction with learning methods in caring for children with croup, were compared between groups of students that received education through simulation combined with pre-education, simulation only, and pre-education only. In this quasi-experimental design, the educational intervention for the experimental group was the pre-education modality. Data from a convenience sample of 127 senior nursing students were drawn from three nursing schools in South Korea. There were significant differences in the mean scores of knowledge, confidence in performance, satisfaction with the learning method, and ability in nursing practice between the three groups. Pre-education with simulation significantly enhanced students' knowledge, confidence in performance, ability in nursing practice, and satisfaction with learning methods compared with pre-education or simulation alone. Simulation strategies should focus more on enhancing nursing students' learning outcomes. © 2017 John Wiley & Sons Australia, Ltd.

  10. Comparative study on the performance of textural image features for active contour segmentation.

    PubMed

    Moraru, Luminita; Moldovanu, Simona

    2012-07-01

    We present a computerized method for the semi-automatic detection of contours in ultrasound images. The novelty of our study is the introduction of a fast and efficient image function relating to parametric active contour models. This new function is a combination of the gray-level information and first-order statistical features, called standard deviation parameters. In a comprehensive study, the developed algorithm and the efficiency of segmentation were first tested for synthetic images. Tests were also performed on breast and liver ultrasound images. The proposed method was compared with the watershed approach to show its efficiency. The performance of the segmentation was estimated using the area error rate. Using the standard deviation textural feature and a 5×5 kernel, our curve evolution was able to produce results close to the minimal area error rate (namely 8.88% for breast images and 10.82% for liver images). The image resolution was evaluated using the contrast-to-gradient method. The experiments showed promising segmentation results.
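    The texture ingredient the study feeds into the contour model is a first-order standard-deviation map over a 5×5 kernel. A minimal sketch of that feature is below, assuming only NumPy/SciPy; it is not the authors' implementation, and the combination weight is illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(image, size=5):
    img = image.astype(np.float64)
    mean = uniform_filter(img, size)          # E[x] over the kernel
    mean_sq = uniform_filter(img ** 2, size)  # E[x^2] over the kernel
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

# The contour's image function can then mix gray level and texture, e.g.
# f = alpha * image + (1 - alpha) * local_std(image), with alpha a design choice.
```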

  11. Comparing machine learning and logistic regression methods for predicting hypertension using a combination of gene expression and next-generation sequencing data.

    PubMed

    Held, Elizabeth; Cape, Joshua; Tintle, Nathan

    2016-01-01

    Machine learning methods continue to show promise in the analysis of data from genetic association studies because of the high number of variables relative to the number of observations. However, few best practices exist for the application of these methods. We extend a recently proposed supervised machine learning approach for predicting disease risk by genotypes to be able to incorporate gene expression data and rare variants. We then apply 2 different versions of the approach (radial and linear support vector machines) to simulated data from Genetic Analysis Workshop 19 and compare performance to logistic regression. Method performance was not radically different across the 3 methods, although the linear support vector machine tended to show small gains in predictive ability relative to a radial support vector machine and logistic regression. Importantly, as the number of genes in the models was increased, even when those genes contained causal rare variants, model predictive ability showed a statistically significant decrease in performance for both the radial support vector machine and logistic regression. The linear support vector machine showed more robust performance to the inclusion of additional genes. Further work is needed to evaluate machine learning approaches on larger samples and to evaluate the relative improvement in model prediction from the incorporation of gene expression data.
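    The three compared classifiers are all standard scikit-learn estimators, so the comparison is compact to sketch; the synthetic genotype matrix below is a placeholder, not the Genetic Analysis Workshop 19 data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(200, 50)).astype(float)   # 0/1/2 genotype dosages
y = (X[:, 0] + X[:, 1] + rng.normal(0, 1, 200) > 2).astype(int)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "linear SVM": SVC(kernel="linear"),
    "radial SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()    # 5-fold CV accuracy
    print(f"{name}: {acc:.3f}")
```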

  12. Medical Image Segmentation by Combining Graph Cut and Oriented Active Appearance Models

    PubMed Central

    Chen, Xinjian; Udupa, Jayaram K.; Bağcı, Ulaş; Zhuge, Ying; Yao, Jianhua

    2017-01-01

    In this paper, we propose a novel 3D segmentation method based on the effective combination of the active appearance model (AAM), live wire (LW), and graph cut (GC). The proposed method consists of three main parts: model building, initialization, and segmentation. In the model building part, we construct the AAM and train the LW cost function and GC parameters. In the initialization part, a novel algorithm is proposed for improving the conventional AAM matching method, which effectively combines the AAM and LW methods, resulting in the Oriented AAM (OAAM). A multi-object strategy is utilized to help in object initialization. We employ a pseudo-3D initialization strategy and segment the organs slice by slice via the multi-object OAAM method. For the segmentation part, a 3D shape-constrained GC method is proposed. The object shape generated from the initialization step is integrated into the GC cost computation, and an iterative GC-OAAM method is used for object delineation. The proposed method was tested in segmenting the liver, kidneys, and spleen on a clinical CT dataset and also on the MICCAI 2007 liver segmentation grand challenge training dataset. The results show the following: (a) An overall segmentation accuracy of true positive volume fraction (TPVF) > 94.3% and false positive volume fraction (FPVF) < 0.2% can be achieved. (b) The initialization performance can be improved by combining the AAM and LW. (c) The multi-object strategy greatly facilitates the initialization. (d) Compared to the traditional 3D AAM method, the pseudo-3D OAAM method achieves comparable performance while running 12 times faster. (e) The performance of the proposed method is comparable to that of the state-of-the-art liver segmentation algorithms. The executable version of the 3D shape-constrained GC with a user interface can be downloaded from http://xinjianchen.wordpress.com/research/. PMID:22311862

  13. A New Pulse Pileup Rejection Method Based on Position Shift Identification

    NASA Astrophysics Data System (ADS)

    Gu, Z.; Prout, D. L.; Taschereau, R.; Bai, B.; Chatziioannou, A. F.

    2016-02-01

    Pulse pileup events degrade the signal-to-noise ratio (SNR) of nuclear medicine data. When such events occur in multiplexed detectors, they cause spatial misposition, energy spectrum distortion and degraded timing resolution, which leads to image artifacts. Pulse pileup is pronounced in PETbox4, a benchtop PET scanner dedicated to high-sensitivity and high-resolution imaging of mice. In that system, the combination of high absolute sensitivity, long scintillator decay time (BGO) and highly multiplexed electronics leads to a significant fraction of pulse pileup, which is reached at lower total activity than in comparable instruments. In this manuscript, a new pulse pileup rejection method named position shift rejection (PSR) is introduced. The performance of PSR is compared with a conventional leading edge rejection (LER) method and with no pileup rejection implemented (NoPR). A comprehensive digital pulse library was developed for objective evaluation and optimization of the PSR and LER, in which pulse waveforms were directly recorded from real measurements, exactly representing the signals to be processed. Physical measurements including singles event acquisition, peak system sensitivity and the NEMA NU-4 image quality phantom were also performed on the PETbox4 system to validate and compare the different pulse pileup rejection methods. The evaluation of both physical measurements and model pulse trains demonstrated that the new PSR identifies pileup events more accurately and avoids erroneous rejection of valid events. For the PETbox4 system, this improvement leads to a significant recovery of sensitivity at low count rates, amounting to about 1/4th of the expected true coincidence events, compared to the LER method. Furthermore, with the implementation of PSR, optimal image quality can be achieved near the peak noise equivalent count rate (NECR).

  14. Evaluation of alternative model selection criteria in the analysis of unimodal response curves using CART

    USGS Publications Warehouse

    Ribic, C.A.; Miller, T.W.

    1998-01-01

    We investigated CART performance with a unimodal response curve for one continuous response and four continuous explanatory variables, where two variables were important (i.e., directly related to the response) and the other two were not. We explored performance under three relationship strengths and two explanatory variable conditions: equal importance and one variable four times as important as the other. We compared CART variable selection performance using three tree-selection rules ('minimum risk', 'minimum risk complexity', 'one standard error') to stepwise polynomial ordinary least squares (OLS) under four sample size conditions. The one-standard-error and minimum-risk-complexity methods performed about as well as stepwise OLS with large sample sizes when the relationship was strong. With weaker relationships, equally important explanatory variables and larger sample sizes, the one-standard-error and minimum-risk-complexity rules performed better than stepwise OLS. With weaker relationships and explanatory variables of unequal importance, tree-structured methods did not perform as well as stepwise OLS. Comparing performance within tree-structured methods, the one-standard-error rule was more likely than the other tree-selection rules to choose the correct model: 1) with a strong relationship and equally important explanatory variables; 2) with weaker relationships and equally important explanatory variables; and 3) under all relationship strengths when explanatory variables were of unequal importance and sample sizes were lower.
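    For readers who want to reproduce the tree-selection step, the one-standard-error rule can be sketched with scikit-learn's cost-complexity pruning path; this is an analogous modern implementation, not the original CART software, and the Friedman data set is a stand-in for the simulated response curves.

```python
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

X, y = make_friedman1(n_samples=300, random_state=0)
path = DecisionTreeRegressor(random_state=0).cost_complexity_pruning_path(X, y)

scores = []
for alpha in path.ccp_alphas[:-1]:          # skip the root-only tree
    cv = cross_val_score(DecisionTreeRegressor(ccp_alpha=alpha, random_state=0),
                         X, y, cv=10, scoring="neg_mean_squared_error")
    scores.append((alpha, -cv.mean(), cv.std() / np.sqrt(len(cv))))

best_alpha, best_mse, best_se = min(scores, key=lambda s: s[1])
# 1-SE rule: the most-pruned tree whose CV error is within one SE of the best
chosen = max(a for a, mse, _ in scores if mse <= best_mse + best_se)
```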

  15. Gene length corrected trimmed mean of M-values (GeTMM) processing of RNA-seq data performs similarly in intersample analyses while improving intrasample comparisons.

    PubMed

    Smid, Marcel; Coebergh van den Braak, Robert R J; van de Werken, Harmen J G; van Riet, Job; van Galen, Anne; de Weerd, Vanja; van der Vlugt-Daane, Michelle; Bril, Sandra I; Lalmahomed, Zarina S; Kloosterman, Wigard P; Wilting, Saskia M; Foekens, John A; IJzermans, Jan N M; Martens, John W M; Sieuwerts, Anieta M

    2018-06-22

    Current normalization methods for RNA-sequencing data allow either for intersample comparison to identify differentially expressed (DE) genes or for intrasample comparison for the discovery and validation of gene signatures. Most studies on optimization of normalization methods typically use simulated data to validate methodologies. We describe a new method, GeTMM, which allows for both inter- and intrasample analyses with the same normalized data set. We used actual (i.e., not simulated) RNA-seq data from 263 colon cancers (no biological replicates) and used the same read count data to compare GeTMM with the most commonly used normalization methods (i.e., TMM (used by edgeR), RLE (used by DESeq2) and TPM) with respect to distributions, effect of RNA quality, subtype classification, recurrence score, recall of DE genes and correlation to RT-qPCR data. We observed a clear benefit for GeTMM and TPM with regard to intrasample comparison, while GeTMM performed similarly to TMM- and RLE-normalized data in intersample comparisons. Regarding DE genes, recall was found comparable among the normalization methods, while GeTMM showed the lowest number of false-positive DE genes. Remarkably, we observed limited detrimental effects in samples with low RNA quality. We show that GeTMM outperforms established methods with regard to intrasample comparison while performing equivalently with regard to intersample normalization using the same normalized data. These combined properties enhance not only the general usefulness of RNA-seq but also its comparability to the many array-based gene expression data in the public domain.
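    The core of GeTMM is a gene-length correction followed by TMM-style scaling. The sketch below is a deliberately simplified illustration, assuming a counts matrix and gene lengths in kilobases; edgeR's full TMM additionally trims on A-values and applies precision weights, which are omitted here.

```python
import numpy as np
import pandas as pd
from scipy.stats import trim_mean

def getmm(counts: pd.DataFrame, length_kb: pd.Series) -> pd.DataFrame:
    rpk = counts.div(length_kb, axis=0)            # reads per kilobase
    frac = rpk / rpk.sum(axis=0)                   # per-sample proportions
    ref = frac.iloc[:, 0]                          # first sample as reference
    factors = {}
    for s in frac.columns:
        ok = (frac[s] > 0) & (ref > 0)
        m = np.log2(frac.loc[ok, s] / ref[ok])     # M-values vs. reference
        factors[s] = 2 ** trim_mean(m, 0.3)        # trimmed-mean scaling factor
    factors = pd.Series(factors)
    factors /= np.exp(np.mean(np.log(factors)))    # center factors around 1
    return rpk / (rpk.sum(axis=0) * factors) * 1e6 # normalized, CPM-like scale
```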

  16. Leaching of indium from obsolete liquid crystal displays: Comparing grinding with electrical disintegration in context of LCA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dodbiba, Gjergj, E-mail: dodbiba@sys.t.u-tokyo.ac.jp; Nagai, Hiroki; Wang Lipang

    2012-10-15

    Highlights: • Two pre-treatment methods, prior to leaching of indium from obsolete LCD modules, were described. • Conventional grinding and electrical disintegration were evaluated and compared in the context of LCA. • Experimental data on the leaching capacity for indium and the electricity consumption of equipment were input into the LCA model in order to compare the environmental performance of each method. • An estimate for the environmental performance was calculated as the sum of six impact categories. • The electrical disintegration method outperforms conventional grinding in all impact categories. - Abstract: In order to develop an effective recycling system for obsolete Liquid Crystal Displays (LCDs), which would enable both the leaching of indium (In) and the recovery of a pure glass fraction for recycling, an effective liberation or size-reduction method would be an important pre-treatment step. Therefore, in this study, two different types of liberation methods, (1) conventional grinding and (2) electrical disintegration, have been tested and evaluated in the context of Life Cycle Assessment (LCA). In other words, the above-mentioned methods were compared in order to find the one that ensures the highest leaching capacity for indium, as well as the lowest environmental burden. One of the main findings of this study was that electrical disintegration was the most effective liberation method, since it fully liberated the indium-containing layer, ensuring a leaching capacity of 968.5 mg-In/kg-LCD. In turn, the estimate for the environmental burden was approximately five times smaller when compared with conventional grinding.

  17. The Examination of the Classification of Students into Performance Categories by Two Different Equating Methods

    ERIC Educational Resources Information Center

    Keller, Lisa A.; Keller, Robert R.; Parker, Pauline A.

    2011-01-01

    This study investigates the comparability of two item response theory based equating methods: true score equating (TSE), and estimated true equating (ETE). Additionally, six scaling methods were implemented within each equating method: mean-sigma, mean-mean, two versions of fixed common item parameter, Stocking and Lord, and Haebara. Empirical…
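    The abstract is truncated, but the mean-sigma scaling named among the six methods has a compact closed form, sketched here for the item difficulty (b) parameters only; the arrays are illustrative, not the study's item estimates.

```python
import numpy as np

def mean_sigma(b_common_x, b_common_y):
    # Linear transformation placing Form X parameters on the Form Y scale
    A = np.std(b_common_y, ddof=1) / np.std(b_common_x, ddof=1)
    B = np.mean(b_common_y) - A * np.mean(b_common_x)
    return A, B   # transformed: b* = A * b + B, and a* = a / A for discriminations

A, B = mean_sigma(np.array([-1.2, 0.1, 0.8]), np.array([-1.0, 0.3, 1.1]))
```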

  18. Model-based inference for small area estimation with sampling weights

    PubMed Central

    Vandendijck, Y.; Faes, C.; Kirby, R.S.; Lawson, A.; Hens, N.

    2017-01-01

    Obtaining reliable estimates about health outcomes for areas or domains where few or no samples are available is the goal of small area estimation (SAE). Often, we rely on health surveys to obtain information about health outcomes. Such surveys are often characterised by a complex design, with stratification and unequal sampling weights as common features. Hierarchical Bayesian models are well recognised in SAE as a spatial smoothing method, but often ignore the sampling weights that reflect the complex sampling design. In this paper, we focus on data obtained from a health survey where the sampling weights of the sampled individuals are the only information available about the design. We develop a predictive model-based approach to estimate the prevalence of a binary outcome for both the sampled and non-sampled individuals, using hierarchical Bayesian models that take into account the sampling weights. A simulation study is carried out to compare the performance of our proposed method with other established methods. The results indicate that our proposed method achieves great reductions in mean squared error when compared with standard approaches. It performs equally well or better when compared with more elaborate methods when there is a relationship between the responses and the sampling weights. The proposed method is applied to estimate asthma prevalence across districts. PMID:28989860

  19. Morphological observation and analysis using automated image cytometry for the comparison of trypan blue and fluorescence-based viability detection method.

    PubMed

    Chan, Leo Li-Ying; Kuksin, Dmitry; Laverty, Daniel J; Saldi, Stephanie; Qiu, Jean

    2015-05-01

    The ability to accurately determine cell viability is essential to performing a well-controlled biological experiment. Typical experiments range from standard cell culturing to advanced cell-based assays that may require cell viability measurement for downstream experiments. The traditional cell viability measurement method has been the trypan blue (TB) exclusion assay. However, since the introduction of fluorescence-based dyes for cell viability measurement using flow or image-based cytometry systems, there have been numerous publications comparing the two detection methods. Although previous studies have shown discrepancies between TB exclusion and fluorescence-based viability measurements, image-based morphological analysis was not performed in order to examine the viability discrepancies. In this work, we compared TB exclusion and fluorescence-based viability detection methods using image cytometry to observe morphological changes due to the effect of TB on dead cells. Imaging results showed that as the viability of a naturally-dying Jurkat cell sample decreased below 70%, many TB-stained cells began to exhibit non-uniform morphological characteristics. Dead cells with these characteristics may be difficult to count under light microscopy, thus generating an artificially higher viability measurement compared to the fluorescence-based method. These morphological observations can potentially explain the differences in viability measurement between the two methods.

  20. Comparative study of the ''Misgav Ladach'' and traditional Pfannenstiel surgical techniques for cesarean section.

    PubMed

    Belci, D; Kos, M; Zoricić, D; Kuharić, L; Slivar, A; Begić-Razem, E; Grdinić, I

    2007-06-01

    The aim of this study was to evaluate the advantages of the Misgav Ladach surgical technique compared to the traditional cesarean section. A prospective randomized trial of 111 women undergoing cesarean section was carried out in the Pula General Hospital. Forty-nine operations were performed using the Pfannenstiel method of cesarean section, 55 by the Misgav Ladach method and 7 by lower midline laparotomy. Compared with the Pfannenstiel method, the Misgav Ladach method showed a significantly shorter delivery/extraction and operative time (P=0.0009); incision pain on the second postoperative day was significantly lower (P=0.021); patients stood up and walked sooner (P=0.013); significantly fewer analgesic injections and a shorter duration of analgesia were required (P=0.0009); and bowel function was restored to normal sooner (P=0.001). The Misgav Ladach method of cesarean section has advantages over the Pfannenstiel method in so far as it is significantly quicker to perform, with diminished postoperative pain and less use of postoperative analgesics. The recovery of physiologic function is faster. No differences were found in intraoperative bleeding, maternal morbidity, scar appearance, uterine postoperative involution or the assessment of the inflammatory response to the operative technique.

  1. Usefulness of warm water and oil assistance in colonoscopy by trainees.

    PubMed

    Park, Sung Chul; Keum, Bora; Kim, Eun Sun; Jung, Eun Suk; Lee, Sehe Dong; Park, Sanghoon; Seo, Yeon Seok; Kim, Yong Sik; Jeen, Yoon Tae; Chun, Hoon Jai; Um, Soon Ho; Kim, Chang Duck; Ryu, Ho Sang

    2010-10-01

    Success rate of cecal intubation, endoscopist's difficulty, and procedure-related patient pain are still problems for beginners performing colonoscopy. New methods to aid colonoscopic insertion, such as warm water instillation and oil lubrication, have been proposed. The aim of this study was to evaluate the feasibility of using warm water or oil in colonoscopy. Colonoscopy was performed in 117 unsedated patients by three endoscopists-in-training. Patients were randomly allocated to three groups, using a conventional method with administration of antispasmodics, warm water instillation, and oil lubrication, respectively. Success rate of total intubation within the time limit (15 min), cecal intubation time, degree of endoscopist's difficulty, and level of patient discomfort were compared among the three groups. Cecal intubation time was shorter in the warm water group than in the conventional and oil groups. Degree of procedural difficulty was lower in the warm water group, and patient pain score was higher in the oil lubrication group, compared with the other groups. However, there was no significant difference in success rate of intubation within the time limit among the three groups. The warm water method is a simple, safe, and feasible method for beginners. Oil lubrication may not be a useful method compared with the conventional and warm water methods.

  2. A Comparative Survey of Methods for Remote Heart Rate Detection From Frontal Face Videos

    PubMed Central

    Wang, Chen; Pun, Thierry; Chanel, Guillaume

    2018-01-01

    Remotely measuring physiological activity can provide substantial benefits for both medical and affective computing applications. Recent research has proposed different methodologies for the unobtrusive detection of heart rate (HR) using human face recordings. These methods are based on subtle color changes or motions of the face due to cardiovascular activity, which are invisible to human eyes but can be captured by digital cameras. Several approaches have been proposed, including signal processing and machine learning. However, these methods have been evaluated on different datasets, so there is no consensus on their relative performance. In this article, we describe and evaluate several methods defined in the literature, from 2008 to the present day, for the remote detection of HR using human face recordings. The general HR processing pipeline is divided into three stages: face video processing, face blood volume pulse (BVP) signal extraction, and HR computation. Approaches presented in the paper are classified and grouped according to each stage. At each stage, algorithms are analyzed and compared based on their performance using the public database MAHNOB-HCI. The results reported in this article are limited to the MAHNOB-HCI dataset. Results show that the extracted face skin area contains more BVP information. Blind source separation and peak detection methods are more robust to head motions when estimating HR. PMID:29765940
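    A minimal sketch of the last two pipeline stages, assuming the face-video stage has already produced a mean green-channel trace of the skin region; the band limits (0.7-4 Hz, i.e., 42-240 bpm) are typical choices in this literature, not values prescribed by the article.

```python
import numpy as np
from scipy.signal import butter, detrend, filtfilt

def estimate_hr(green_trace, fs):
    bvp = detrend(green_trace)                     # remove slow illumination drift
    b, a = butter(3, [0.7 / (fs / 2), 4.0 / (fs / 2)], btype="band")
    bvp = filtfilt(b, a, bvp)                      # keep the plausible HR band
    freqs = np.fft.rfftfreq(len(bvp), 1 / fs)
    spectrum = np.abs(np.fft.rfft(bvp))
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]   # HR in bpm

fs = 30.0
t = np.arange(0, 30, 1 / fs)
trace = 0.01 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.005, t.size)
print(estimate_hr(trace, fs))   # ~72 bpm for the synthetic 1.2 Hz pulse
```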

  3. Assessing Interval Estimation Methods for Hill Model ...

    EPA Pesticide Factsheets

    The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maximum likelihood are commonly used in high-throughput risk assessment, but such estimates typically fail to include reliable information concerning confidence in (or precision of) the estimates. To address this issue, we examined methods for assessing uncertainty in Hill model parameter estimates derived from concentration-response data. In particular, using a sample of ToxCast concentration-response data sets, we applied four methods for obtaining interval estimates that are based on asymptotic theory, bootstrapping (two varieties), and Bayesian parameter estimation, and then compared the results. These interval estimation methods generally did not agree, so we devised a simulation study to assess their relative performance. We generated simulated data by constructing four statistical error models capable of producing concentration-response data sets comparable to those observed in ToxCast. We then applied the four interval estimation methods to the simulated data and compared the actual coverage of the interval estimates to the nominal coverage (e.g., 95%) in order to quantify the performance of each of the methods in a variety of cases (i.e., different values of the true Hill model parameters).
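    As a concrete illustration of one of the four interval methods, a case-resampling (pairs) bootstrap for a three-parameter Hill fit can be sketched as follows; the data are synthetic and the parameterisation is an assumption, since the factsheet text is truncated.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, top, ec50, n):
    return top * c**n / (ec50**n + c**n)

rng = np.random.default_rng(0)
conc = np.tile(np.logspace(-2, 2, 8), 3)                 # 8 concentrations x 3 replicates
resp = hill(conc, 100.0, 1.0, 1.5) + rng.normal(0, 5, conc.size)

p0 = (resp.max(), np.median(conc), 1.0)
boot = []
for _ in range(500):                                     # percentile bootstrap
    idx = rng.integers(0, conc.size, conc.size)          # resample (c, y) pairs
    try:
        p, _ = curve_fit(hill, conc[idx], resp[idx], p0=p0, maxfev=5000)
        boot.append(p)
    except RuntimeError:
        continue                                         # skip non-converged fits
lo, hi = np.percentile(np.array(boot), [2.5, 97.5], axis=0)   # CIs for top, ec50, n
```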

  4. Comparative evaluation of concrete bridge deck sealers.

    DOT National Transportation Integrated Search

    2015-08-01

    The main objective of this research was to compare the performance of five bridge deck sealer products using a synthesis of two testing methods: NCHRP Report 244 Series II tests and standards developed by the Alberta Ministry of Transportation (B...

  5. Training Individuals in Army Units: Comparative Effectiveness of Selected TEC Lessons and Conventional Methods

    DTIC Science & Technology

    1975-12-01

    A test for each course was given to all participants after the training to measure the comparative effectiveness of each... tested in all five courses; National Guardsmen were trained and tested in the four infantry courses. For each course, participants were randomly divided... performance and higher than the BL group in the LAW and M16A1 Rifle courses. The performance tests for the

  6. Digital phase-lock loop

    NASA Technical Reports Server (NTRS)

    Thomas, Jr., Jess B. (Inventor)

    1991-01-01

    An improved digital phase lock loop incorporates several distinctive features that attain better performance at high loop gain and better phase accuracy. These features include: phase feedback to a number-controlled oscillator in addition to phase rate; analytical tracking of phase (both integer and fractional cycles); an amplitude-insensitive phase extractor; a more accurate method for extracting measured phase; a method for changing loop gain during a track without loss of lock; and a method for avoiding loss of sampled data during computation delay, while maintaining excellent tracking performance. The advantages of using phase and phase-rate feedback are demonstrated by comparing performance with that of rate-only feedback. Extraction of phase by the method of modeling provides accurate phase measurements even when the number-controlled oscillator phase is discontinuously updated.
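    A toy discrete-time sketch of the phase-plus-rate feedback idea: the phase error drives both the NCO rate (as in a rate-only loop) and the NCO phase directly. The gains and signal model are illustrative, not from the patent.

```python
import numpy as np

fs, f_in = 1000.0, 50.0
kp, ki = 0.3, 0.05                      # phase and phase-rate loop gains
nco_phase, nco_rate = 0.0, 45.0 / fs    # cycles and cycles/sample, deliberately off
for k in range(2000):
    true_phase = f_in * k / fs
    err = (true_phase - nco_phase + 0.5) % 1.0 - 0.5   # wrapped phase error, cycles
    nco_rate += ki * err                # rate feedback (a rate-only loop stops here)
    nco_phase += nco_rate + kp * err    # plus direct phase feedback
print(nco_rate * fs)                    # converges near the 50 Hz input
```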

  7. A comparative study of new and current methods for dental micro-CT image denoising

    PubMed Central

    Lashgari, Mojtaba; Qin, Jie; Swain, Michael

    2016-01-01

    Objectives: The aim of the current study was to evaluate the application of two advanced noise-reduction algorithms for dental micro-CT images and to implement a comparative analysis of the performance of new and current denoising algorithms. Methods: Denoising was performed using Gaussian and median filters as the current filtering approaches and the block-matching and three-dimensional (BM3D) method and total variation method as the proposed new filtering techniques. The performance of the denoising methods was evaluated quantitatively using contrast-to-noise ratio (CNR), edge preserving index (EPI) and blurring indexes, as well as qualitatively using the double-stimulus continuous quality scale procedure. Results: The BM3D method had the best performance with regard to preservation of fine textural features (CNR-Edge), non-blurring of the whole image (blurring index), the clinical visual score in images with very fine features and the overall visual score for all types of images. On the other hand, the total variation method provided the best results with regard to smoothing of images in texture-free areas (CNR-Tex-free) and in preserving the edges and borders of image features (EPI). Conclusions: The BM3D method is the most reliable technique for denoising dental micro-CT images with very fine textural details, such as shallow enamel lesions, in which the preservation of the texture and fine features is of the greatest importance. On the other hand, the total variation method is the technique of choice for denoising images without very fine textural details in which the clinician or researcher is interested mainly in anatomical features and structural measurements. PMID:26764583
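    Of the compared methods, the total variation and baseline filters are available in standard Python packages (BM3D requires a separate library and is omitted); a sketch with a simple CNR measure on a synthetic phantom follows, with all ROIs and parameters illustrative rather than the study's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter
from skimage.restoration import denoise_tv_chambolle

def cnr(img, roi_a, roi_b):
    a, b = img[roi_a], img[roi_b]
    return abs(a.mean() - b.mean()) / np.sqrt(a.var() + b.var())

# Synthetic phantom: bright square on a dark background plus Gaussian noise.
base = np.zeros((128, 128))
base[40:88, 40:88] = 1.0
noisy = base + np.random.default_rng(0).normal(0, 0.3, base.shape)
roi_in = (slice(50, 78), slice(50, 78))    # inside the square
roi_bg = (slice(0, 28), slice(0, 28))      # background corner

for name, img in {
    "gaussian": gaussian_filter(noisy, sigma=1.0),
    "median": median_filter(noisy, size=3),
    "tv": denoise_tv_chambolle(noisy, weight=0.1),
}.items():
    print(f"{name}: CNR = {cnr(img, roi_in, roi_bg):.2f}")
```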

  8. A comparison of models for estimating potential evapotranspiration for Florida land cover types

    USGS Publications Warehouse

    Douglas, E.M.; Jacobs, J.M.; Sumner, D.M.; Ray, R.L.

    2009-01-01

    We analyzed observed daily evapotranspiration (DET) at 18 sites having measured DET and ancillary climate data and then used these data to compare the performance of three common methods for estimating potential evapotranspiration (PET): the Turc method (Tc), the Priestley-Taylor method (PT) and the Penman-Monteith method (PM). The sites were distributed throughout the State of Florida and represent a variety of land cover types: open water (3), marshland (4), grassland/pasture (4), citrus (2) and forest (5). Not surprisingly, the highest DET values occurred at the open water sites, ranging from an average of 3.3 mm d⁻¹ in the winter to 5.3 mm d⁻¹ in the spring. DET at the marsh sites was also high, ranging from 2.7 mm d⁻¹ in winter to 4.4 mm d⁻¹ in summer. The lowest DET occurred in the winter and fall seasons at the grass sites (1.3 mm d⁻¹ and 2.0 mm d⁻¹, respectively) and at the forested sites (1.8 mm d⁻¹ and 2.3 mm d⁻¹, respectively). The performance of the three methods when applied to conditions close to PET (Bowen ratio ≤ 1) was used to judge relative merit. Under such PET conditions, annually aggregated Tc and PT methods perform comparably and outperform the PM method, possibly due to the sensitivity of the PM method to the limited transferability of previously determined model parameters. At a daily scale, the PT performance appears to be superior to the other two methods for estimating PET for a variety of land covers in Florida. © 2009 Elsevier B.V.
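    For reference, the daily Priestley-Taylor estimate has a compact form; the sketch below uses FAO-56-style constants and is a generic implementation, not the exact parameterisation of the paper.

```python
import numpy as np

def priestley_taylor(t_air, rn_minus_g, alpha=1.26):
    """PET (mm/day) from air temperature (C) and available energy (MJ m-2 d-1)."""
    es = 0.6108 * np.exp(17.27 * t_air / (t_air + 237.3))     # sat. vapor pressure, kPa
    delta = 4098.0 * es / (t_air + 237.3) ** 2                # slope of es curve, kPa/C
    gamma = 0.0665                                            # psychrometric const., kPa/C
    lam = 2.45                                                # latent heat, MJ/kg
    return alpha * delta / (delta + gamma) * rn_minus_g / lam

print(priestley_taylor(25.0, 12.0))   # roughly 4.6 mm/day, a plausible wetland value
```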

  9. Semi-supervised vibration-based classification and condition monitoring of compressors

    NASA Astrophysics Data System (ADS)

    Potočnik, Primož; Govekar, Edvard

    2017-09-01

    Semi-supervised vibration-based classification and condition monitoring of the reciprocating compressors installed in refrigeration appliances is proposed in this paper. The method addresses the problem of industrial condition monitoring where prior class definitions are often not available or difficult to obtain from local experts. The proposed method combines feature extraction, principal component analysis, and statistical analysis for the extraction of initial class representatives, and compares the capability of various classification methods, including discriminant analysis (DA), neural networks (NN), support vector machines (SVM), and extreme learning machines (ELM). The use of the method is demonstrated on a case study which was based on industrially acquired vibration measurements of reciprocating compressors during the production of refrigeration appliances. The paper presents a comparative qualitative analysis of the applied classifiers, confirming the good performance of several nonlinear classifiers. If the model parameters are properly selected, then very good classification performance can be obtained from NN trained by Bayesian regularization, SVM and ELM classifiers. The method can be effectively applied for the industrial condition monitoring of compressors.

  10. Comparison of alternatives to amplitude thresholding for onset detection of acoustic emission signals

    NASA Astrophysics Data System (ADS)

    Bai, F.; Gagar, D.; Foote, P.; Zhao, Y.

    2017-02-01

    Acoustic Emission (AE) monitoring can be used to detect the presence of damage as well as determine its location in Structural Health Monitoring (SHM) applications. Information on the time difference of the signal generated by the damage event arriving at different sensors in an array is essential in performing localisation. Currently, this is determined using a fixed threshold, which is particularly prone to errors when not set to optimal values. This paper presents three new methods for determining the onset of AE signals without the need for a predetermined threshold. The performance of the techniques is evaluated using AE signals generated during fatigue crack growth and compared to the established Akaike Information Criterion (AIC) and fixed threshold methods. It was found that the 1D location accuracy of the new methods was within the range of <1-7.1% of the monitored region, compared to 2.7% for the AIC method and a range of 1.8-9.4% for the conventional fixed threshold method at different threshold levels.
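    The established AIC baseline that the new methods are compared against is short enough to sketch directly (Maeda's formulation, computed on a window containing the onset); the synthetic burst below is illustrative.

```python
import numpy as np

def aic_onset(x):
    n = len(x)
    aic = np.full(n, np.inf)
    for k in range(1, n - 1):
        v1, v2 = np.var(x[:k]), np.var(x[k:])
        if v1 > 0 and v2 > 0:
            aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
    return int(np.argmin(aic))          # sample index of the estimated onset

rng = np.random.default_rng(0)
noise = rng.normal(0, 0.01, 500)
burst = np.sin(0.3 * np.arange(500)) * np.exp(-np.arange(500) / 150)
print(aic_onset(np.concatenate([noise, burst])))   # close to the true onset at 500
```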

  11. A Low-Storage-Consumption XML Labeling Method for Efficient Structural Information Extraction

    NASA Astrophysics Data System (ADS)

    Liang, Wenxin; Takahashi, Akihiro; Yokota, Haruo

    Recently, labeling methods that extract and reconstruct the structural information of XML data, which is important for many applications such as XPath queries and keyword search, have become more attractive. To achieve efficient structural information extraction, in this paper we propose the C-DO-VLEI code, a novel update-friendly bit-vector encoding scheme based on register-length bit operations combined with the properties of Dewey Order numbers, which cannot be implemented in other relevant existing schemes such as ORDPATH. Meanwhile, the proposed method also achieves lower storage consumption because it requires neither a prefix schema nor any reserved codes for node insertion. We performed experiments to evaluate and compare the performance and storage consumption of the proposed method with those of the ORDPATH method. Experimental results show that the execution times for extracting depth information and parent node labels using the C-DO-VLEI code are about 25% and 15% less, respectively, and the average label size using the C-DO-VLEI code is about 24% smaller, compared with ORDPATH.

  12. Effects of two educational method of lecturing and role playing on knowledge and performance of high school students in first aid at emergency scene

    PubMed Central

    Hassanzadeh, Akbar; Vasili, Arezu; Zare, Zahra

    2010-01-01

    BACKGROUND: This study aimed to investigate the effects of two educational methods on students' knowledge and performance regarding first aid at emergency scenes. METHODS: In this semi-experimental study, the sample was selected randomly among male and female public high school students of Isfahan. Each group included 60 students. First, the knowledge and performance of students in first aid at the emergency scene were assessed using a researcher-made questionnaire. Then the necessary education was provided to the students within 10 two-hour sessions by lecturing and role playing. The students' knowledge and performance were assessed again and the results were compared. RESULTS: There was no significant difference between the two groups in the frequency distribution of students' age and major, or in knowledge and performance before the educational course. The knowledge scores for performing CPR, proper bandaging, immobilizing the injured area, and proper ways of carrying an injured person increased significantly in both groups after the education. Moreover, performance in proper bandaging, immobilizing the injured area and carrying an injured person was significantly higher in the role-playing group than in the lecturing group after the course. CONCLUSIONS: Iran is a developing country with a young population and a high risk of natural disasters; providing the necessary education with more effective methods can therefore reduce mortality and morbidity due to lack of first aid care in crucial moments. Training with role playing is suggested for this purpose. PMID:21589743

  13. Fast Boundary Element Method for acoustics with the Sparse Cardinal Sine Decomposition

    NASA Astrophysics Data System (ADS)

    Alouges, François; Aussal, Matthieu; Parolin, Emile

    2017-07-01

    This paper presents the newly proposed Sparse Cardinal Sine Decomposition method, which allows fast convolution on unstructured grids. We focus on its use when coupled with finite element techniques to solve acoustic problems with the (compressed) Boundary Element Method. In addition, we compare the computational performance of two equivalent Matlab® and Python implementations of the method. We show validation test cases in order to assess the precision of the approach. Finally, the performance of the method is illustrated by the computation of the acoustic target strength of a realistic submarine from the Benchmark Target Strength Simulation international workshop.

  14. A comparative analysis of clinical outcomes and disposable costs of different catheter ablation methods for the treatment of atrioventricular nodal reentrant tachycardia

    PubMed Central

    Berman, Adam E; Rivner, Harold; Chalkley, Robin; Heboyan, Vahé

    2017-01-01

    Background Catheter ablation of atrioventricular nodal reentrant tachycardia (AVNRT) is a commonly performed electrophysiology (EP) procedure. Few data exist comparing conventional (CONV) versus novel ablation strategies from both clinical and direct cost perspectives. We sought to investigate the disposable costs and clinical outcomes associated with three different ablation methodologies used in the ablation of AVNRT. Methods We performed a retrospective review of AVNRT ablations performed at Augusta University Medical Center from 2006 to 2014. A total of 183 patients were identified. Three different ablation techniques were compared: CONV manual radiofrequency (RF) (n=60), remote magnetic navigation (RMN)-guided RF (n=67), and cryoablation (CRYO) (n=56). Results Baseline demographics did not differ between the three groups except for a higher prevalence of cardiomyopathy in the RMN group (p<0.01). The clinical end point of interest was recurrent AVNRT following the index ablation procedure. A significantly higher number of recurrent AVNRT cases occurred in the CRYO group as compared to CONV and RMN (p=0.003; OR =7.75) groups. Cost-benefit analysis showed both CONV and RMN to be dominant compared to CRYO. Cost-minimization analysis demonstrated the least expensive ablation method to be CONV (mean disposable catheter cost = CONV US$2340; CRYO US$3515; RMN US$5190). Despite comparable clinical outcomes, the incremental cost of RMN over CONV averaged US$3094 per procedure. Conclusion AVNRT ablation using either CONV or RMN techniques is equally effective and associated with lower AVNRT recurrence rates than CRYO. CONV ablation carries significant disposable cost savings as compared to RMN, despite similar efficacy. PMID:29138585

  15. A comparative study on effect of e-learning and instructor-led methods on nurses’ documentation competency

    PubMed Central

    Abbaszadeh, Abbas; Sabeghi, Hakimeh; Borhani, Fariba; Heydari, Abbas

    2011-01-01

    BACKGROUND: Accurate recording of nursing care reflects the care performed and its quality, so any failure in documentation can be a reason for inadequate patient care. Therefore, improving nurses' skills in this field using effective educational methods is of high importance. Since traditional teaching methods are not suitable for communities with rapid knowledge expansion and constant changes, e-learning methods can be a viable alternative. To show the importance of e-learning methods for nurses' care reporting skills, this study was performed to compare e-learning with the traditional instructor-led method. METHODS: This was a quasi-experimental study aimed at comparing the effect of two teaching methods (e-learning and lecture) on nursing documentation and examining the differences in acquiring competency in documentation between nurses who participated in the e-learning group (n = 30) and nurses in a lecture group (n = 31). RESULTS: Statistically, there was no significant difference between the two groups. The findings also revealed no statistically significant correlation between the two groups with respect to demographic variables. However, given the benefits of e-learning over the traditional instructor-led method and their equal effect on nurses' documentation competency, we believe e-learning can be a qualified substitute for the instructor-led method. CONCLUSIONS: E-learning as a student-centered method and the lecture method equally promote nurses' competency in documentation. Therefore, e-learning can be used to facilitate the implementation of nursing educational programs. PMID:22224113

  16. Status and Prospects for Developing Electromagnetic Methods and Facilities for Engineer Reconnaissance in Russia

    NASA Astrophysics Data System (ADS)

    Potekaev, A. I.; Donchenko, V. A.; Zambalov, S. D.; Parvatov, G. N.; Smirnov, I. M.; Svetlichnyi, V. A.; Yakubov, V. P.; Yakovlev, I. A.

    2018-03-01

    An analysis of the most effective methods, techniques and scientific-research developments of induction mine detectors is performed, their comparative tactical-technical characteristics are reported, and priority avenues for further research are outlined.

  17. Determination of Perfluorinated Compounds in the Upper Mississippi River Basin

    EPA Science Inventory

    Despite ongoing efforts to develop robust analytical methods for the determination of perfluorinated compounds (PFCs) such as perfluorooctanesulfonate (PFOS) and perfluorooctanoic acid (PFOA) in surface water, comparatively little has been published on method performance, and the...

  18. SU-E-I-38: Improved Metal Artifact Correction Using Adaptive Dual Energy Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, X; Elder, E; Roper, J

    2015-06-15

    Purpose: The empirical dual energy calibration (EDEC) method corrects for beam-hardening artifacts, but shows limited performance on metal artifact correction. In this work, we propose an adaptive dual energy calibration (ADEC) method to correct for metal artifacts. Results: Highly attenuating copper rods cause severe streaking artifacts on standard CT images. EDEC improves the image quality, but cannot eliminate the streaking artifacts. Compared to EDEC, the proposed ADEC method further reduces the streaking resulting from metallic inserts and beam-hardening effects and obtains material decomposition images with significantly improved accuracy. Conclusion: We propose an adaptive dual energy calibration method to correct for metal artifacts. ADEC is evaluated with the Shepp-Logan phantom, and shows superior metal artifact correction performance. In the future, we will further evaluate the performance of the proposed method with phantom and patient data.

  19. Using deep learning for detecting gender in adult chest radiographs

    NASA Astrophysics Data System (ADS)

    Xue, Zhiyun; Antani, Sameer; Long, L. Rodney; Thoma, George R.

    2018-03-01

    In this paper, we present a method for automatically identifying the gender of an imaged person from their frontal chest x-ray images. Our work is motivated by the need to determine missing gender information in some datasets. The proposed method employs convolutional neural network (CNN) based deep learning and transfer learning to overcome the challenge of developing handcrafted features from limited data. Specifically, the method consists of four main steps: pre-processing, a CNN feature extractor, feature selection, and a classifier. The method is tested on a combined dataset obtained from several sources with varying acquisition quality, resulting in different pre-processing steps for each. For feature extraction, we tested and compared four CNN architectures, viz., AlexNet, VggNet, GoogLeNet, and ResNet. We applied a feature selection technique, since the feature length is larger than the number of images. Two popular classifiers, SVM and Random Forest, are used and compared. We evaluated the classification performance by cross-validation and used seven performance measures. The best performer is the VggNet-16 feature extractor with the SVM classifier, with an accuracy of 86.6% and an ROC area of 0.932 under 5-fold cross-validation. We also discuss several misclassified cases and describe future work for performance improvement.
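    The winning combination (VGG-16 features, feature selection, SVM) follows a common transfer-learning pattern, sketched below with Keras and scikit-learn; the feature dimensionality, k value, kernel, and placeholder data are assumptions, not the paper's settings.

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Pre-trained VGG-16 as a fixed feature extractor (512-d pooled features).
extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

def features(batch_rgb):
    """batch_rgb: array of (224, 224, 3) images, e.g., replicated-channel x-rays."""
    return extractor.predict(preprocess_input(batch_rgb.astype("float32")))

# Placeholder features/labels stand in for the extracted dataset.
X = np.random.rand(100, 512).astype("float32")
y = np.random.randint(0, 2, 100)
clf = make_pipeline(SelectKBest(f_classif, k=64), SVC(kernel="linear"))
clf.fit(X, y)
```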

  20. Comparative study of signalling methods for high-speed backplane transceiver

    NASA Astrophysics Data System (ADS)

    Wu, Kejun

    2017-11-01

    A combined analysis of transient simulation and statistical methods is proposed for a comparative study of signalling methods applied to high-speed backplane transceivers. This method enables fast and accurate signal-to-noise ratio and symbol error rate estimation of a serial link based on a four-dimensional design space, including channel characteristics, noise scenarios, equalisation schemes, and signalling methods. The proposed combined analysis method chooses an efficient sampling size for performance evaluation. A comparative study of non-return-to-zero (NRZ), PAM-4, and four-phase shifted sinusoid symbol (PSS-4) signalling using parameterised behaviour-level simulation shows that PAM-4 and PSS-4 have substantial advantages over conventional NRZ in most cases. A comparison between PAM-4 and PSS-4 shows that PAM-4 suffers significant bit error rate degradation when the noise level is increased.

  1. Effect of Curriculum Change on Exam Performance in a 4-Week Psychiatry Clerkship

    ERIC Educational Resources Information Center

    Niedermier, Julie; Way, David; Kasick, David; Kuperschmidt, Rada

    2010-01-01

    Objective: The authors investigated whether curriculum change could produce improved performance, despite a reduction in clerkship length from 8 to 4 weeks. Methods: The exam performance of medical students completing a 4-week clerkship in psychiatry was compared to national data from the National Board of Medical Examiners' Psychiatry Subject…

  2. Strengthening the revenue cycle: a 4-step method for optimizing payment.

    PubMed

    Clark, Jonathan J

    2008-10-01

    Four steps for enhancing the revenue cycle to ensure optimal payment are: (1) establish key performance indicator dashboards in each department that compare current with targeted performance; (2) create proper organizational structures for each department; (3) ensure that high-performing leaders are hired in all management and supervisory positions; and (4) implement efficient processes in underperforming operations.

  3. Investigating Secondary School Leaders' Perceptions of Performance Management

    ERIC Educational Resources Information Center

    Moreland, Jan

    2009-01-01

    Much of the research into teacher appraisal and performance management has focused on the experience of the classroom teacher. In this article, I will: (1) concentrate on the perspectives of the senior managers in secondary schools; (2) consider their views of the purpose of performance management; (3) compare their methods of implementation of…

  4. Measuring Safety Performance: A Comparison of Whole, Partial, and Momentary Time-Sampling Recording Methods

    ERIC Educational Resources Information Center

    Alvero, Alicia M.; Struss, Kristen; Rappaport, Eva

    2008-01-01

    Partial-interval (PIR), whole-interval (WIR), and momentary time sampling (MTS) estimates were compared against continuous measures of safety performance for three postural behaviors: feet, back, and shoulder position. Twenty-five samples of safety performance across five undergraduate students were scored using a second-by-second continuous…
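    All three estimates can be computed from the same second-by-second record, which makes the comparison easy to reproduce; a minimal sketch follows, assuming a binary posture record and 10-second intervals sampled at their final moment for MTS.

```python
import numpy as np

def interval_estimates(record, interval=10):
    chunks = record.reshape(-1, interval)        # non-overlapping intervals
    pir = chunks.any(axis=1).mean()              # PIR: scored if any occurrence
    wir = chunks.all(axis=1).mean()              # WIR: scored only if whole interval
    mts = chunks[:, -1].mean()                   # MTS: scored at the sampled moment
    return pir, wir, mts

rng = np.random.default_rng(0)
record = (rng.random(600) < 0.7).astype(int)     # 10 min, ~70% true duration
print(interval_estimates(record), record.mean()) # PIR inflates, WIR deflates duration
```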

  5. Performance evaluation of the new hematology analyzer Sysmex XN-series.

    PubMed

    Seo, J Y; Lee, S-T; Kim, S-H

    2015-04-01

    The Sysmex XN-series is a new automated hematology analyzer designed to improve the accuracy of cell counts and the specificity of flagging events. The basic characteristics and the performance of the new measurement channels of the XN were evaluated and compared with the Sysmex XE-2100 and the manual method. The fluorescent platelet count (PLT-F) was compared with the flow cytometric method. The low WBC mode and body fluid mode were also evaluated. For workflow analysis, 1005 samples were analyzed on both the XN and the XE-2100, and manual review rates were compared. All parameters measured by the XN correlated well with the XE-2100. PLT-F showed better correlation with the flow cytometric method (r² = 0.80) compared with the optical platelet count (r² = 0.73) for platelet counts <70 × 10⁹/L. The low WBC mode reported accurate leukocyte differentials for samples with a WBC count <0.5 × 10⁹/L. Relatively good correlation was found for WBC counts between the manual method and the body fluid mode (r = 0.88). The XN raised fewer flags than the XE-2100, while the sensitivities of both instruments were comparable. The XN provided reliable results on low cell counts, as well as reduced manual blood film reviews, while maintaining a proper level of diagnostic sensitivity. © 2014 John Wiley & Sons Ltd.

  6. Performance Evaluation of Public Non-Profit Hospitals Using a BP Artificial Neural Network: The Case of Hubei Province in China

    PubMed Central

    Li, Chunhui; Yu, Chuanhua

    2013-01-01

    To provide a reference for evaluating public non-profit hospitals in the new environment of medical reform, we established a performance evaluation system for public non-profit hospitals. The new “input-output” performance model for public non-profit hospitals is based on four primary indexes (input, process, output and effect) that include 11 sub-indexes and 41 items. The indicator weights were determined using the analytic hierarchy process (AHP) and the entropy weight method. The BP neural network was applied to evaluate the performance of 14 level-3 public non-profit hospitals located in Hubei Province. The most stable BP neural network was produced by comparing different numbers of neurons in the hidden layer and using the “Leave-one-out” cross-validation method. The performance evaluation system we established for public non-profit hospitals could reflect the basic goal of the new medical health system reform in China. Compared with PLSR, the results indicated that the BP neural network could be used effectively for evaluating the performance of public non-profit hospitals. PMID:23955238
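    Of the two weighting steps, the entropy-weight method is fully determined by the indicator matrix and is easy to sketch; the AHP half and the BP network itself are omitted here, and the data are placeholders, not the Hubei hospital indicators.

```python
import numpy as np

def entropy_weights(X):
    """X: hospitals x indicators, larger-is-better values, strictly positive."""
    p = X / X.sum(axis=0)                                  # column proportions
    k = 1.0 / np.log(X.shape[0])
    entropy = -k * (p * np.log(p + 1e-12)).sum(axis=0)     # per-indicator entropy
    d = 1.0 - entropy                                      # degree of divergence
    return d / d.sum()                                     # normalized weights

X = np.random.default_rng(0).uniform(1, 10, (14, 5))       # 14 hospitals, 5 indicators
print(entropy_weights(X))                                  # sums to 1 across indicators
```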

  7. Estimation and optimization of thermal performance of evacuated tube solar collector system

    NASA Astrophysics Data System (ADS)

    Dikmen, Erkan; Ayaz, Mahir; Ezen, H. Hüseyin; Küçüksille, Ecir U.; Şahin, Arzu Şencan

    2014-05-01

    In this study, artificial neural networks (ANNs) and an adaptive neuro-fuzzy inference system (ANFIS) were used to predict the thermal performance of an evacuated tube solar collector system. Experimental data were used for training and testing the networks. The results of the ANN are compared with those of the ANFIS on the same data sets. The R²-value for the thermal performance values of the collector is 0.811914, which can be considered satisfactory. The results obtained when unknown data were presented to the networks are satisfactory and indicate that the proposed method can successfully be used for the prediction of the thermal performance of evacuated tube solar collectors. In addition, new formulations obtained from the ANN are presented for the calculation of the thermal performance. The advantages of these approaches compared to conventional methods are speed, simplicity, and the capacity of the network to learn from examples. Finally, a genetic algorithm (GA) was used to maximize the thermal performance of the system, and the optimum working conditions of the system were determined by the GA.
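    Once a surrogate of collector performance has been trained, the GA step reduces to maximizing that surrogate over the operating variables. A self-contained toy GA is sketched below (not the authors' implementation); the quadratic stand-in marks where the trained ANN prediction would go.

```python
import numpy as np

def surrogate(x):
    # Placeholder for the trained ANN: peak performance near x = 0.6 in each variable.
    return -np.sum((x - 0.6) ** 2, axis=1)

rng = np.random.default_rng(0)
pop = rng.random((40, 3))                              # 40 candidates, 3 variables in [0, 1]
for _ in range(100):
    fit = surrogate(pop)
    parents = pop[np.argsort(fit)[-20:]]               # truncation selection
    a = parents[rng.integers(0, 20, 40)]
    b = parents[rng.integers(0, 20, 40)]
    pop = np.where(rng.random((40, 3)) < 0.5, a, b)    # uniform crossover
    pop += rng.normal(0, 0.05, pop.shape)              # Gaussian mutation
    pop = np.clip(pop, 0.0, 1.0)
best = pop[np.argmax(surrogate(pop))]                  # near-optimal operating point
```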

  8. A Novel Hybrid Classification Model of Genetic Algorithms, Modified k-Nearest Neighbor and Developed Backpropagation Neural Network

    PubMed Central

    Salari, Nader; Shohaimi, Shamarina; Najafi, Farid; Nallappan, Meenakshii; Karishnarajah, Isthrinayagy

    2014-01-01

    Among numerous artificial intelligence approaches, k-Nearest Neighbor algorithms, genetic algorithms, and artificial neural networks are considered the most common and effective methods for classification problems in numerous studies. In the present study, the results of the implementation of a novel hybrid feature selection-classification model using the above-mentioned methods are presented. The purpose is to benefit from the synergies obtained from combining these technologies for the development of classification models. Such a combination creates an opportunity to invest in the strength of each algorithm, and is an approach to make up for their deficiencies. To develop the proposed model, with the aim of obtaining the best array of features, first, feature ranking techniques such as the Fisher's discriminant ratio and class separability criteria were used to prioritize features. Second, the obtained results, which included arrays of the top-ranked features, were used as the initial population of a genetic algorithm to produce optimum arrays of features. Third, using a modified k-Nearest Neighbor method as well as an improved method of backpropagation neural networks, the classification process was advanced based on optimum arrays of the features selected by genetic algorithms. The performance of the proposed model was compared with thirteen well-known classification models based on seven datasets. Furthermore, the statistical analysis was performed using the Friedman test followed by post-hoc tests. The experimental findings indicated that the novel proposed hybrid model resulted in significantly better classification performance compared with all 13 classification methods. Finally, the performance results of the proposed model were benchmarked against the best ones reported as the state-of-the-art classifiers in terms of classification accuracy for the same data sets. The substantial findings of the comprehensive comparative study revealed that the performance of the proposed model in terms of classification accuracy is desirable, promising, and competitive with the existing state-of-the-art classification models. PMID:25419659

  9. The clustering-based case-based reasoning for imbalanced business failure prediction: a hybrid approach through integrating unsupervised process with supervised process

    NASA Astrophysics Data System (ADS)

    Li, Hui; Yu, Jun-Ling; Yu, Le-An; Sun, Jie

    2014-05-01

    Case-based reasoning (CBR) is one of the main forecasting methods in business forecasting; it predicts well and can provide explanations for its results. In business failure prediction (BFP), the number of failed enterprises is relatively small compared with the number of non-failed ones. However, the loss is huge when an enterprise fails. Therefore, it is necessary to develop methods, trained on imbalanced samples, that forecast well for this small proportion of failed enterprises while maintaining high total accuracy. Commonly used methods built on the assumption of balanced samples do not perform well in predicting minority samples on imbalanced data consisting of the minority/failed enterprises and the majority/non-failed ones. This article develops a new method called clustering-based CBR (CBCBR), which integrates clustering analysis, an unsupervised process, with CBR, a supervised process, to enhance the efficiency of retrieving information from both the minority and the majority in CBR. In CBCBR, case classes are first generated through hierarchical clustering inside the stored experienced cases, and class centres are calculated by integrating case information within each clustered class. When predicting the label of a target case, its nearest clustered case class is first retrieved by ranking the similarities between the target case and each clustered case class centre. Then, the nearest neighbours of the target case in the retrieved clustered case class are found. Finally, the labels of the nearest experienced cases are used in prediction. In an empirical experiment with two imbalanced samples from China, the performance of CBCBR was compared with classical CBR, a support vector machine, a logistic regression and a multivariate discriminant analysis. The results show that, compared with the other four methods, CBCBR performed significantly better in terms of sensitivity for identifying the minority samples while maintaining high total accuracy. The proposed approach makes CBR useful in imbalanced forecasting.
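    The retrieval order described (cluster the case base, pick the nearest class centre, then run k-NN inside that class) can be sketched directly; the two-Gaussian imbalanced sample below is illustrative, not the Chinese data set, and the cluster count and k are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cbcbr_predict(cases, labels, target, n_clusters=5, k=3):
    assign = fcluster(linkage(cases, method="ward"), n_clusters, criterion="maxclust")
    centres = np.array([cases[assign == c].mean(axis=0)
                        for c in range(1, n_clusters + 1)])
    nearest = 1 + np.argmin(np.linalg.norm(centres - target, axis=1))
    pool = cases[assign == nearest]                   # retrieved case class
    pool_labels = labels[assign == nearest]
    nn = np.argsort(np.linalg.norm(pool - target, axis=1))[:min(k, len(pool))]
    return int(round(pool_labels[nn].mean()))         # majority label (0/1)

rng = np.random.default_rng(0)
cases = np.vstack([rng.normal(0, 1, (90, 4)), rng.normal(3, 1, (10, 4))])
labels = np.array([0] * 90 + [1] * 10)                # imbalanced "failed" minority
print(cbcbr_predict(cases, labels, rng.normal(3, 1, 4)))
```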

  10. SU-D-206-01: Employing a Novel Consensus Optimization Strategy to Achieve Iterative Cone Beam CT Reconstruction On a Multi-GPU Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, B; Southern Medical University, Guangzhou, Guangdong; Tian, Z

    Purpose: While compressed sensing-based cone-beam CT (CBCT) iterative reconstruction techniques have demonstrated tremendous capability of reconstructing high-quality images from undersampled noisy data, their long computation time still hinders wide application in routine clinical practice. The purpose of this study is to develop a reconstruction framework that employs modern consensus optimization techniques to achieve CBCT reconstruction on a multi-GPU platform for improved computational efficiency. Methods: Total projection data were evenly distributed to multiple GPUs. Each GPU performed reconstruction using its own projection data with a conventional total variation regularization approach to ensure image quality. In addition, the solutions from the GPUs were subject to a consistency constraint that they should be identical. We solved the optimization problem with all the constraints considered rigorously using an alternating direction method of multipliers (ADMM) algorithm. The reconstruction framework was implemented using OpenCL on a platform with two Nvidia GTX590 GPU cards, each with two GPUs. We studied the performance of our method and demonstrated its advantages through a simulation case with a NCAT phantom and an experimental case with a Catphan phantom. Results: Compared with the CBCT images reconstructed using the conventional FDK method with full projection datasets, our proposed method achieved comparable image quality with about one third of the projection numbers. The computation time on the multi-GPU platform was ∼55 s and ∼35 s in the two cases respectively, achieving a speedup factor of ∼3.0 compared with single-GPU reconstruction. Conclusion: We have developed a consensus ADMM-based CBCT reconstruction method which enables reconstruction on a multi-GPU platform. The achieved efficiency makes this method clinically attractive.
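    The consensus structure is the key idea: each GPU minimizes its own data-fidelity term, and ADMM's averaging step forces the local solutions to agree. A toy CPU sketch follows, with tiny least-squares blocks standing in for the per-GPU TV-regularized subproblems; it illustrates the update order, not the abstract's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
x_true = rng.random(8)
As = [rng.normal(size=(20, 8)) for _ in range(4)]          # per-"GPU" projection blocks
bs = [A @ x_true + rng.normal(0, 0.01, 20) for A in As]

rho = 1.0
z = np.zeros(8)                                            # consensus variable
u = [np.zeros(8) for _ in As]                              # scaled dual variables
for _ in range(50):
    xs = [np.linalg.solve(A.T @ A + rho * np.eye(8), A.T @ b + rho * (z - ui))
          for A, b, ui in zip(As, bs, u)]                  # local solves (one per GPU)
    z = np.mean([x + ui for x, ui in zip(xs, u)], axis=0)  # consensus averaging
    u = [ui + x - z for x, ui in zip(xs, u)]               # dual updates
print(np.linalg.norm(z - x_true))                          # converges to a small residual
```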

  11. Automated sub-cortical brain structure segmentation combining spatial and deep convolutional features.

    PubMed

    Kushibar, Kaisar; Valverde, Sergi; González-Villà, Sandra; Bernal, Jose; Cabezas, Mariano; Oliver, Arnau; Lladó, Xavier

    2018-06-15

    Sub-cortical brain structure segmentation in Magnetic Resonance Images (MRI) has long attracted the interest of the research community, as morphological changes in these structures are related to different neurodegenerative disorders. However, manual segmentation of these structures can be tedious and prone to variability, highlighting the need for robust automated segmentation methods. In this paper, we present a novel convolutional neural network based approach for accurate segmentation of the sub-cortical brain structures that combines convolutional features with prior spatial features to improve segmentation accuracy. To further increase accuracy, we propose to train the network using a restricted sample selection that forces the network to learn the most difficult parts of the structures. We evaluate the accuracy of the proposed method on the public MICCAI 2012 challenge and IBSR 18 datasets, comparing it with different traditional and deep learning state-of-the-art methods. On the MICCAI 2012 dataset, our method shows excellent performance, comparable to the best participant strategy in the challenge, while performing significantly better than state-of-the-art techniques such as FreeSurfer and FIRST. On the IBSR 18 dataset, our method not only significantly outperforms FreeSurfer and FIRST but also achieves comparable or better results than other recent deep learning approaches. Moreover, our experiments show that both the addition of the spatial priors and the restricted sampling strategy have a significant effect on the accuracy of the proposed method. In order to encourage reproducibility and the use of the proposed method, a public version of our approach is available to download for the neuroimaging community. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  12. A Spiking Neural Network Methodology and System for Learning and Comparative Analysis of EEG Data From Healthy Versus Addiction Treated Versus Addiction Not Treated Subjects.

    PubMed

    Doborjeh, Maryam Gholami; Wang, Grace Y; Kasabov, Nikola K; Kydd, Robert; Russell, Bruce

    2016-09-01

    This paper introduces a method utilizing spiking neural networks (SNN) for learning, classification, and comparative analysis of brain data. As a case study, the method was applied to electroencephalography (EEG) data collected during a GO/NOGO cognitive task performed by untreated opiate addicts, those undergoing methadone maintenance treatment (MMT) for opiate dependence, and a healthy control group. The method is based on an SNN architecture called NeuCube, trained on spatiotemporal EEG data. NeuCube was used to classify EEG data across subject groups and across GO versus NOGO trials, and also facilitated a deeper comparative analysis of the dynamic brain processes. This analysis results in a better understanding of human brain functioning across subject groups when performing a cognitive task. In terms of EEG data classification, a NeuCube model obtained better results (maximum accuracy: 90.91%) than traditional statistical and artificial intelligence methods (maximum accuracy: 50.55%). More importantly, new information about the effects of MMT on cognitive brain functions is revealed through analysis of the SNN model connectivity and its dynamics. This paper presented a new method for EEG data modeling and revealed new knowledge on brain functions associated with mental activity, which differs from the brain activity observed in a resting state of the same subjects.

  13. Quality control for quantitative PCR based on amplification compatibility test.

    PubMed

    Tichopad, Ales; Bar, Tzachi; Pecen, Ladislav; Kitchen, Robert R; Kubista, Mikael; Pfaffl, Michael W

    2010-04-01

    Quantitative PCR (qPCR) is a routinely used method for the accurate quantification of nucleic acids. Yet it may generate erroneous results if the amplification process is obscured by inhibition or by generation of aberrant side-products such as primer dimers. Several methods have been established to control for pre-processing performance that rely on the introduction of a co-amplified reference sequence; however, there is currently no method that allows reliable control of the amplification process itself without directly modifying the sample mix. Herein we present a statistical approach based on multivariate analysis of the amplification response data generated in real time. The amplification trajectory in its most resolved and dynamic phase is fitted with a suitable model. Two parameters of this model, related to amplification efficiency, are then used to calculate Z-score statistics. Each studied sample is compared to a predefined reference set of reactions, typically calibration reactions. A probabilistic decision for each individual Z-score is then used to identify the majority of inhibited reactions in our experiments. We compare this approach to univariate methods that use only the sample-specific amplification efficiency as a reporter of compatibility, and demonstrate improved identification performance with the multivariate approach. Finally, we stress that the performance of the amplification compatibility test as a quality control procedure depends on the quality of the reference set. Copyright 2010 Elsevier Inc. All rights reserved.
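
    A minimal sketch of the Z-score test described above, assuming each reaction has already been reduced to two efficiency-related fit parameters and that a 3-sigma decision threshold is used (both assumptions are ours, not the paper's):

      import numpy as np

      def zscore_flags(test_params, ref_params, threshold=3.0):
          # Rows = reactions; columns = the two efficiency-related model
          # parameters. The reference set is typically the calibration runs.
          mu = ref_params.mean(axis=0)
          sd = ref_params.std(axis=0, ddof=1)
          z = (test_params - mu) / sd                   # per-parameter Z-scores
          return np.any(np.abs(z) > threshold, axis=1)  # True = likely inhibited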

  14. A nudging-based data assimilation method: the Back and Forth Nudging (BFN) algorithm

    NASA Astrophysics Data System (ADS)

    Auroux, D.; Blum, J.

    2008-03-01

    This paper deals with a new data assimilation algorithm called Back and Forth Nudging (BFN). The standard nudging technique consists in adding to the equations of the model a relaxation term that forces the model towards the observations. The BFN algorithm consists in repeatedly performing forward and backward integrations of the model with relaxation (or nudging) terms, using opposite signs in the direct and inverse integrations, so as to make the backward evolution numerically stable. The algorithm was first tested on the standard Lorenz model with discrete observations (perfect or noisy) and compared with the variational assimilation method. The same type of study was then performed on the viscous Burgers equation, again comparing with the variational method and focusing on the time evolution of the reconstruction error, i.e. the difference between the reference trajectory and the identified one over a time period composed of an assimilation period followed by a prediction period. The possible use of the BFN algorithm as an initialization for the variational method was also investigated. Finally, the algorithm was tested on a layered quasi-geostrophic model with sea-surface height observations. The behaviours of the two algorithms were compared in the presence of perfect or noisy observations, and also for imperfect models. This allowed us to reach a conclusion concerning the relative performances of the two algorithms.
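
    A toy sketch of the BFN loop on a generic ODE dy/dt = f(y), with a nudging gain K and an observation available at every step; the gain, step sizes and sign conventions are illustrative only (the paper works with the Lorenz, Burgers and quasi-geostrophic models):

      import numpy as np

      def bfn(f, y0, obs, K=5.0, dt=0.01, n_steps=1000, sweeps=20):
          # Toy Back and Forth Nudging: obs[k] is the observation at step k.
          y_init = np.asarray(y0, dtype=float)
          for _ in range(sweeps):
              # Forward pass: relax the model towards the observations.
              y = y_init.copy()
              for k in range(n_steps):
                  y = y + dt * (f(y) + K * (obs[k] - y))
              # Backward pass: integrate in reverse time; the opposite-signed
              # nudging term keeps the backward evolution numerically stable.
              for k in range(n_steps, 0, -1):
                  y = y - dt * (f(y) - K * (obs[k - 1] - y))
              y_init = y  # refined estimate of the initial condition
          return y_init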

  15. Application of 3D Zernike descriptors to shape-based ligand similarity searching.

    PubMed

    Venkatraman, Vishwesh; Chakravarthy, Padmasini Ramji; Kihara, Daisuke

    2009-12-17

    The identification of promising drug leads from a large database of compounds is an important step in the preliminary stages of drug design. Although shape is known to play a key role in the molecular recognition process, its application to virtual screening poses significant hurdles both in terms of the encoding scheme and speed. In this study, we have examined the efficacy of the alignment-independent three-dimensional Zernike descriptor (3DZD) for fast shape-based similarity searching. The performance of this approach was compared with several other methods, including the statistical-moments-based ultrafast shape recognition scheme (USR) and SIMCOMP, a graph matching algorithm that compares atom environments. Three benchmark datasets were used to thoroughly test the methods in terms of their ability for molecular classification, retrieval rate, and performance under conditions that simulate actual virtual screening tasks over a large pharmaceutical database. The 3DZD performed better than or comparably to the other methods examined, depending on the datasets and evaluation metrics used. Reasons for the success and failure of the shape-based methods in specific cases are investigated. Based on the results for the three datasets, general conclusions are drawn with regard to their efficiency and applicability. The 3DZD has a unique ability for fast comparison of the three-dimensional shapes of compounds. The examples analyzed illustrate both the advantages of the 3DZD and the room for improvement.
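
    For context, the USR baseline mentioned above is simple enough to sketch: a conformer is encoded by the mean, standard deviation and skewness of atomic distances to four reference points, giving a 12-dimensional shape signature (a minimal version, assuming a NumPy array of atomic coordinates):

      import numpy as np
      from scipy.stats import skew

      def usr_descriptor(coords):
          # Four USR reference points derived from the atom cloud.
          ctd = coords.mean(axis=0)                                      # centroid
          ctd_d = np.linalg.norm(coords - ctd, axis=1)
          cst = coords[np.argmin(ctd_d)]                                 # closest to centroid
          fct = coords[np.argmax(ctd_d)]                                 # farthest from centroid
          ftf = coords[np.argmax(np.linalg.norm(coords - fct, axis=1))]  # farthest from fct
          desc = []
          for ref in (ctd, cst, fct, ftf):
              d = np.linalg.norm(coords - ref, axis=1)
              desc += [d.mean(), d.std(), skew(d)]   # three moments per point
          return np.array(desc)

      def usr_similarity(a, b):
          # Similarity in (0, 1]; 1 means identical signatures.
          return 1.0 / (1.0 + np.abs(a - b).mean())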

  16. Application of 3D Zernike descriptors to shape-based ligand similarity searching

    PubMed Central

    2009-01-01

    Background The identification of promising drug leads from a large database of compounds is an important step in the preliminary stages of drug design. Although shape is known to play a key role in the molecular recognition process, its application to virtual screening poses significant hurdles both in terms of the encoding scheme and speed. Results In this study, we have examined the efficacy of the alignment-independent three-dimensional Zernike descriptor (3DZD) for fast shape-based similarity searching. The performance of this approach was compared with several other methods, including the statistical-moments-based ultrafast shape recognition scheme (USR) and SIMCOMP, a graph matching algorithm that compares atom environments. Three benchmark datasets were used to thoroughly test the methods in terms of their ability for molecular classification, retrieval rate, and performance under conditions that simulate actual virtual screening tasks over a large pharmaceutical database. The 3DZD performed better than or comparably to the other methods examined, depending on the datasets and evaluation metrics used. Reasons for the success and failure of the shape-based methods in specific cases are investigated. Based on the results for the three datasets, general conclusions are drawn with regard to their efficiency and applicability. Conclusion The 3DZD has a unique ability for fast comparison of the three-dimensional shapes of compounds. The examples analyzed illustrate both the advantages of the 3DZD and the room for improvement. PMID:20150998

  17. Reliability of functional and predictive methods to estimate the hip joint centre in human motion analysis in healthy adults.

    PubMed

    Kainz, Hans; Hajek, Martin; Modenese, Luca; Saxby, David J; Lloyd, David G; Carty, Christopher P

    2017-03-01

    In human motion analysis, predictive or functional methods are used to estimate the location of the hip joint centre (HJC). It has been shown that the Harrington regression equations (HRE) and the geometric sphere fit (GSF) method are the most accurate predictive and functional methods, respectively. To date, the comparative reliability of the two approaches has not been assessed. The aims of this study were to (1) compare the reliability of the HRE and GSF methods, (2) analyse the impact of the number of thigh markers used in the GSF method on reliability, (3) evaluate how alterations to the movements that comprise the functional trials impact HJC estimation using the GSF method, and (4) assess the influence of the initial guess in the GSF method on the HJC estimate. Fourteen healthy adults were tested on two occasions using a three-dimensional motion capture system. Skin surface marker positions were acquired while participants performed quiet stance, perturbed and non-perturbed functional trials, and walking trials. Results showed that the HRE were more reliable in locating the HJC than the GSF method. However, comparison of inter-session hip kinematics during gait did not show any significant difference between the approaches. Different initial guesses in the GSF method did not result in significant differences in the final HJC location. The GSF method was sensitive to the functional trial performance, and it is therefore important to standardize the functional trial to ensure a repeatable estimate of the HJC when using this method. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Novel edge treatment method for improving the transmission reconstruction quality in Tomographic Gamma Scanning.

    PubMed

    Han, Miaomiao; Guo, Zhirong; Liu, Haifeng; Li, Qinghua

    2018-05-01

    Tomographic Gamma Scanning (TGS) is a method used for the nondestructive assay of radioactive wastes. In TGS, the actual irregular edge voxels are treated as regular cubic voxels in the traditional treatment method. In this study, in order to improve the performance of TGS, a novel edge treatment method is proposed that considers the actual shapes of these voxels. The two edge voxel treatment methods were compared by computing the pixel-level relative errors and normalized mean square errors (NMSEs) between the reconstructed transmission images and the ideal images. Both methods were coupled with two different iterative algorithms: the Algebraic Reconstruction Technique (ART) with a non-negativity constraint, and Maximum Likelihood Expectation Maximization (MLEM). The results demonstrated that the traditional edge voxel treatment can introduce significant error, and that the real irregular edge voxel treatment improves the performance of TGS by producing better transmission reconstruction images. With the real irregular edge voxel treatment, the MLEM and ART algorithms are comparable when assaying homogeneous matrices, but MLEM is superior to ART when assaying heterogeneous matrices. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Comparative evaluation of two quantitative test methods for determining the efficacy of liquid sporicides and sterilants on a hard surface: a precollaborative study.

    PubMed

    Tomasino, Stephen F; Hamilton, Martin A

    2007-01-01

    Two quantitative carrier-based test methods for determining the efficacy of liquid sporicides and sterilants on a hard surface, the Standard Quantitative Carrier Test Method-ASTM E 2111-00 and an adaptation of a quantitative micro-method as reported by Sagripanti and Bonifacino, were compared in this study. The methods were selected based on their desirable characteristics (e.g., well-developed protocol, previous use with spores, fully quantitative, and use of readily available equipment) for testing liquid sporicides and sterilants on a hard surface. In this paper, the Sagripanti-Bonifacino procedure is referred to as the Three Step Method (TSM). AOAC Official Method 966.04 was included in this study as a reference method. Three laboratories participated in the evaluation. Three chemical treatments were tested: (1) 3000 ppm sodium hypochlorite with pH adjusted to 7.0, (2) a hydrogen peroxide/peroxyacetic acid product, and (3) 3000 ppm sodium hypochlorite with pH unadjusted (pH of approximately 10.0). A fourth treatment, 6000 ppm sodium hypochlorite solution with pH adjusted to 7.0, was included only for Method 966.04 as a positive control (high level of efficacy). The contact time was 10 min for all chemical treatments except the 6000 ppm sodium hypochlorite treatment which was tested at 30 min. Each chemical treatment was tested 3 times using each of the methods. Only 2 of the laboratories performed the AOAC method. Method performance was assessed by the within-laboratory variance, between-laboratory variance, and total variance associated with the log reduction (LR) estimates generated by each quantitative method. The quantitative methods performed similarly, and the LR values generated by each method were not statistically different for the 3 treatments evaluated. Based on feedback from the participating laboratories, compared to the TSM, ASTM E 2111-00 was more resource demanding and required more set-up time. The logistical and resource concerns identified for ASTM E 2111-00 were largely associated with the filtration process and counting bacterial colonies on filters. Thus, the TSM was determined to be the most suitable method.

  20. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    NASA Astrophysics Data System (ADS)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic methods for segmenting the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data, for which no ground truth exists, it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed at assessing the performance of an automatic CP segmentation algorithm are presented. The first is a univariate non-parametric method using a box-whisker plot. We first categorize the automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either successes or failures. We then design three groups of features from the image data of the nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method, supervised classification, was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers, linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC), were trained using the designed features and evaluated using leave-one-out cross-validation. Results show that LR performs worst among the four classifiers while the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
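
    The univariate box-whisker rule used in the first method can be sketched in a few lines; the conventional 1.5×IQR fence is assumed here, while the actual features and threshold are specific to the paper:

      import numpy as np

      def boxwhisker_outliers(feature, whisker=1.5):
          # Values beyond the fences (Q1 - w*IQR, Q3 + w*IQR) are flagged.
          q1, q3 = np.percentile(feature, [25, 75])
          iqr = q3 - q1
          lo, hi = q1 - whisker * iqr, q3 + whisker * iqr
          return (feature < lo) | (feature > hi)   # True = candidate failure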

  1. Development of a test protocol for evaluating EVA glove performance

    NASA Technical Reports Server (NTRS)

    Hinman, Elaine M.

    1992-01-01

    Testing gloved hand performance involves work from several disciplines. Evaluations performed in the course of rehabilitating a disabled hand, designing a robotic end effector or master controller, or designing hard suits have all yielded relevant information and, in most cases, produced performance test methods. These test methods, however, have usually been oriented primarily toward their parent discipline. For space operations, a comparative test that quantifies both pressure glove and end effector performance would be useful for dividing tasks between humans and robots. Such a test would have to rely heavily on sensor-based measurement, as opposed to questionnaires, to produce relevant data; at some point, however, human preference would have to be taken into account. This paper presents a methodology for evaluating gloved hand performance that attempts to respond to these issues. Glove testing of a prototype glove design using this method is described.

  2. Prioritization of candidate disease genes by topological similarity between disease and protein diffusion profiles.

    PubMed

    Zhu, Jie; Qin, Yufang; Liu, Taigang; Wang, Jun; Zheng, Xiaoqi

    2013-01-01

    Identification of gene-phenotype relationships is a fundamental challenge in human health and clinical research. Based on the observation that genes causing the same or similar phenotypes tend to correlate with each other in the protein-protein interaction network, many network-based approaches have been proposed, built on different underlying models. A recent comparative study showed that diffusion-based methods achieve state-of-the-art predictive performance. In this paper, a new diffusion-based method is proposed to prioritize candidate disease genes. The diffusion profile of a disease is defined as the stationary distribution of candidate genes under a random walk with restart, in which similarities between phenotypes are incorporated. Candidate disease genes are then prioritized by comparing their diffusion profiles with that of the disease. Finally, the effectiveness of our method is demonstrated through leave-one-out cross-validation against control genes from artificial linkage intervals and randomly chosen genes. A comparative study showed that our method achieves improved performance compared to some classical diffusion-based methods. To further illustrate our method, we used our algorithm to predict new causative genes for 16 multifactorial diseases, including prostate cancer and Alzheimer's disease, and the top predictions were in good agreement with literature reports. Our study indicates that the integration of multiple information sources, especially phenotype similarity profiles, and the introduction of a global similarity measure between disease and gene diffusion profiles are helpful for prioritizing candidate disease genes. Programs and data are available upon request.
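
    A minimal sketch of the diffusion profile computation, i.e. the stationary distribution of a random walk with restart on the interaction network; the restart probability and the phenotype-similarity seeding are assumptions for illustration:

      import numpy as np

      def rwr_profile(W, seeds, restart=0.7, tol=1e-8, max_iter=10000):
          # W: adjacency matrix of the PPI network (no isolated nodes assumed);
          # seeds: restart weights, e.g. known disease genes weighted by
          # phenotype similarity. Returns the diffusion profile.
          P = W / W.sum(axis=0, keepdims=True)   # column-normalized transitions
          p0 = seeds / seeds.sum()
          p = p0.copy()
          for _ in range(max_iter):
              p_next = (1 - restart) * (P @ p) + restart * p0
              if np.abs(p_next - p).sum() < tol:
                  break
              p = p_next
          return p_next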

  3. Comparison of the Cellient™ automated cell block system and agar cell block method.

    PubMed

    Kruger, A M; Stevens, M W; Kerley, K J; Carter, C D

    2014-12-01

    To compare the Cellient™ automated cell block system with the agar cell block method in terms of quantity and quality of diagnostic material and morphological, histochemical and immunocytochemical features. Cell blocks were prepared from 100 effusion samples using the agar method and the Cellient system, and routinely sectioned and stained for haematoxylin and eosin and periodic acid-Schiff with diastase (PASD). A preliminary immunocytochemical study was performed on selected cases (27/100 cases). Sections were evaluated using a three-point grading system to compare a set of morphological parameters. Statistical analysis was performed using Fisher's exact test. Parameters assessing cellularity, presence of single cells and definition of nuclear membrane, nucleoli, chromatin and cytoplasm showed a statistically significant improvement on Cellient cell blocks compared with agar cell blocks (P < 0.05). No significant difference was seen for definition of cell groups, PASD staining or the intensity or clarity of immunocytochemical staining. A discrepant immunocytochemistry (ICC) result was seen in 21% (13/63) of immunostains. The Cellient technique is comparable with the agar method, with statistically significant results achieved for important morphological features. It demonstrates potential as an alternative cell block preparation method, which is relevant for the rapid processing of fine needle aspiration samples, malignant effusions and low-cellularity specimens, where optimal cell morphology and architecture are essential. Further investigation is required to optimize immunocytochemical staining using the Cellient method. © 2014 John Wiley & Sons Ltd.

  4. A systematic evaluation of normalization methods in quantitative label-free proteomics.

    PubMed

    Välikangas, Tommi; Suomi, Tomi; Elo, Laura L

    2018-01-01

    To date, mass spectrometry (MS) data remain inherently biased as a result of reasons ranging from sample handling to differences caused by the instrumentation. Normalization is the process that aims to account for the bias and make samples more comparable. The selection of a proper normalization method is a pivotal task for the reliability of the downstream analysis and results. Many normalization methods commonly used in proteomics have been adapted from the DNA microarray techniques. Previous studies comparing normalization methods in proteomics have focused mainly on intragroup variation. In this study, several popular and widely used normalization methods representing different strategies in normalization are evaluated using three spike-in and one experimental mouse label-free proteomic data sets. The normalization methods are evaluated in terms of their ability to reduce variation between technical replicates, their effect on differential expression analysis and their effect on the estimation of logarithmic fold changes. Additionally, we examined whether normalizing the whole data globally or in segments for the differential expression analysis has an effect on the performance of the normalization methods. We found that variance stabilization normalization (Vsn) reduced variation the most between technical replicates in all examined data sets. Vsn also performed consistently well in the differential expression analysis. Linear regression normalization and local regression normalization performed also systematically well. Finally, we discuss the choice of a normalization method and some qualities of a suitable normalization method in the light of the results of our evaluation. © The Author 2016. Published by Oxford University Press.
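
    As a concrete example of one simple strategy in this family, global median normalization aligns all samples on a common median of their log-intensities (a minimal sketch; vsn and the regression-based methods evaluated in the study are more involved):

      import numpy as np

      def median_normalize(X):
          # X: rows = features, columns = samples (log-intensities).
          # Shift each sample so that all samples share the overall median.
          med = np.nanmedian(X, axis=0)
          return X - med + med.mean()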

  5. Biometric recognition via texture features of eye movement trajectories in a visual searching task.

    PubMed

    Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei; Zhang, Chenggang

    2018-01-01

    Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks, feature extraction and feature recognition methods have been proposed to improve the performance of eye movement biometric systems. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and eye trackers' temporal and spatial resolution, are still the foremost considerations in eye movement biometrics. With a focus on these issues, we proposed a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. To demonstrate the improvement achieved by using this visual searching task in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results, as expected. In addition, the biometric performance of these four feature extraction methods was compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were ultimately shown to offer advantages with regard to long-term stability and robustness over time and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases.
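
    The EER reported above can be computed from genuine and impostor score distributions roughly as follows (a minimal sketch; production implementations usually interpolate the ROC curve):

      import numpy as np

      def equal_error_rate(genuine, impostor):
          # genuine/impostor: similarity scores for true and false matches.
          thresholds = np.sort(np.concatenate([genuine, impostor]))
          far = np.array([(impostor >= t).mean() for t in thresholds])
          frr = np.array([(genuine < t).mean() for t in thresholds])
          i = np.argmin(np.abs(far - frr))   # threshold where FAR ~= FRR
          return (far[i] + frr[i]) / 2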

  6. Biometric recognition via texture features of eye movement trajectories in a visual searching task

    PubMed Central

    Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei

    2018-01-01

    Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks, feature extraction and feature recognition methods have been proposed to improve the performance of eye movement biometric systems. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and eye trackers’ temporal and spatial resolution, are still the foremost considerations in eye movement biometrics. With a focus on these issues, we proposed a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. To demonstrate the improvement achieved by using this visual searching task in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results, as expected. In addition, the biometric performance of these four feature extraction methods was compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were ultimately shown to offer advantages with regard to long-term stability and robustness over time and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases. PMID:29617383

  7. Hole filling with oriented sticks in ultrasound volume reconstruction

    PubMed Central

    Vaughan, Thomas; Lasso, Andras; Ungi, Tamas; Fichtinger, Gabor

    2015-01-01

    Volumes reconstructed from tracked planar ultrasound images often contain regions where no information was recorded. Existing interpolation methods introduce image artifacts and tend to be slow in filling large missing regions. Our goal was to develop a computationally efficient method that fills missing regions while adequately preserving image features. We use directional sticks to interpolate between pairs of known opposing voxels in nearby images. We tested our method on 30 volumetric ultrasound scans acquired from human subjects, and compared its performance to that of other published hole-filling methods. Reconstruction accuracy, fidelity, and time were improved compared with other methods. PMID:26839907

  8. Steel Rack Connections: Identification of Most Influential Factors and a Comparison of Stiffness Design Methods

    PubMed Central

    Shah, S. N. R.; Sulong, N. H. Ramli; Shariati, Mahdi; Jumaat, M. Z.

    2015-01-01

    Steel pallet rack (SPR) beam-to-column connections (BCCs) are largely responsible for preventing sway failure of frames in the down-aisle direction. The overall geometry of the beam end connectors commercially used in SPR BCCs varies and does not allow a generalized analytic approach for all types of beam end connectors; however, identifying the effects of the configuration, profile and sizes of the connection components could be a suitable approach for practical design engineers to predict the generalized behavior of any SPR BCC. This paper describes the experimental behavior of SPR BCCs tested using a double cantilever test set-up. Eight sets of specimens were identified based on variation in column thickness, beam depth and the number of tabs in the beam end connector, in order to investigate the most influential factors affecting connection performance. Four tests were performed for each set to bring uniformity to the results, taking the total number of tests to thirty-two. The moment-rotation (M-θ) behavior, load-strain relationship, major failure modes and the influence of the selected parameters on connection performance were investigated. A comparative study to calculate the connection stiffness was carried out using the initial stiffness method, the slope to half-ultimate moment method and the equal area method. To identify the most appropriate method, the mean stiffness of all the tested connections and the variance in the mean stiffness values according to all three methods were calculated. Calculating connection stiffness by the initial stiffness method is considered to overestimate the values when compared to the other two methods. The equal area method provided more consistent stiffness values and the lowest variance in the data set compared to the other two methods. PMID:26452047
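
    The three stiffness measures compared above can be sketched directly from a sampled moment-rotation curve. In the equal-area variant below, the secant slope is chosen so that the area under the line balances the area under the curve up to the ultimate moment, which is one common reading of that method; the function names and the number of points used for the initial fit are our choices:

      import numpy as np

      def stiffness_initial(theta, M, n_init=5):
          # Initial stiffness: slope of a line fitted through the first points.
          return np.polyfit(theta[:n_init], M[:n_init], 1)[0]

      def stiffness_half_ultimate(theta, M):
          # Secant slope from the origin to the point where M = 0.5 * M_max.
          i = np.argmax(M >= 0.5 * M.max())
          return M[i] / theta[i]

      def stiffness_equal_area(theta, M):
          # Slope k such that the line M = k*theta encloses the same area as
          # the curve over [0, theta_u]: k * theta_u^2 / 2 = area under curve.
          i_u = np.argmax(M)
          area = np.trapz(M[:i_u + 1], theta[:i_u + 1])
          return 2 * area / theta[i_u] ** 2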

  9. [Household food security: comparing an alternative method to a classical one].

    PubMed

    Herrán, Oscar F; Quintero, Doris C; Prada, Gloria E

    2010-08-01

    Establishing the performance of the US Environmental Protection Agency (EPA) household food security scale (EPSA), which is being used in Latin America and the Caribbean, compared to a traditionally used method (a food insecurity scale) for establishing food security at the individual and population level. The performance of the household food security scale (EPSA) was evaluated during 2007-2008 and compared to that of a food insecurity (FI) scale based on the energy usually consumed. Two hundred and eleven households participated in the study. The person responsible for preparing food in the home answered the EPSA questionnaire. Another household member filled in a form recording the household's food consumption over the previous twenty-four hours (R24H), on two different occasions. Validity was assessed by comparing food insecurity determined from the R24H against the food security supposed from the EPSA questionnaire. Food insecurity was 48.8% by R24H and 19.4% by the EPSA. The EPSA had 16.5% sensitivity and 77.8% specificity. Agreement between the two methods according to Cohen's kappa was -0.06 (95% CI: -0.20 to 0.03). Assuming equivalence of the methods, the EPSA greatly underestimated household food insecurity, and its results compared to those arising from the R24H were not very coherent. Some implications regarding related public policy are discussed.
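
    Cohen's kappa, the agreement statistic reported above, is straightforward to compute for two binary classifications of the same households (a minimal sketch):

      def cohens_kappa(a, b):
          # a, b: 0/1 classifications (e.g., insecure = 1) from two methods.
          n = len(a)
          po = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
          p_a1 = sum(a) / n
          p_b1 = sum(b) / n
          pe = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)   # chance agreement
          return (po - pe) / (1 - pe)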

  10. Comparative Analysis of Various Single-tone Frequency Estimation Techniques in High-order Instantaneous Moments Based Phase Estimation Method

    NASA Astrophysics Data System (ADS)

    Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod

    2010-04-01

    For phase estimation in digital holographic interferometry, a high-order instantaneous moments (HIM) based method was recently developed that relies on piecewise polynomial approximation of the phase and subsequent evaluation of the polynomial coefficients using the HIM operator. A crucial step in the method is mapping the polynomial coefficient estimation to single-tone frequency determination, for which various techniques exist. This paper presents a comparative analysis of the performance of the HIM-based method when using different single-tone frequency estimation techniques for phase estimation. The analysis is supplemented by simulation results.
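
    The simplest of the single-tone frequency estimators that can be plugged into such a framework is the FFT peak locator (a minimal sketch; the paper compares several, typically more refined, estimators):

      import numpy as np

      def single_tone_freq(x, fs):
          # FFT peak locator: the coarsest single-tone frequency estimator.
          X = np.fft.rfft(x * np.hanning(len(x)))  # window to reduce leakage
          k = np.argmax(np.abs(X))                 # peak bin
          return k * fs / len(x)                   # bin index -> Hz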

  11. A comparative study on effect of e-learning and instructor-led methods on nurses' documentation competency.

    PubMed

    Abbaszadeh, Abbas; Sabeghi, Hakimeh; Borhani, Fariba; Heydari, Abbas

    2011-01-01

    Accurate recording of nursing care reflects the care performed and its quality; any failure in documentation can be a cause of inadequate patient care. Therefore, improving nurses' skills in this field using effective educational methods is of high importance. Since traditional teaching methods are not well suited to communities with rapid knowledge expansion and constant change, e-learning methods can be a viable alternative. To show the effect of e-learning methods on nurses' care reporting skills, this study compared e-learning with traditional instructor-led methods. This was a quasi-experimental study designed to compare the effect of two teaching methods (e-learning and lecture) on nursing documentation and to examine the differences in documentation competency between nurses who participated in e-learning (n = 30) and nurses in a lecture group (n = 31). The results indicated no statistically significant difference between the two groups, nor any significant correlation between the groups with respect to demographic variables. However, given the benefits of e-learning over the traditional instructor-led method, and their equal effect on nurses' documentation competency, e-learning can be a qualified substitute for the traditional instructor-led method. E-learning as a student-centered method promotes nurses' competency in documentation as effectively as the lecture method, and can therefore be used to facilitate the implementation of nursing educational programs.

  12. Single-pass memory system evaluation for multiprogramming workloads

    NASA Technical Reports Server (NTRS)

    Conte, Thomas M.; Hwu, Wen-Mei W.

    1990-01-01

    Modern memory systems are composed of levels of cache memories, a virtual memory system, and a backing store. Varying more than a few design parameters and measuring the performance of such systems has traditionally been constrained by the high cost of simulation. Recently introduced models of cache performance reduce the cost of simulation, but at the expense of accuracy in performance prediction. Stack-based methods predict performance accurately using one pass over the trace for all cache sizes, but these techniques have been limited to fully-associative organizations. This paper presents a stack-based method of evaluating the performance of cache memories using a recurrence/conflict model for the miss ratio. Unlike previous work, the method predicts the performance of realistic cache designs, such as direct-mapped caches. The method also includes a new approach to the effects of multiprogramming, separating the characteristics of the individual program from those of the workload. The recurrence/conflict method is shown to be practical, general, and powerful by comparing its performance to that of a popular traditional cache simulator. The authors expect that the availability of such a tool will have a large impact on future architectural studies of memory systems.
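
    The core of any stack-based method is the one-pass computation of LRU stack (reuse) distances, from which the miss ratio of every fully-associative LRU cache size follows; a minimal sketch (the recurrence/conflict model described above extends this idea to realistic organizations such as direct-mapped caches):

      def stack_distances(trace):
          # For a cache of size C, the miss ratio is the fraction of
          # references whose stack distance is >= C.
          stack, dists = [], []
          for addr in trace:
              if addr in stack:
                  d = stack.index(addr)       # depth = reuse (stack) distance
                  stack.pop(d)
                  dists.append(d)
              else:
                  dists.append(float("inf"))  # cold miss
              stack.insert(0, addr)           # move to top (most recent)
          return dists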

  13. Quantitative analysis of the anti-noise performance of an m-sequence in an electromagnetic method

    NASA Astrophysics Data System (ADS)

    Yuan, Zhe; Zhang, Yiming; Zheng, Qijia

    2018-02-01

    An electromagnetic method with a transmitted waveform coded by an m-sequence achieves better anti-noise performance than the conventional square-wave approach. The anti-noise performance of the m-sequence varies with multiple coding parameters; hence, a quantitative analysis of the anti-noise performance of m-sequences with different coding parameters is required to optimize them. This paper proposes the concept of an identification system, in which the identified Earth impulse response is obtained by measuring the system output given the voltage response as input. A quantitative analysis of the anti-noise performance of the m-sequence is achieved by analyzing the amplitude-frequency response of the corresponding identification system. The effects of the coding parameters on the anti-noise performance are summarized by numerical simulation, and their optimization is further discussed in our conclusions; the validity of the conclusions is verified by field experiment. The quantitative analysis method proposed in this paper provides new insight into the anti-noise mechanism of the m-sequence, and could be used to evaluate the anti-noise performance of artificial sources in other time-domain exploration methods, such as the seismic method.
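
    An m-sequence itself is generated by a maximal-length linear feedback shift register; a minimal sketch for a degree-n register (the tap positions in the comment are one known primitive choice, not necessarily the paper's):

      def m_sequence(taps, n_bits, seed=1):
          # Fibonacci LFSR. Example: taps=(5, 3), n_bits=5 corresponds to the
          # primitive polynomial x^5 + x^3 + 1 and gives period 2^5 - 1 = 31.
          state = seed
          out = []
          for _ in range(2 ** n_bits - 1):
              out.append(2 * (state & 1) - 1)       # map {0,1} -> {-1,+1}
              fb = 0
              for t in taps:
                  fb ^= (state >> (t - 1)) & 1      # XOR of the tapped bits
              state = (state >> 1) | (fb << (n_bits - 1))
          return out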

  14. Communicating Patient Status: Comparison of Teaching Strategies in Prelicensure Nursing Education.

    PubMed

    Lanz, Amelia S; Wood, Felecia G

    Research indicates that nurses lack adequate preparation for reporting patient status. This study compared 2 instructional methods focused on patient status reporting in the clinical setting using a randomized posttest-only comparison group design. Reporting performance using a standardized communication framework and student perceptions of satisfaction and confidence with learning were measured in a simulated event that followed the instruction. Between the instructional methods, there was no statistical difference in student reporting performance or perceptions of learning. Performance evaluations provided helpful insights for the nurse educator.

  15. Comparison between measured turbine stage performance and the predicted performance using quasi-3D flow and boundary layer analyses

    NASA Technical Reports Server (NTRS)

    Boyle, R. J.; Haas, J. E.; Katsanis, T.

    1984-01-01

    A method for calculating turbine stage performance is described. The usefulness of the method is demonstrated by comparing measured and predicted efficiencies for nine different stages. Comparisons are made over a range of turbine pressure ratios and rotor speeds. A quasi-3D flow analysis is used to account for complex passage geometries. Boundary layer analyses are done to account for losses due to friction. Empirical loss models are used to account for incidence, secondary flow, disc windage, and clearance losses.

  16. Assessment of gene order computing methods for Alzheimer's disease

    PubMed Central

    2013-01-01

    Background Computational genomics of Alzheimer disease (AD), the most common form of senile dementia, is a nascent field in AD research. The field includes AD gene clustering by computing gene order, which generates higher-quality gene clustering patterns than most other clustering methods. However, only a few gene order computing methods are available, such as the Genetic Algorithm (GA) and Ant Colony Optimization (ACO), and their performance in gene order computation using AD microarray data is not known. We thus set forth to evaluate the performance of current gene order computing methods with different distance formulas, and to identify additional features associated with gene order computation. Methods Using different distance formulas - Pearson distance, Euclidean distance, and squared Euclidean distance - and other conditions, gene orders were calculated by the ACO and GA (including standard GA and improved GA) methods, respectively. The qualities of the gene orders were compared, and new features of the calculated gene orders were identified. Results Compared to the GA methods tested in this study, ACO fits the AD microarray data best when calculating gene order. In addition, the following features were revealed: different distance formulas generated gene orders of different quality, and the commonly used Pearson distance was not the best distance formula for AD microarray data with either the GA or ACO method. Conclusion Compared with Pearson distance and Euclidean distance, the squared Euclidean distance generated the best-quality gene orders computed by the GA and ACO methods. PMID:23369541

  17. Comparison of methods for the detection of gravitational waves from unknown neutron stars

    NASA Astrophysics Data System (ADS)

    Walsh, S.; Pitkin, M.; Oliver, M.; D'Antonio, S.; Dergachev, V.; Królak, A.; Astone, P.; Bejger, M.; Di Giovanni, M.; Dorosh, O.; Frasca, S.; Leaci, P.; Mastrogiovanni, S.; Miller, A.; Palomba, C.; Papa, M. A.; Piccinni, O. J.; Riles, K.; Sauter, O.; Sintes, A. M.

    2016-12-01

    Rapidly rotating neutron stars are promising sources of continuous gravitational wave radiation for the LIGO and Virgo interferometers. The majority of neutron stars in our galaxy have not been identified with electromagnetic observations. All-sky searches for isolated neutron stars offer the potential to detect gravitational waves from these unidentified sources. The parameter space of these blind all-sky searches, which also cover a large range of frequencies and frequency derivatives, presents a significant computational challenge. Different methods have been designed to perform these searches within acceptable computational limits. Here we describe the first benchmark in a project to compare the search methods currently available for the detection of unknown isolated neutron stars. The five methods compared here are individually referred to as the PowerFlux, sky Hough, frequency Hough, Einstein@Home, and time-domain F-statistic methods. We employ a mock data challenge to compare the ability of each search method to recover signals simulated assuming a standard signal model. We find similar performance among the four quick-look search methods, while the more computationally intensive search method, Einstein@Home, achieves up to a factor of two higher sensitivity. We find that the absence of a second frequency derivative in the search parameter space does not degrade search sensitivity for signals with physically plausible second frequency derivatives. We also report on the parameter estimation accuracy of each search method, and the stability of the sensitivity in frequency and frequency derivative and in the presence of detector noise.

  18. Comparison of Submental Blood Collection with the Retroorbital and Submandibular Methods in Mice (Mus musculus)

    PubMed Central

    Regan, Rainy D; Fenyk-Melody, Judy E; Tran, Sam M; Chen, Guang; Stocking, Kim L

    2016-01-01

    Nonterminal collection of blood samples of sufficient volume and quality for research is complicated in mice due to their small size and anatomy. Large (>100 μL) nonterminal volumes of unhemolyzed and unclotted blood are currently collected from the retroorbital sinus or the submandibular plexus. We developed a third method, submental blood collection, which is similar in execution to the submandibular method but with minor changes in animal restraint and collection location. Compared with the other techniques, submental collection is easier to perform due to the direct visibility of the target vessels, which are located in a sparsely furred region. Compared with the submandibular method, the submental method did not differ regarding weight change or clotting score but significantly decreased hemolysis and increased the overall number of high-quality samples. The submental method was performed with smaller lancets for the majority of the bleeds, yet resulted in fewer repeat collection attempts, fewer insufficient samples, and less extraneous blood loss, and was qualitatively less traumatic. Compared with the retroorbital technique, the submental method was similar regarding weight change but decreased hemolysis and clotting and increased the overall number of high-quality samples; however, the retroorbital method resulted in significantly fewer incidents of insufficient sample collection. Extraneous blood loss was roughly equivalent between the submental and retroorbital methods. We conclude that the submental method is an acceptable venipuncture technique for obtaining large, nonterminal volumes of blood from mice. PMID:27657712

  19. Comparative study of some robust statistical methods: weighted, parametric, and nonparametric linear regression of HPLC convoluted peak responses using internal standard method in drug bioavailability studies.

    PubMed

    Korany, Mohamed A; Maher, Hadir M; Galal, Shereen M; Ragab, Marwa A A

    2013-05-01

    This manuscript discusses the application of, and the comparison between, three statistical regression methods for handling data: parametric, nonparametric, and weighted regression (WR). The data were obtained from different chemometric methods applied to high-performance liquid chromatography response data using the internal standard method, performed on the model drug acyclovir, which was analyzed in human plasma with ganciclovir as internal standard. An in vivo study was also performed. Derivative treatment of the chromatographic response ratio data was followed by convolution of the resulting derivative curves using 8-point sin x_i polynomials (discrete Fourier functions). This work studies and compares the application of the WR method and Theil's method, a nonparametric regression (NPR) method, with the least squares parametric regression (LSPR) method, which is considered the de facto standard for regression. When the assumption of homoscedasticity is not met for analytical data, a simple and effective way to counteract the large influence of high concentrations on the fitted regression line is to use the WR method. WR was found to be superior to LSPR, as the former assumes that the y-direction error in the calibration curve increases as x increases. Theil's NPR method was also found to be superior to LSPR, as the former assumes that errors can occur in both the x- and y-directions and may not be normally distributed. Most of the results showed a significant improvement in precision and accuracy on applying the WR and NPR methods relative to LSPR.
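
    The contrast between the regression approaches can be sketched as follows: Theil's estimator takes the median of pairwise slopes, while WR down-weights high-concentration points through a weight vector such as 1/x^2 (a minimal sketch; the particular weighting scheme is an assumption for illustration):

      import numpy as np
      from itertools import combinations

      def theil_regression(x, y):
          # Theil's nonparametric estimator: median of all pairwise slopes,
          # robust to outliers and to non-normal errors in x and y.
          x, y = np.asarray(x, float), np.asarray(y, float)
          slopes = [(y[j] - y[i]) / (x[j] - x[i])
                    for i, j in combinations(range(len(x)), 2) if x[j] != x[i]]
          b = np.median(slopes)
          a = np.median(y - b * x)   # intercept: median residual offset
          return a, b

      def weighted_regression(x, y, w):
          # Weighted least squares with weights w (e.g., 1/x**2 when the
          # y-direction error grows with concentration).
          x, y, w = map(np.asarray, (x, y, w))
          W = np.diag(w)
          X = np.column_stack([np.ones_like(x), x])
          a, b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
          return a, b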

  20. Segmentation of MR images via discriminative dictionary learning and sparse coding: application to hippocampus labeling.

    PubMed

    Tong, Tong; Wolz, Robin; Coupé, Pierrick; Hajnal, Joseph V; Rueckert, Daniel

    2013-08-01

    We propose a novel method for the automatic segmentation of brain MRI images by using discriminative dictionary learning and sparse coding techniques. In the proposed method, dictionaries and classifiers are learned simultaneously from a set of brain atlases, which can then be used for the reconstruction and segmentation of an unseen target image. The proposed segmentation strategy is based on image reconstruction, which is in contrast to most existing atlas-based labeling approaches that rely on comparing image similarities between atlases and target images. In addition, we propose a Fixed Discriminative Dictionary Learning for Segmentation (F-DDLS) strategy, which can learn dictionaries offline and perform segmentations online, enabling a significant speed-up in the segmentation stage. The proposed method has been evaluated for the hippocampus segmentation of 80 healthy ICBM subjects and 202 ADNI images. The robustness of the proposed method, especially of our F-DDLS strategy, was validated by training and testing on different subject groups in the ADNI database. The influence of different parameters was studied and the performance of the proposed method was also compared with that of the nonlocal patch-based approach. The proposed method achieved a median Dice coefficient of 0.879 on 202 ADNI images and 0.890 on 80 ICBM subjects, which is competitive compared with state-of-the-art methods. Copyright © 2013 Elsevier Inc. All rights reserved.
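
    The Dice coefficient used as the evaluation metric above is twice the overlap divided by the total size of the two label sets (a minimal sketch):

      import numpy as np

      def dice(seg, ref):
          # 1.0 = perfect overlap between segmentation and reference labels.
          seg, ref = seg.astype(bool), ref.astype(bool)
          return 2 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())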

  1. Wavelet-Based Artifact Identification and Separation Technique for EEG Signals during Galvanic Vestibular Stimulation

    PubMed Central

    Adib, Mani; Cretu, Edmond

    2013-01-01

    We present a new method for removing artifacts from electroencephalography (EEG) records during Galvanic Vestibular Stimulation (GVS). The main challenge in exploiting GVS is to understand how the stimulus acts as an input to the brain. We used EEG to monitor the brain and elicit the GVS reflexes. However, the GVS current distribution throughout the scalp generates an artifact on the EEG signals, which must be eliminated before the EEG recorded during GVS can be analyzed. We propose a novel method to estimate the contribution of the GVS current to the EEG signals at each electrode by combining time-series regression methods with wavelet decomposition methods. We use the wavelet transform to project the recorded EEG signal into various frequency bands and then estimate the GVS current distribution in each band. The proposed method was optimized using simulated signals, and its performance was compared to well-accepted artifact removal methods such as ICA-based methods and adaptive filters. The results show that the proposed method removes GVS artifacts better than the others: it achieved a higher signal-to-artifact ratio of −1.625 dB, outperforming ICA-based methods, regression methods, and adaptive filters. PMID:23956786
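
    A hedged sketch of the band-wise regression idea, assuming PyWavelets-style tooling: both the EEG record and the GVS current are decomposed into wavelet bands, the GVS contribution is estimated per band by least-squares regression, and the cleaned bands are reconstructed. The wavelet, decomposition level, and single-coefficient regression are our simplifications, not the paper's exact procedure:

      import numpy as np
      import pywt

      def remove_gvs_artifact(eeg, gvs, wavelet="db4", level=5):
          eeg_c = pywt.wavedec(eeg, wavelet, level=level)
          gvs_c = pywt.wavedec(gvs, wavelet, level=level)
          clean = []
          for e, g in zip(eeg_c, gvs_c):
              beta = np.dot(g, e) / np.dot(g, g)   # per-band regression coefficient
              clean.append(e - beta * g)           # subtract estimated GVS part
          return pywt.waverec(clean, wavelet)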

  2. The Stock Performance of C. Everett Koop Award Winners Compared With the Standard & Poor's 500 Index

    PubMed Central

    Goetzel, Ron Z.; Fabius, Raymond; Fabius, Dan; Roemer, Enid C.; Thornton, Nicole; Kelly, Rebecca K.; Pelletier, Kenneth R.

    2016-01-01

    Objective: To explore the link between companies investing in the health and well-being programs of their employees and stock market performance. Methods: Stock performance of C. Everett Koop National Health Award winners (n = 26) was measured over time and compared with the average performance of companies comprising the Standard and Poor's (S&P) 500 Index. Results: The Koop Award portfolio outperformed the S&P 500 Index. In the 14-year period tracked (2000–2014), Koop Award winners’ stock values appreciated by 325% compared with the market average appreciation of 105%. Conclusions: This study supports prior and ongoing research demonstrating a higher market valuation—an affirmation of business success by Wall Street investors—of socially responsible companies that invest in the health and well-being of their workers when compared with other publicly traded firms. PMID:26716843

  3. Re-identification of persons in multi-camera surveillance under varying viewpoints and illumination

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Borsboom, Sander; den Hollander, Richard J. M.; Landsmeer, Sander H.; Worring, Marcel

    2012-06-01

    The capability to track individuals across CCTV cameras is important for surveillance and forensics alike. However, it is laborious to do over multiple cameras, so an automated system is desirable. In the literature, several methods have been proposed, but their robustness against varying viewpoints and illumination is limited, and hence their performance in realistic settings is also limited. In this paper, we present a novel method for the automatic re-identification of persons in video from surveillance cameras in a realistic setting. The method is computationally efficient, robust to a wide variety of viewpoints and illumination, simple to implement, and requires no training. We compare the performance of our method to several state-of-the-art methods on a publicly available dataset that contains the variety of viewpoints and illumination needed for benchmarking. The results indicate that our method shows good performance and enables a human operator to track persons five times faster.

  4. Intubation Methods by Novice Intubators in a Manikin Model

    PubMed Central

    O'Carroll, Darragh C; Aratani, Ashley K; Lee, Dane C; Lau, Christopher A; Morton, Paul N; Yamamoto, Loren G; Berg, Benjamin W

    2013-01-01

    Tracheal intubation is an important yet difficult skill to learn, with many possible methods and techniques. Direct laryngoscopy is the standard method of tracheal intubation, but several instruments have been shown to be less difficult and have better performance characteristics than the traditional direct method. We compared 4 different intubation methods performed by novice intubators on manikins: conventional direct laryngoscopy, video laryngoscopy, Airtraq® laryngoscopy, and fiberoptic laryngoscopy. In addition, we attempted to find a correlation between playing videogames and intubation times in novice intubators. Video laryngoscopy had the best results for both our normal and difficult airway (cervical spine immobilization) manikin scenarios. When video was compared to direct in the normal airway scenario, it had a significantly higher success rate (100% vs 83%, P=.02) and shorter intubation times (29.1±27.4 sec vs 45.9±39.5 sec, P=.03). In the difficult airway scenario, video laryngoscopy maintained a significantly higher success rate (91% vs 71%, P=.04) and likelihood of success (3.2±1.0 95%CI [2.9–3.5] vs 2.4±0.9 95%CI [2.1–2.7]) when compared to direct laryngoscopy. Participants also reported significantly higher rates of self-confidence (3.5±0.6 95%CI [3.3–3.7]) and ease of use (1.5±0.7 95%CI [1.3–1.8]) with video laryngoscopy compared to all other methods. We found no correlation between videogame playing and intubation performance. PMID:24167768

  5. Objectification of perceptual image quality for mobile video

    NASA Astrophysics Data System (ADS)

    Lee, Seon-Oh; Sim, Dong-Gyu

    2011-06-01

    This paper presents an objective video quality evaluation method for quantifying the subjective quality of digital mobile video. The proposed method aims to objectify the subjective quality by extracting edgeness and blockiness parameters. To evaluate the performance of the proposed algorithms, we carried out subjective video quality tests with the double-stimulus continuous quality scale method and obtained differential mean opinion score values for 120 mobile video clips. We then compared the performance of the proposed methods with that of existing methods in terms of the differential mean opinion score with 120 mobile video clips. Experimental results showed that the proposed methods were approximately 10% better than the edge peak signal-to-noise ratio of the J.247 method in terms of the Pearson correlation.

  6. Using the Ridge Regression Procedures to Estimate the Multiple Linear Regression Coefficients

    NASA Astrophysics Data System (ADS)

    Gorgees, Hazim Mansoor; Mahdi, Fatimah Assim

    2018-05-01

    This article is concerned with comparing the performance of different types of ordinary ridge regression estimators that have been proposed to estimate the regression parameters when near-exact linear relationships among the explanatory variables are present. For this situation we employ data obtained from the tagi gas filling company during the period 2008-2010. The main result is that the method based on the condition number performs better than the other stated methods, since it has a smaller mean square error (MSE).
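
    A minimal sketch of ordinary ridge regression under multicollinearity is given below, assuming a condition-number-based rule for the ridge parameter k: k is increased until the condition number of X'X + kI falls below a target. The grid search and target value are illustrative assumptions, not the paper's exact rule.

    ```python
    import numpy as np

    def ridge_coefficients(X, y, k):
        """beta_hat(k) = (X'X + kI)^(-1) X'y."""
        p = X.shape[1]
        return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

    def condition_number_k(X, target=100.0):
        """Smallest k on a coarse grid with cond(X'X + kI) <= target."""
        eig = np.linalg.eigvalsh(X.T @ X)
        for k in np.logspace(-6, 3, 200):
            if (eig.max() + k) / (eig.min() + k) <= target:
                return k
        return 1e3

    # Near-exact collinearity between the two explanatory variables:
    rng = np.random.default_rng(0)
    x1 = rng.normal(size=100)
    X = np.column_stack([x1, x1 + 1e-3 * rng.normal(size=100)])
    y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=100)
    k = condition_number_k(X)
    print(k, ridge_coefficients(X, y, k))
    ```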

  7. Performance analysis of unsupervised optimal fuzzy clustering algorithm for MRI brain tumor segmentation.

    PubMed

    Blessy, S A Praylin Selva; Sulochana, C Helen

    2015-01-01

    Segmentation of brain tumors from Magnetic Resonance Imaging (MRI) becomes very complicated due to the structural complexity of the human brain and the presence of intensity inhomogeneities. The aim is to propose a method that effectively segments brain tumors from MR images and to evaluate the performance of the unsupervised optimal fuzzy clustering (UOFC) algorithm for this task. Segmentation is done by preprocessing the MR image to correct intensity inhomogeneities, followed by feature extraction, feature fusion, and clustering. Different validation measures are used to evaluate the performance of the proposed method using different clustering algorithms. The proposed method using the UOFC algorithm produces high sensitivity (96%) and low specificity (4%) compared to other clustering methods. Validation results clearly show that the proposed method with the UOFC algorithm effectively segments brain tumors from MR images.
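
    The sketch below illustrates the fuzzy-clustering step of such a pipeline using standard fuzzy c-means (FCM) as a stand-in; UOFC itself adds unsupervised selection of the cluster count and an optimized distance measure, which are not reproduced here. All parameter choices are assumptions.

    ```python
    import numpy as np

    def fcm(features, c=4, m=2.0, iters=100, tol=1e-5, seed=0):
        """Cluster voxel feature vectors (n, d) into c fuzzy clusters."""
        rng = np.random.default_rng(seed)
        n = features.shape[0]
        u = rng.random((n, c))
        u /= u.sum(axis=1, keepdims=True)          # fuzzy memberships
        for _ in range(iters):
            um = u ** m
            centers = (um.T @ features) / um.sum(axis=0)[:, None]
            d = np.linalg.norm(features[:, None, :] - centers[None], axis=2)
            d = np.maximum(d, 1e-12)
            u_new = d ** (-2.0 / (m - 1))
            u_new /= u_new.sum(axis=1, keepdims=True)
            if np.abs(u_new - u).max() < tol:
                break
            u = u_new
        return u, centers

    # labels = fcm(voxel_features)[0].argmax(axis=1)  # hard segmentation map
    ```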

  8. Comparative Study of Fault Diagnostic Methods in Voltage Source Inverter Fed Three Phase Induction Motor Drive

    NASA Astrophysics Data System (ADS)

    Dhumale, R. B.; Lokhande, S. D.

    2017-05-01

    The three-phase Pulse Width Modulation inverter plays a vital role in industrial applications. The performance of the inverter degrades as several types of faults take place in it. The most widely used switching devices in power electronics are Insulated Gate Bipolar Transistors (IGBTs) and Metal Oxide Semiconductor Field Effect Transistors (MOSFETs). IGBT faults are broadly classified as base or collector open-circuit faults, misfiring faults, and short-circuit faults. To improve the reliability and performance of the inverter, knowledge of the fault mode is extremely important. This paper presents a comparative study of IGBT fault diagnosis. An experimental setup is implemented for data acquisition under various faulty and healthy conditions. Recent methods are executed using MATLAB-Simulink and compared using key parameters such as average accuracy, fault detection time, implementation effort, threshold dependency, detection parameter, robustness against noise, and load dependency.
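
    One classic family among the compared diagnostics analyzes phase-current averages: an open-circuit IGBT leaves a DC offset in the faulty phase. The sketch below is a generic normalized-average-current detector, not any specific method from the paper; the threshold and sign convention are illustrative assumptions.

    ```python
    import numpy as np

    def open_circuit_fault(i_a, i_b, i_c, threshold=0.1):
        """Flag phases whose normalized mean current exceeds the threshold."""
        faults = {}
        for name, i in (("A", i_a), ("B", i_b), ("C", i_c)):
            i = np.asarray(i, float)
            scale = np.mean(np.abs(i)) + 1e-12   # normalize out the load level
            ratio = np.mean(i) / scale           # ~0 for a healthy phase
            if abs(ratio) > threshold:
                # A missing positive half-wave leaves a negative mean (upper
                # switch open) and vice versa, under the assumed convention.
                faults[name] = "upper switch open" if ratio < 0 else "lower switch open"
        return faults
    ```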

  9. Analysis of the Interaction of Student Characteristics with Method in Micro-Teaching.

    ERIC Educational Resources Information Center

    Chavers, Katherine; And Others

    A study examined the comparative effects on microteaching performance of (1) eight different methods of teacher training and (2) the interaction of method with student characteristics. Subjects, 71 enrollees in an educational psychology course, were randomly assigned to eight treatment groups (including one control group). Treatments consisted of…

  10. A Comparison of Methods for Estimating Quadratic Effects in Nonlinear Structural Equation Models

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.; Weiss, Brandi A.; Hsu, Jui-Chen

    2012-01-01

    Two Monte Carlo simulations were performed to compare methods for estimating and testing hypotheses of quadratic effects in latent variable regression models. The methods considered in the current study were (a) a 2-stage moderated regression approach using latent variable scores, (b) an unconstrained product indicator approach, (c) a latent…
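
    A minimal sketch of approach (a) follows: estimate a factor score in stage one, then regress the outcome on the score and its square in stage two. The variable names and the use of a simple standardized mean composite as the factor-score proxy are assumptions for illustration.

    ```python
    import numpy as np

    def two_stage_quadratic(indicators, y):
        """Stage 1: factor-score proxy; stage 2: OLS with a squared term."""
        # Stage 1: a standardized mean composite stands in for a factor score.
        xi = indicators.mean(axis=1)
        xi = (xi - xi.mean()) / xi.std()
        # Stage 2: regress y on [1, xi, xi^2]; the last coefficient estimates
        # the quadratic effect of the latent predictor.
        X = np.column_stack([np.ones_like(xi), xi, xi ** 2])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta  # [intercept, linear, quadratic]
    ```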

  11. Intervention randomized controlled trials involving wrist and shoulder arthroscopy: a systematic review

    PubMed Central

    2014-01-01

    Background Although arthroscopy of upper extremity joints was initially a diagnostic tool, it is increasingly used for therapeutic interventions. Randomized controlled trials (RCTs) are considered the gold standard for assessing treatment efficacy. We aimed to review the literature for intervention RCTs involving wrist and shoulder arthroscopy. Methods We performed a systematic review for RCTs in which at least one arm was an intervention performed through wrist or shoulder arthroscopy. PubMed and Cochrane Library databases were searched up to December 2012. Two researchers reviewed each article and recorded the condition treated, randomization method, number of randomized participants, time of randomization, outcome measures, blinding, and description of dropouts and withdrawals. We used the modified Jadad scale, which considers the randomization method, blinding, and dropouts/withdrawals; scores range from 0 (lowest quality) to 5 (highest quality). The scores for the wrist and shoulder RCTs were compared with the Mann–Whitney test. Results The first references to both wrist and shoulder arthroscopy appeared in the late 1970s. The search found 4 wrist arthroscopy intervention RCTs (Kienböck’s disease, dorsal wrist ganglia, volar wrist ganglia, and distal radius fracture; the first 3 compared arthroscopic with open surgery). The median number of participants was 45. The search found 50 shoulder arthroscopy intervention RCTs (rotator cuff tears 22, instability 14, impingement 9, and other conditions 5). Of these, 31 compared different arthroscopic treatments, 12 compared arthroscopic with open treatment, and 7 compared arthroscopic with nonoperative treatment. The median number of participants was 60. The median modified Jadad score for the wrist RCTs was 0.5 (range 0–1) and for the shoulder RCTs 3.0 (range 0–5) (p = 0.012). Conclusion Despite the increasing use of wrist arthroscopy in the treatment of various wrist disorders, the efficacy of arthroscopically performed wrist interventions has been studied in only 4 randomized studies, compared to 50 randomized studies of significantly higher quality assessing interventions performed through shoulder arthroscopy. PMID:25059881
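
    The quality-score comparison reduces to a two-sample Mann–Whitney U test, sketched below. The wrist scores reconstruct the reported median 0.5 and range 0–1; the shoulder scores are placeholders that only match the reported median of 3.0, since the individual values are not given in the abstract.

    ```python
    from scipy.stats import mannwhitneyu

    wrist_scores = [0, 0, 1, 1]        # median 0.5, range 0-1 (as reported)
    shoulder_scores = [3] * 50         # placeholder values, median 3.0

    u_stat, p_value = mannwhitneyu(wrist_scores, shoulder_scores,
                                   alternative="two-sided")
    print(f"U = {u_stat}, p = {p_value:.4f}")
    ```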

  12. Design and Implementation of Scientific Software Components to Enable Multiscale Modeling: The Effective Fragment Potential (QM/EFP) Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaenko, Alexander; Windus, Theresa L.; Sosonkina, Masha

    2012-10-19

    The design and development of scientific software components to provide an interface to the effective fragment potential (EFP) methods are reported. Multiscale modeling of physical and chemical phenomena demands the merging of software packages developed by research groups in significantly different fields. Componentization offers an efficient way to realize new high performance scientific methods by combining the best models available in different software packages without a need for package readaptation after the initial componentization is complete. The EFP method is an efficient electronic structure theory based model potential that is suitable for predictive modeling of intermolecular interactions in large molecular systems, such as liquids, proteins, atmospheric aerosols, and nanoparticles, with an accuracy that is comparable to that of correlated ab initio methods. The developed components make the EFP functionality accessible for any scientific component-aware software package. The performance of the component is demonstrated on a protein interaction model, and its accuracy is compared with results obtained with coupled cluster methods.

  13. Treating nailbiting: a comparative analysis of mild aversion and competing response therapies.

    PubMed

    Silber, K P; Haynes, C E

    1992-01-01

    This study compared two methods of treating nail-biting. One method involved the use of a mild aversive stimulus in which subjects painted a bitter substance on their nails, and the other required subjects to perform a competing response whenever they had the urge to bite or found themselves biting their nails. Both methods included self-monitoring of the behaviour, and a third group of subjects performed self-monitoring alone as a control condition. The study lasted four weeks. Twenty-one subjects, seven per group, participated. Both methods resulted in significant improvements in nail length, with the competing response method showing the most beneficial effect. There was no significant improvement for the control group. The competing response condition also yielded significant improvements along other dimensions, such as degree of skin damage and subjects' own ratings of their control over their habit. These were not seen for the other two conditions. The benefits of this abridged version of Azrin and Nunn's (Behaviour Research and Therapy, 11, 619-628, 1973) habit reversal method, in terms of treatment success, use of therapist time, and client satisfaction, are discussed.

  14. Achilles tenotomy as an office procedure and current practising trends among New Zealand orthopaedic surgeons.

    PubMed

    Agius, Lewis; Wickham, Angus; Walker, Cameron; Knudsen, Joshua

    2018-05-18

    Percutaneous Achilles tenotomy (PAT) is performed during the final phase of casting with the Ponseti method. Several settings have been proposed as venues for this procedure; however, it is increasingly being performed in theatre under a general anaesthetic (GA). General anaesthesia is expensive and not without risks. The purpose of the present study was to compare results of outpatient releases to theatre releases, and to assess current practising trends among orthopaedic surgeons. We performed a retrospective comparison of patients with idiopathic clubfoot managed by the Ponseti method who had Achilles tenotomy performed in an outpatient clinic or in theatre. Surveys were sent to all POSNZ members to determine current practising trends in New Zealand. Parental satisfaction surveys were performed. Comparative cost analysis was performed using hospital billing information. The current study includes 64 idiopathic congenital clubfeet (19 bilateral cases). PAT was performed on 26 clubfeet under local anaesthetic in an outpatient setting, and 33 clubfeet under GA in a theatre setting. There was no significant difference in post-operative complications or recurrence (p=0.67). Those in the theatre group were exposed to a greater number of general anaesthetics before the age of four. Among practising New Zealand paediatric orthopaedic surgeons, 77.78% perform PAT in theatre under general anaesthesia, while only 22.22% perform it in an outpatient clinic. The main barriers were concerns regarding pain control, incomplete release, distress to the family, and sterility. Parental satisfaction surveys found pain management to be excellent. Financial data was analysed and indicative costs were $6,061 NZD per procedure in theatre, compared to $378 NZD per procedure in clinic. PAT performed in a clinic setting is both safe and efficacious, with results comparable to those achieved in theatre. There was no difference in post-operative complications or recurrence. Parental satisfaction with the procedure is excellent. There are significant financial advantages. Based on this data, our institution now performs all releases in an outpatient setting.

  15. Training Feedforward Neural Networks Using Symbiotic Organisms Search Algorithm

    PubMed Central

    Wu, Haizhou; Luo, Qifang

    2016-01-01

    Symbiotic organisms search (SOS) is a new robust and powerful metaheuristic algorithm, which simulates the symbiotic interaction strategies adopted by organisms to survive and propagate in the ecosystem. In the supervised learning area, it is a challenging task to present a satisfactory and efficient training algorithm for feedforward neural networks (FNNs). In this paper, SOS is employed as a new method for training FNNs. To investigate the performance of the aforementioned method, eight different datasets selected from the UCI machine learning repository are employed for experiments, and the results are compared across seven metaheuristic algorithms. The results show that SOS performs better than the other algorithms for training FNNs in terms of convergence speed. It is also shown that an FNN trained by SOS has better accuracy than most of the compared algorithms. PMID:28105044
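
    A compact sketch of SOS minimizing the training loss of a small feedforward network is given below. The three phases (mutualism, commensalism, parasitism) follow the standard algorithm; the network architecture, bounds, and parameter choices are illustrative assumptions.

    ```python
    import numpy as np

    def fnn_loss(w, X, y, hidden=5):
        """MSE of a one-hidden-layer tanh network with flattened weights w."""
        d = X.shape[1]
        W1 = w[:d * hidden].reshape(d, hidden)
        b1 = w[d * hidden:(d + 1) * hidden]
        W2 = w[(d + 1) * hidden:(d + 2) * hidden]
        b2 = w[-1]
        pred = np.tanh(X @ W1 + b1) @ W2 + b2
        return float(np.mean((pred - y) ** 2))

    def sos(loss, dim, pop=30, iters=200, lo=-2.0, hi=2.0, seed=0):
        rng = np.random.default_rng(seed)
        X = rng.uniform(lo, hi, (pop, dim))
        f = np.array([loss(x) for x in X])

        def try_replace(idx, cand):
            cand = np.clip(cand, lo, hi)
            fc = loss(cand)
            if fc < f[idx]:
                X[idx], f[idx] = cand, fc

        for _ in range(iters):
            for i in range(pop):
                best = X[f.argmin()]
                j = rng.choice([k for k in range(pop) if k != i])
                # Mutualism: i and j both move toward the current best.
                mutual = (X[i] + X[j]) / 2
                bf1, bf2 = rng.integers(1, 3, size=2)  # benefit factors in {1, 2}
                try_replace(i, X[i] + rng.random(dim) * (best - mutual * bf1))
                try_replace(j, X[j] + rng.random(dim) * (best - mutual * bf2))
                # Commensalism: i benefits from j; j is unaffected.
                try_replace(i, X[i] + rng.uniform(-1, 1, dim) * (best - X[j]))
                # Parasitism: a mutated copy of i competes with j.
                parasite = X[i].copy()
                mask = rng.random(dim) < 0.3
                parasite[mask] = rng.uniform(lo, hi, mask.sum())
                try_replace(j, parasite)
        return X[f.argmin()], f.min()

    # Usage (hypothetical data, d=4 inputs): dim = (d + 2) * hidden + 1.
    # w_best, mse = sos(lambda w: fnn_loss(w, X_train, y_train), dim=(4 + 2) * 5 + 1)
    ```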

  16. Problems With Risk Reclassification Methods for Evaluating Prediction Models

    PubMed Central

    Pepe, Margaret S.

    2011-01-01

    For comparing the performance of a baseline risk prediction model with one that includes an additional predictor, a risk reclassification analysis strategy has been proposed. The first step is to cross-classify risks calculated according to the 2 models for all study subjects. Summary measures including the percentage of reclassification and the percentage of correct reclassification are calculated, along with 2 reclassification calibration statistics. The author shows that interpretations of the proposed summary measures and P values are problematic. The author's recommendation is to display the reclassification table, because it shows interesting information, but to use alternative methods for summarizing and comparing model performance. The Net Reclassification Index has been suggested as one alternative method. The author argues for reporting components of the Net Reclassification Index because they are more clinically relevant than is the single numerical summary measure. PMID:21555714
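
    The components referred to above can be computed directly, as in the sketch below, which uses the category-free (continuous) form for brevity; the original proposal defines up/down movement across risk categories. Inputs are risk estimates from the baseline and expanded models plus the observed binary outcomes; variable names are assumptions.

    ```python
    import numpy as np

    def nri_components(risk_old, risk_new, event):
        """Event and non-event NRI, reported separately rather than summed."""
        risk_old = np.asarray(risk_old, float)
        risk_new = np.asarray(risk_new, float)
        event = np.asarray(event, bool)
        up = risk_new > risk_old
        down = risk_new < risk_old
        nri_events = up[event].mean() - down[event].mean()
        nri_nonevents = down[~event].mean() - up[~event].mean()
        return nri_events, nri_nonevents
    ```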

  17. Analysis of visual quality improvements provided by known tools for HDR content

    NASA Astrophysics Data System (ADS)

    Kim, Jaehwan; Alshina, Elena; Lee, JongSeok; Park, Youngo; Choi, Kwang Pyo

    2016-09-01

    In this paper, the visual quality of different solutions for high dynamic range (HDR) compression is analyzed using MPEG test contents. We also simulate a method for efficient HDR compression that is based on the statistical properties of the signal. The method is compliant with the HEVC specification and is also easily compatible with alternative methods that might require HEVC specification changes. It was subjectively tested on commercial TVs and compared with alternative solutions for HDR coding. Subjective visual quality tests were performed on an SUHD TV (Samsung JS9500) with maximum luminance up to 1000 nits. The solution based on the statistical properties of the signal shows improvements not only in objective performance but also in visual quality compared to other HDR solutions, while remaining compatible with the HEVC specification.

  18. Extracting BI-RADS Features from Portuguese Clinical Texts.

    PubMed

    Nassif, Houssam; Cunha, Filipe; Moreira, Inês C; Cruz-Correia, Ricardo; Sousa, Eliana; Page, David; Burnside, Elizabeth; Dutra, Inês

    2012-01-01

    In this work we build the first BI-RADS parser for Portuguese free texts, modeled after existing approaches to extract BI-RADS features from English medical records. Our concept finder uses a semantic grammar based on the BI-RADS lexicon and on iteratively transferred expert knowledge. We compare the performance of our algorithm to manual annotation by a specialist in mammography. Our results show that our parser's performance is comparable to the manual method.
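
    An illustrative sketch of the concept-finding step is shown below: a tiny dictionary/regex layer mapping Portuguese report phrases onto BI-RADS features. The lexicon entries and feature names are assumptions for demonstration; the actual parser uses a full semantic grammar rather than flat patterns.

    ```python
    import re

    # Hypothetical lexicon: each BI-RADS feature maps regex patterns to values.
    LEXICON = {
        "mass_shape": {r"\bn[óo]dulo\s+oval\b": "oval",
                       r"\bn[óo]dulo\s+irregular\b": "irregular"},
        "calcification": {r"\bmicrocalcifica[çc][õo]es\b": "present"},
    }

    def extract_features(report):
        """Return the BI-RADS features whose patterns match the report."""
        found = {}
        for feature, patterns in LEXICON.items():
            for pattern, value in patterns.items():
                if re.search(pattern, report, flags=re.IGNORECASE):
                    found[feature] = value
        return found

    print(extract_features("Nódulo oval com microcalcificações agrupadas."))
    ```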

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    This paper is a composite of two papers dealing with automation and computerized control of underground mining equipment, primarily drills, haulage equipment, and tunneling machines. It compares the performance and cost benefits of conventional equipment with those of the new automated methods. The companies involved are iron ore mining companies in Scandinavia. The papers also discuss equipment using air power, water power, hydraulic power, and computer power, and compare the different drill rigs for performance and cost.

  20. An approximate theoretical method for modeling the static thrust performance of non-axisymmetric two-dimensional convergent-divergent nozzles. M.S. Thesis - George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Hunter, Craig A.

    1995-01-01

    An analytical/numerical method has been developed to predict the static thrust performance of non-axisymmetric, two-dimensional convergent-divergent exhaust nozzles. Thermodynamic nozzle performance effects due to over- and underexpansion are modeled using one-dimensional compressible flow theory. Boundary layer development and skin friction losses are calculated using an approximate integral momentum method based on the classic Kármán-Pohlhausen solution. Angularity effects are included with these two models in a computational Nozzle Performance Analysis Code, NPAC. In four different case studies, results from NPAC are compared to experimental data obtained from subscale nozzle testing to demonstrate the capabilities and limitations of the NPAC method. In several cases, the NPAC prediction matched experimental gross thrust efficiency data to within 0.1 percent at the design nozzle pressure ratio (NPR), and to within 0.5 percent at off-design conditions.
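
    The one-dimensional compressible-flow core of such a method is sketched below: invert the isentropic area-Mach relation for the supersonic exit Mach number, then evaluate the exit static-to-total pressure ratio needed for the over/underexpansion correction. The value of gamma and the bisection bracket are standard assumptions; the NPAC boundary-layer and angularity models are not reproduced here.

    ```python
    import numpy as np

    GAMMA = 1.4  # assumed ratio of specific heats

    def area_ratio(m, g=GAMMA):
        """A/A* from the isentropic area-Mach relation."""
        return (1.0 / m) * ((2.0 / (g + 1)) * (1 + (g - 1) / 2 * m * m)) ** ((g + 1) / (2 * (g - 1)))

    def exit_mach(a_ratio, g=GAMMA):
        """Supersonic root of area_ratio(m) = a_ratio, by bisection."""
        lo, hi = 1.0 + 1e-9, 10.0
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if area_ratio(mid, g) < a_ratio:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Example: expansion ratio A_e/A* = 1.6
    me = exit_mach(1.6)
    pe_pt = (1 + (GAMMA - 1) / 2 * me ** 2) ** (-GAMMA / (GAMMA - 1))  # p_e/p_t
    print(f"M_e = {me:.3f}, p_e/p_t = {pe_pt:.4f}")
    ```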
