Sample records for the query "compare existing methods"

  1. Novel Ultrasound Joint Selection Methods Using a Reduced Joint Number Demonstrate Inflammatory Improvement when Compared to Existing Methods and Disease Activity Score at 28 Joints.

    PubMed

    Tan, York Kiat; Allen, John C; Lye, Weng Kit; Conaghan, Philip G; D'Agostino, Maria Antonietta; Chew, Li-Ching; Thumboo, Julian

    2016-01-01

    A pilot study testing novel ultrasound (US) joint-selection methods in rheumatoid arthritis. The responsiveness of the novel methods [individualized US (IUS) and individualized composite US (ICUS)] was compared with that of existing US methods and the Disease Activity Score at 28 joints (DAS28) for 12 patients followed for 3 months. IUS selected up to the 7 and 12 most ultrasonographically inflamed joints, while ICUS additionally incorporated clinically symptomatic joints. The standardized response means for the existing, IUS, and ICUS methods were -0.39, -1.08, and -1.11, respectively, for 7 joints; -0.49, -1.00, and -1.16, respectively, for 12 joints; and -0.94 for DAS28. The novel methods effectively demonstrate inflammatory improvement when compared with existing methods and DAS28.
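
    For orientation, a minimal sketch of the standardized response mean (SRM) statistic reported above, assuming paired per-patient inflammation scores; the joint-selection logic itself is not reproduced, and the sample scores are hypothetical.

```python
# A minimal sketch, not the authors' code: SRM = mean change / SD of change.
import numpy as np

def standardized_response_mean(baseline, followup):
    """Standardized response mean for paired scores."""
    change = np.asarray(followup, dtype=float) - np.asarray(baseline, dtype=float)
    return change.mean() / change.std(ddof=1)

# Hypothetical per-patient inflammation scores: improvement gives a negative
# SRM, and a larger magnitude indicates greater responsiveness.
print(standardized_response_mean(baseline=[6, 8, 5, 9], followup=[4, 5, 4, 6]))
```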

  2. Nonlinear least squares regression for single image scanning electron microscope signal-to-noise ratio estimation.

    PubMed

    Sim, K S; Norhisham, S

    2016-11-01

    A new method based on nonlinear least squares regression (NLLSR) is formulated to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. SNR estimation with the NLLSR method is compared with three existing methods: nearest neighbourhood, first-order interpolation, and the combination of nearest neighbourhood and first-order interpolation. Samples of SEM images with different textures, contrasts and edges were used to test the performance of the NLLSR method in estimating the SNR values of the SEM images. The NLLSR method produces better estimation accuracy than the three existing methods; according to the SNR results obtained from the experiment, it yields an SNR error of less than approximately 1% compared with the other three existing methods.

  3. Dichotomous versus semi-quantitative scoring of ultrasound joint inflammation in rheumatoid arthritis using novel individualized joint selection methods.

    PubMed

    Tan, York Kiat; Allen, John C; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Thumboo, Julian

    2017-05-01

    The aim of this study is to compare the responsiveness of two joint inflammation scoring systems, dichotomous scoring (DS) and semi-quantitative scoring (SQS), using both novel individualized ultrasound joint selection methods and existing ultrasound joint selection methods. Responsiveness, measured by standardized response means (SRMs) under the DS and SQS systems for both the novel and existing methods, was derived from the baseline and 3-month total inflammatory scores of 20 rheumatoid arthritis patients. The relative SRM gain ratios (SRM-Gains) comparing the novel to the existing methods were computed for both scoring systems. Both systems demonstrated substantial SRM-Gains (3.31 to 5.67 for the DS system and 1.82 to 3.26 for the SQS system). The SRMs using the novel methods ranged from 0.94 to 1.36 for the DS system and from 0.89 to 1.11 for the SQS system; using the existing methods, they ranged from 0.24 to 0.32 for the DS system and from 0.34 to 0.49 for the SQS system. The DS system appears to achieve high responsiveness comparable to SQS for the novel individualized ultrasound joint selection methods.

  4. THE INFLUENCE OF PHYSICAL FACTORS ON COMPARATIVE PERFORMANCE OF SAMPLING METHODS IN LARGE RIVERS

    EPA Science Inventory

    In 1999, we compared five existing benthic macroinvertebrate sampling methods used in boatable rivers. Each sampling protocol was performed at each of 60 sites distributed among four rivers in the Ohio River drainage basin. Initial comparison of methods using key macroinvertebr...

  5. Efficient option valuation of single and double barrier options

    NASA Astrophysics Data System (ADS)

    Kabaivanov, Stanimir; Milev, Mariyan; Koleva-Petkova, Dessislava; Vladev, Veselin

    2017-12-01

    In this paper we present an implementation of a pricing algorithm for single and double barrier options based on the Mellin transform with maximum entropy inversion, and assess its suitability for real-world applications. A detailed analysis of the algorithm is accompanied by a C++ implementation that is compared to existing solutions in terms of efficiency and computational power. We then compare the method with existing closed-form solutions and well-known finite-difference methods for pricing barrier options.
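
    A crude Monte Carlo sketch for intuition only; this is a stand-in, not the Mellin-transform / maximum-entropy-inversion algorithm the paper implements, and all parameter values are hypothetical.

```python
# Price a down-and-out barrier call by simulating GBM paths and zeroing any
# path that touches the barrier H before expiry. Parameters are hypothetical.
import numpy as np

def down_and_out_call_mc(S0, K, H, r, sigma, T, n_paths=100_000, n_steps=200):
    rng = np.random.default_rng(0)
    dt = T / n_steps
    S = np.full(n_paths, float(S0))
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(n_steps):
        z = rng.standard_normal(n_paths)
        S *= np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        alive &= S > H                      # knocked out once the barrier is hit
    payoff = np.where(alive, np.maximum(S - K, 0.0), 0.0)
    return np.exp(-r * T) * payoff.mean()

print(down_and_out_call_mc(S0=100, K=100, H=90, r=0.03, sigma=0.2, T=1.0))
```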

  6. Brain Network Regional Synchrony Analysis in Deafness

    PubMed Central

    Xu, Lei; Liang, Mao-Jin

    2018-01-01

    Deafness is the most common auditory disease, and its major treatment is cochlear implantation (CI). However, there is still a lack of an objective and precise indicator for evaluating the effectiveness of cochlear implantation. The goal of this EEG-based study is to effectively distinguish CI children from prelingually deafened children without cochlear implantation. The proposed method is based on functional connectivity analysis, focusing on brain network regional synchrony. Specifically, we first compute the functional connectivity between each channel pair. Then, we quantify the brain network synchrony among regions of interest (ROIs), computing both intraregional and interregional synchrony. Finally, the synchrony values are concatenated to form the feature vector for an SVM classifier. Moreover, we develop a new ROI partition method for the 128-channel EEG recording system; both the existing ROI partition method and the proposed one are used in the experiments. Compared with existing EEG signal classification methods, our proposed method achieves significant improvements, reaching 87.20% and 86.30% when the existing and the proposed ROI partition methods are used, respectively. This further demonstrates that the new ROI partition method is comparable to the existing one. PMID:29854776
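
    A hedged sketch of the pipeline this abstract describes, with Pearson correlation standing in for the unspecified channel-pair connectivity measure; the ROI assignments, channel counts, and labels below are hypothetical.

```python
# Connectivity -> ROI synchrony features -> SVM, per the abstract's pipeline.
import numpy as np
from sklearn.svm import SVC

def roi_synchrony_features(eeg, rois):
    """eeg: (channels, samples); rois: list of channel-index lists.
    Returns mean intra- and inter-regional connectivity as one vector."""
    conn = np.corrcoef(eeg)                  # channel-by-channel connectivity
    feats = []
    for i, a in enumerate(rois):
        for b in rois[i:]:                   # a == b gives intraregional synchrony
            feats.append(conn[np.ix_(a, b)].mean())
    return np.array(feats)

rng = np.random.default_rng(0)
rois = [[0, 1], [2, 3, 4], [5, 6, 7]]        # toy stand-in for 128-channel ROIs
X = np.stack([roi_synchrony_features(rng.standard_normal((8, 1000)), rois)
              for _ in range(20)])           # one feature vector per subject
y = np.r_[np.zeros(10), np.ones(10)]         # CI vs. non-CI labels
clf = SVC().fit(X, y)
```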

  7. Linnorm: improved statistical analysis for single cell RNA-seq expression data

    PubMed Central

    Yip, Shun H.; Wang, Panwen; Kocher, Jean-Pierre A.; Sham, Pak Chung

    2017-01-01

    Linnorm is a novel normalization and transformation method for the analysis of single-cell RNA sequencing (scRNA-seq) data. Linnorm is developed to remove technical noise while preserving biological variation in scRNA-seq data, so that existing statistical methods can be improved. Using real scRNA-seq data, we compared Linnorm with existing normalization methods, including NODES, SAMstrt, SCnorm, scran, DESeq and TMM. Linnorm shows advantages in speed, technical noise removal and preservation of cell heterogeneity, which can improve existing methods in the discovery of novel subtypes, pseudo-temporal ordering of cells, clustering analysis, etc. Linnorm also performs better than existing DEG analysis methods, including BASiCS, NODES, SAMstrt, Seurat and DESeq2, in false positive rate control and accuracy. PMID:28981748

  8. Lipid Adjustment for Chemical Exposures: Accounting for Concomitant Variables

    PubMed Central

    Li, Daniel; Longnecker, Matthew P.; Dunson, David B.

    2013-01-01

    Background: Some environmental chemical exposures are lipophilic and need to be adjusted by serum lipid levels before data analyses. Various strategies currently attempt to account for this problem, but all have drawbacks. To address these concerns, we propose a new method that uses Box-Cox transformations and a simple Bayesian hierarchical model to adjust for lipophilic chemical exposures.

    Methods: We compared our Box-Cox method to existing methods. We ran simulation studies in which increasing levels of lipid-adjusted chemical exposure did and did not increase the odds of having a disease, examining both single-exposure and multiple-exposure cases. We also analyzed an epidemiologic dataset that examined the effects of various chemical exposures on the risk of birth defects.

    Results: Compared with existing methods, our Box-Cox method produced unbiased estimates, good coverage, similar power, and lower type-I error rates. This was the case in both single- and multiple-exposure simulation studies. Results from analysis of the birth-defect data differed from results using existing methods.

    Conclusion: Our Box-Cox method is a novel and intuitive way to account for the lipophilic nature of certain chemical exposures. It addresses some of the problems with existing methods, is easily extendable to multiple exposures, and can be used in any analyses that involve concomitant variables. PMID:24051893
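
    The full Bayesian hierarchical model is not reproduced here; the sketch below only illustrates the Box-Cox step, with exposure values simulated as hypothetical lipophilic measurements (Box-Cox requires positive data).

```python
# Only the Box-Cox transformation step is sketched; the hierarchical model
# is not reproduced. Exposure values are simulated and must be positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
exposure = rng.lognormal(mean=1.0, sigma=0.5, size=200)  # hypothetical data
transformed, lam = stats.boxcox(exposure)   # lambda chosen by maximum likelihood
print(f"estimated Box-Cox lambda: {lam:.3f}")
```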

  9. Comparing biomarkers as principal surrogate endpoints.

    PubMed

    Huang, Ying; Gilbert, Peter B

    2011-12-01

    Recently a new definition of surrogate endpoint, the "principal surrogate," was proposed based on causal associations between treatment effects on the biomarker and on the clinical endpoint. Despite its appealing interpretation, limited research has been conducted to evaluate principal surrogates, and existing methods focus on risk models that consider a single biomarker. How to compare the principal surrogate value of biomarkers, or of general risk models that consider multiple biomarkers, remains an open research question. We propose to characterize a marker or risk model's principal surrogate value based on the distribution of the risk difference between interventions. In addition, we propose a novel summary measure (the standardized total gain) that can be used to compare markers and to assess the incremental value of a new marker. We develop a semiparametric estimated-likelihood method to estimate the joint surrogate value of multiple biomarkers. This method accommodates two-phase sampling of biomarkers and is more widely applicable than existing nonparametric methods, by incorporating continuous baseline covariates to predict the biomarker(s), and more robust than existing parametric methods, by leaving the error distribution of the markers unspecified. The methodology is illustrated using a simulated example set and a real data set in the context of HIV vaccine trials.

  10. New Internet search volume-based weighting method for integrating various environmental impacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr

    Weighting is one of the steps in life cycle impact assessment that integrates various characterized environmental impacts into a single index. Weighting factors should be based on society's preferences, yet most previous studies consider the opinions of only a few people. This research therefore proposes a new weighting method that determines the weighting factors of environmental impact categories by considering public opinion on environmental impacts, using Internet search volumes for relevant terms. To validate the new method, the weighting factors for six environmental impacts calculated by it were compared with existing weighting factors. The resulting Pearson's correlation coefficients between the new and existing weighting factors ranged from 0.8743 to 0.9889, indicating that the new method produces reasonable weighting factors. It also requires less time and lower cost than existing methods, and likewise meets the main requirements of weighting methods such as simplicity, transparency, and reproducibility. The new weighting method is expected to be a good alternative for determining weighting factors.

    Highlights:
    • A new weighting method using Internet search volume is proposed in this research.
    • The new weighting method reflects public opinion using Internet search volume.
    • The correlation coefficient between new and existing weighting factors is over 0.87.
    • The new weighting method can present reasonable weighting factors.
    • The proposed method can be a good alternative for determining the weighting factors.
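
    Since the paper's exact weighting formula is not given in the record, the sketch below simply normalizes hypothetical Internet search volumes into weighting factors and checks agreement with a hypothetical existing weighting set via Pearson correlation, mirroring the validation described above.

```python
# Normalize hypothetical search volumes into weights; compare to an existing
# (also hypothetical) weighting set by Pearson correlation.
import numpy as np

search_volume = np.array([120e3, 45e3, 80e3, 15e3, 60e3, 30e3])  # six impacts
weights = search_volume / search_volume.sum()

existing = np.array([0.30, 0.13, 0.24, 0.05, 0.18, 0.10])
r = np.corrcoef(weights, existing)[0, 1]     # Pearson correlation, cf. 0.87-0.99
print(weights.round(3), f"r = {r:.4f}")
```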

  11. Linnorm: improved statistical analysis for single cell RNA-seq expression data.

    PubMed

    Yip, Shun H; Wang, Panwen; Kocher, Jean-Pierre A; Sham, Pak Chung; Wang, Junwen

    2017-12-15

    Linnorm is a novel normalization and transformation method for the analysis of single cell RNA sequencing (scRNA-seq) data. Linnorm is developed to remove technical noises and simultaneously preserve biological variations in scRNA-seq data, such that existing statistical methods can be improved. Using real scRNA-seq data, we compared Linnorm with existing normalization methods, including NODES, SAMstrt, SCnorm, scran, DESeq and TMM. Linnorm shows advantages in speed, technical noise removal and preservation of cell heterogeneity, which can improve existing methods in the discovery of novel subtypes, pseudo-temporal ordering of cells, clustering analysis, etc. Linnorm also performs better than existing DEG analysis methods, including BASiCS, NODES, SAMstrt, Seurat and DESeq2, in false positive rate control and accuracy. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. Existing methods for improving the accuracy of digital-to-analog converters

    NASA Astrophysics Data System (ADS)

    Eielsen, Arnfinn A.; Fleming, Andrew J.

    2017-09-01

    The performance of digital-to-analog converters is principally limited by errors in the output voltage levels. Such errors are known as element mismatch and are quantified by the integral non-linearity. Element mismatch limits the achievable accuracy and resolution in high-precision applications as it causes gain and offset errors, as well as harmonic distortion. In this article, five existing methods for mitigating the effects of element mismatch are compared: physical level calibration, dynamic element matching, noise-shaping with digital calibration, large periodic high-frequency dithering, and large stochastic high-pass dithering. These methods are suitable for improving accuracy when using digital-to-analog converters that use multiple discrete output levels to reconstruct time-varying signals. The methods improve linearity and therefore reduce harmonic distortion and can be retrofitted to existing systems with minor hardware variations. The performance of each method is compared theoretically and confirmed by simulations and experiments. Experimental results demonstrate that three of the five methods provide significant improvements in the resolution and accuracy when applied to a general-purpose digital-to-analog converter. As such, these methods can directly improve performance in a wide range of applications including nanopositioning, metrology, and optics.
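
    A minimal sketch of the integral non-linearity (INL) figure that quantifies element mismatch, assuming `measured` holds a DAC's actual output voltage for each input code; the five mitigation methods themselves are not reproduced.

```python
# INL in LSB: deviation of each measured level from the best-fit line, which
# removes gain and offset errors first. The `measured` levels are simulated.
import numpy as np

def inl_lsb(measured):
    codes = np.arange(len(measured))
    slope, intercept = np.polyfit(codes, measured, 1)       # ideal straight line
    return (measured - (slope * codes + intercept)) / slope  # deviation in LSB

rng = np.random.default_rng(0)
measured = np.linspace(0, 5, 256) + rng.normal(0, 2e-3, 256)  # 8-bit, 0-5 V
print(f"peak INL: {np.abs(inl_lsb(measured)).max():.2f} LSB")
```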

  13. Identifying Stakeholders and Their Preferences about NFR by Comparing Use Case Diagrams of Several Existing Systems

    NASA Astrophysics Data System (ADS)

    Kaiya, Haruhiko; Osada, Akira; Kaijiri, Kenji

    We present a method to identify stakeholders and their preferences about non-functional requirements (NFR) by using use case diagrams of existing systems. We focus on changes in NFR because such changes help stakeholders identify their preferences. Comparing different use case diagrams of the same domain helps us find the changes that can occur. We utilize the Goal-Question-Metric (GQM) method to identify variables that characterize NFR, so that changes in NFR can be represented systematically using those variables. Use cases that represent system interactions help us bridge the gap between goals and metrics (variables), making it easy to construct measurable NFR. To validate and evaluate our method, we applied it to the application domain of Mail User Agent (MUA) systems.

  14. Empirical likelihood-based confidence intervals for mean medical cost with censored data.

    PubMed

    Jeyarajah, Jenny; Qin, Gengsheng

    2017-11-10

    In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with those of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performance than existing methods. Finally, we illustrate our proposed methods with a relevant example.

  15. Four applications of permutation methods to testing a single-mediator model.

    PubMed

    Taylor, Aaron B; MacKinnon, David P

    2012-09-01

    Four applications of permutation tests to the single-mediator model are described and evaluated in this study. Permutation tests work by rearranging data in many possible ways in order to estimate the sampling distribution for the test statistic. The four applications to mediation evaluated here are the permutation test of ab, the permutation joint significance test, and the noniterative and iterative permutation confidence intervals for ab. A Monte Carlo simulation study was used to compare these four tests with the four best available tests for mediation found in previous research: the joint significance test, the distribution of the product test, and the percentile and bias-corrected bootstrap tests. We compared the different methods on Type I error, power, and confidence interval coverage. The noniterative permutation confidence interval for ab was the best performer among the new methods. It successfully controlled Type I error, had power nearly as good as the most powerful existing methods, and had better coverage than any existing method. The iterative permutation confidence interval for ab had lower power than do some existing methods, but it performed better than any other method in terms of coverage. The permutation confidence interval methods are recommended when estimating a confidence interval is a primary concern. SPSS and SAS macros that estimate these confidence intervals are provided.
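
    A hedged sketch of one simple permutation-test variant for the mediated effect ab in the single-mediator model; permuting the mediator is an illustration only and is not necessarily the exact rearrangement scheme evaluated in the study. Data are simulated.

```python
# Null distribution for ab by permuting the mediator; a is the X->M slope,
# b is the M->Y slope adjusting for X.
import numpy as np

def ab_estimate(x, m, y):
    a = np.polyfit(x, m, 1)[0]                       # X -> M path
    b = np.linalg.lstsq(np.c_[m, x, np.ones_like(x)], y, rcond=None)[0][0]
    return a * b                                     # mediated effect estimate

rng = np.random.default_rng(0)
x = rng.normal(size=200)
m = 0.5 * x + rng.normal(size=200)
y = 0.4 * m + rng.normal(size=200)                   # a true mediated effect

obs = ab_estimate(x, m, y)
null = [ab_estimate(x, rng.permutation(m), y) for _ in range(2000)]
print(f"ab = {obs:.3f}, permutation p = {np.mean(np.abs(null) >= abs(obs)):.4f}")
```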

  16. Exploiting MeSH indexing in MEDLINE to generate a data set for word sense disambiguation.

    PubMed

    Jimeno-Yepes, Antonio J; McInnes, Bridget T; Aronson, Alan R

    2011-06-02

    Evaluation of Word Sense Disambiguation (WSD) methods in the biomedical domain is difficult because the available resources are either too small or too focused on specific types of entities (e.g. diseases or genes). We present a method that can be used to automatically develop a WSD test collection using the Unified Medical Language System (UMLS) Metathesaurus and the manual MeSH indexing of MEDLINE. We demonstrate the use of this method by developing such a data set, called MSH WSD. In our method, the Metathesaurus is first screened to identify ambiguous terms whose possible senses consist of two or more MeSH headings. We then use each ambiguous term and its corresponding MeSH heading to extract MEDLINE citations where the term and only one of the MeSH headings co-occur. The term found in the MEDLINE citation is automatically assigned the UMLS CUI linked to the MeSH heading. Each instance has been assigned a UMLS Concept Unique Identifier (CUI). We compare the characteristics of the MSH WSD data set to the previously existing NLM WSD data set. The resulting MSH WSD data set consists of 106 ambiguous abbreviations, 88 ambiguous terms and 9 that are a combination of both, for a total of 203 ambiguous entities. For each ambiguous term/abbreviation, the data set contains a maximum of 100 instances per sense obtained from MEDLINE. We evaluated the reliability of the MSH WSD data set using existing knowledge-based methods and compared their performance to that of the results previously obtained by these algorithms on the pre-existing data set, NLM WSD. We show that the knowledge-based methods achieve different results but keep their relative performance except for the Journal Descriptor Indexing (JDI) method, whose performance is below the other methods. The MSH WSD data set allows the evaluation of WSD algorithms in the biomedical domain. Compared to previously existing data sets, MSH WSD contains a larger number of biomedical terms/abbreviations and covers the largest set of UMLS Semantic Types. Furthermore, the MSH WSD data set has been generated automatically reusing already existing annotations and, therefore, can be regenerated from subsequent UMLS versions.

  17. Wideband characterization of the complex wave number and characteristic impedance of sound absorbers.

    PubMed

    Salissou, Yacoubou; Panneton, Raymond

    2010-11-01

    Several methods for measuring the complex wave number and the characteristic impedance of sound absorbers have been proposed in the literature. These methods can be classified into single-frequency and wideband methods. In this paper, the main existing methods are revisited and discussed. An alternative method that is not well known or discussed in the literature, while exhibiting great potential, is also discussed. This method is essentially an improvement of the wideband method described by Iwase et al., rewritten so that the setup is more compliant with the ISO 10534-2 standard. Glass wool, melamine foam and acoustical/thermal insulator wool are used to compare the main existing wideband non-iterative methods with this alternative method. It is found that, in the middle and high frequency ranges, the alternative method yields results that are comparable in accuracy to the classical two-cavity method and the four-microphone transfer-matrix method. However, in the low frequency range, the alternative method appears to be more accurate than the other methods, especially when measuring the complex wave number.

  18. Numerical simulation for the air entrainment of aerated flow with an improved multiphase SPH model

    NASA Astrophysics Data System (ADS)

    Wan, Hang; Li, Ran; Pu, Xunchi; Zhang, Hongwei; Feng, Jingjie

    2017-11-01

    Aerated flow is a complex hydraulic phenomenon that exists widely in environmental hydraulics. It is generally characterised by large deformation and violent fragmentation of the free surface. Compared to Eulerian methods (the volume of fluid (VOF) method or the rigid-lid hypothesis method), the existing single-phase Smoothed Particle Hydrodynamics (SPH) method performs well for solving particle motion, but a lack of research on interphase interaction and air concentration has limited the application of SPH models. In our study, an improved multiphase SPH model is presented to simulate aerated flows. A drag force is included in the momentum equation to ensure the accuracy of the air particle slip velocity, and a calculation method for air concentration is developed to analyse the air entrainment characteristics. Two case studies were used to simulate the hydraulic and air entrainment characteristics, and the simulation results agree well with the experimental results.

  19. Evaluation of Traditional and Technology-Based Grocery Store Nutrition Education

    ERIC Educational Resources Information Center

    Schultz, Jennifer; Litchfield, Ruth

    2016-01-01

    Background: A literature gap exists for grocery interventions with realistic resource expectations; few technology-based publications exist, and none document traditional comparison. Purpose: Compare grocery store traditional aisle demonstrations (AD) and technology-based (TB) nutrition education treatments. Methods: A quasi-experimental 4-month…

  20. Comparison of OpenFOAM and EllipSys3D actuator line methods with (NEW) MEXICO results

    NASA Astrophysics Data System (ADS)

    Nathan, J.; Meyer Forsting, A. R.; Troldborg, N.; Masson, C.

    2017-05-01

    The Actuator Line Method has existed for more than a decade and has become a well-established choice for simulating wind rotors in computational fluid dynamics. Numerous implementations exist and are used in the wind energy research community. These codes have been verified against experimental data such as the MEXICO experiment, but verification against other codes has often been made only on a very broad scale. This study therefore attempts first a validation by comparing two different implementations, namely an adapted version of SOWFA/OpenFOAM and EllipSys3D, and then a verification by comparing against experimental results from the MEXICO and NEW MEXICO experiments.

  1. Polarization-based and specular-reflection-based noncontact latent fingerprint imaging and lifting

    NASA Astrophysics Data System (ADS)

    Lin, Shih-Schön; Yemelyanov, Konstantin M.; Pugh, Edward N., Jr.; Engheta, Nader

    2006-09-01

    In forensic science the finger marks left unintentionally by people at a crime scene are referred to as latent fingerprints. Most existing techniques to detect and lift latent fingerprints require application of a certain material directly onto the exhibit. The chemical and physical processing applied to the fingerprint potentially degrades or prevents further forensic testing on the same evidence sample. Many existing methods also have deleterious side effects. We introduce a method to detect and extract latent fingerprint images without applying any powder or chemicals on the object. Our method is based on the optical phenomena of polarization and specular reflection together with the physiology of fingerprint formation. The recovered image quality is comparable to existing methods. In some cases, such as the sticky side of tape, our method shows unique advantages.
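
    For background, a sketch of a polarization quantity such a method can exploit: the degree of linear polarization (DoLP) computed from four polarizer-angle images. The paper's specific imaging and lifting procedure is not reproduced.

```python
# DoLP from the linear Stokes parameters, estimated from intensity images
# taken through a polarizer at 0, 45, 90, and 135 degrees (equal-shape arrays).
import numpy as np

def dolp(i0, i45, i90, i135):
    I = i0 + i90                          # total intensity (Stokes I)
    Q = i0 - i90                          # Stokes Q
    U = i45 - i135                        # Stokes U
    return np.sqrt(Q**2 + U**2) / np.maximum(I, 1e-12)
```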

  2. Trends in heteroepitaxy of III-Vs on silicon for photonic and photovoltaic applications

    NASA Astrophysics Data System (ADS)

    Lourdudoss, Sebastian; Junesand, Carl; Kataria, Himanshu; Metaferia, Wondwosen; Omanakuttan, Giriprasanth; Sun, Yan-Ting; Wang, Zhechao; Olsson, Fredrik

    2017-02-01

    We present and compare the existing methods of heteroepitaxy of III-Vs on silicon and their trends. We focus on the epitaxial lateral overgrowth (ELOG) method as a means of achieving good quality III-Vs on silicon. Initially conducted primarily by near-equilibrium epitaxial methods such as liquid phase epitaxy and hydride vapour phase epitaxy, nowadays ELOG is being carried out even by non-equilibrium methods such as metal organic vapour phase epitaxy. In the ELOG method, the intermediate defective seed and the mask layers still exist between the laterally grown purer III-V layer and silicon. In a modified ELOG method called corrugated epitaxial lateral overgrowth (CELOG) method, it is possible to obtain direct interface between the III-V layer and silicon. In this presentation we exemplify some recent results obtained by these techniques. We assess the potentials of these methods along with the other existing methods for realizing truly monolithic photonic integration on silicon and III-V/Si heterojunction solar cells.

  3. Novel scheme to compute chemical potentials of chain molecules on a lattice

    NASA Astrophysics Data System (ADS)

    Mooij, G. C. A. M.; Frenkel, D.

    We present a novel method that allows efficient computation of the total number of allowed conformations of a chain molecule in a dense phase. Using this method, it is possible to estimate the chemical potential of such a chain molecule. We have tested the present method in simulations of a two-dimensional monolayer of chain molecules on a lattice (Whittington-Chapman model) and compared it with existing schemes to compute the chemical potential. We find that the present approach is two to three orders of magnitude faster than the most efficient of the existing methods.

  4. A COMPARISON OF SIX BENTHIC MACROINVERTEBRATE SAMPLING METHODS IN FOUR LARGE RIVERS

    EPA Science Inventory

    In 1999, a study was conducted to compare six macroinvertebrate sampling methods in four large (boatable) rivers that drain into the Ohio River. Two methods each were adapted from existing methods used by the USEPA, USGS and Ohio EPA. Drift nets were unable to collect a suffici...

  5. Method of preliminary localization of the iris in biometric access control systems

    NASA Astrophysics Data System (ADS)

    Minacova, N.; Petrov, I.

    2015-10-01

    This paper presents a method for preliminary localization of the iris, based on stable brightness features of the iris in images of the eye. In tests on eye images from publicly available databases, the method showed good accuracy and speed compared to existing preliminary localization methods.

  6. "It's the Method, Stupid." Interrelations between Methodological and Theoretical Advances: The Example of Comparing Higher Education Systems Internationally

    ERIC Educational Resources Information Center

    Hoelscher, Michael

    2017-01-01

    This article argues that strong interrelations between methodological and theoretical advances exist. Progress in, especially comparative, methods may have important impacts on theory evaluation. By using the example of the "Varieties of Capitalism" approach and an international comparison of higher education systems, it can be shown…

  7. Comparison of Available Soil Nitrogen Assays in Control and Burned Forested Sites

    Treesearch

    Jennifer D. Knoepp; Wayne T. Swank

    1995-01-01

    The existence of several different methods for measuring net N mineralization and nitrification rates and indexing N availability has raised questions about the comparability of these methods. We compared in situ covered cores, in situ buried bags, aerobic laboratory incubations, and tension lysimetry on control and treated plots of a prescribed burn experiment in the...

  8. Exploiting MeSH indexing in MEDLINE to generate a data set for word sense disambiguation

    PubMed Central

    2011-01-01

    Background: Evaluation of Word Sense Disambiguation (WSD) methods in the biomedical domain is difficult because the available resources are either too small or too focused on specific types of entities (e.g. diseases or genes). We present a method that can be used to automatically develop a WSD test collection using the Unified Medical Language System (UMLS) Metathesaurus and the manual MeSH indexing of MEDLINE. We demonstrate the use of this method by developing such a data set, called MSH WSD.

    Methods: In our method, the Metathesaurus is first screened to identify ambiguous terms whose possible senses consist of two or more MeSH headings. We then use each ambiguous term and its corresponding MeSH heading to extract MEDLINE citations where the term and only one of the MeSH headings co-occur. The term found in the MEDLINE citation is automatically assigned the UMLS CUI linked to the MeSH heading. Each instance has been assigned a UMLS Concept Unique Identifier (CUI). We compare the characteristics of the MSH WSD data set to the previously existing NLM WSD data set.

    Results: The resulting MSH WSD data set consists of 106 ambiguous abbreviations, 88 ambiguous terms and 9 which are a combination of both, for a total of 203 ambiguous entities. For each ambiguous term/abbreviation, the data set contains a maximum of 100 instances per sense obtained from MEDLINE. We evaluated the reliability of the MSH WSD data set using existing knowledge-based methods and compared their performance to that of the results previously obtained by these algorithms on the pre-existing data set, NLM WSD. We show that the knowledge-based methods achieve different results but keep their relative performance except for the Journal Descriptor Indexing (JDI) method, whose performance is below the other methods.

    Conclusions: The MSH WSD data set allows the evaluation of WSD algorithms in the biomedical domain. Compared to previously existing data sets, MSH WSD contains a larger number of biomedical terms/abbreviations and covers the largest set of UMLS Semantic Types. Furthermore, the MSH WSD data set has been generated automatically reusing already existing annotations and, therefore, can be regenerated from subsequent UMLS versions. PMID:21635749

  9. An information-theoretical perspective on weighted ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Weijs, Steven V.; van de Giesen, Nick

    2013-08-01

    This paper presents an information-theoretical method for weighting ensemble forecasts with new information. Weighted ensemble forecasts can be used to adjust the distribution that an existing ensemble of time series represents, without modifying the values in the ensemble itself. The weighting can, for example, add new seasonal forecast information into an existing ensemble of historically measured time series that represents climatic uncertainty. A recent article in this journal compared several methods to determine the weights for the ensemble members and introduced the pdf-ratio method. In this article, a new method, the minimum relative entropy update (MRE-update), is presented. Based on the principle of minimum discrimination information, an extension of the principle of maximum entropy (POME), the method ensures that no more information is added to the ensemble than is present in the forecast. This is achieved by minimizing relative entropy, with the forecast information imposed as constraints. From this same perspective, an information-theoretical view on the various weighting methods is presented. The MRE-update is compared with the existing methods and the parallels with the pdf-ratio method are analysed. The paper provides a new, information-theoretical justification for one version of the pdf-ratio method that turns out to be equivalent to the MRE-update. All other methods result in sets of ensemble weights that, seen from the information-theoretical perspective, add either too little or too much (i.e. fictitious) information to the ensemble.
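
    A minimal sketch of minimum-relative-entropy reweighting under a single mean constraint, where the weights take the exponential-tilting form w ∝ exp(λx); this illustrates the principle behind the MRE-update, not the paper's full formulation. Ensemble values and the forecast mean are hypothetical.

```python
# Choose lam so the weighted ensemble mean matches the forecast mean; the
# exponential-tilting weights minimize relative entropy to uniform weights.
import numpy as np
from scipy.optimize import brentq
from scipy.special import softmax

def mre_weights(x, target_mean):
    gap = lambda lam: softmax(lam * x) @ x - target_mean
    lam = brentq(gap, -50.0, 50.0)     # target must lie within [min(x), max(x)]
    return softmax(lam * x)

ensemble = np.random.default_rng(1).normal(10.0, 2.0, 50)  # hypothetical members
w = mre_weights(ensemble, target_mean=11.0)
print(w @ ensemble)                    # ~11.0, member values left unchanged
```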

  10. Vessel extraction in retinal images using automatic thresholding and Gabor Wavelet.

    PubMed

    Ali, Aziah; Hussain, Aini; Wan Zaki, Wan Mimi Diyana

    2017-07-01

    Retinal image analysis has been widely used for early detection and diagnosis of multiple systemic diseases. Accurate vessel extraction in retinal images is a crucial step towards a fully automated diagnosis system. This work presents an efficient unsupervised method for extracting blood vessels from retinal images by combining the existing Gabor Wavelet (GW) method with automatic thresholding. The green channel is extracted from the color retinal image and used to produce a Gabor feature image using the GW. Both the green channel image and the Gabor feature image undergo a vessel-enhancement step to highlight blood vessels. Next, the two vessel-enhanced images are transformed to binary images using automatic thresholding before being combined to produce the final vessel output. Combining the images significantly improves blood vessel extraction performance compared to using either image alone. The effectiveness of the proposed method was demonstrated via comparative analysis with existing methods on the publicly available DRIVE database.
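
    A hedged sketch of the combination step, with Otsu's method standing in for the automatic thresholding and generic Gabor parameters; the paper's enhancement details are not reproduced. `rgb` is any color retinal image as a numpy array.

```python
# Green channel + Gabor feature image, each thresholded automatically, then
# combined; parameters are generic stand-ins.
import numpy as np
from skimage.filters import gabor, threshold_otsu

def extract_vessels(rgb):
    green = rgb[..., 1].astype(float)
    inv = green.max() - green             # invert so dark vessels appear bright
    gabor_feat = np.maximum.reduce(
        [gabor(inv, frequency=0.2, theta=t)[0]   # max real response over angles
         for t in np.linspace(0, np.pi, 8, endpoint=False)])
    # Threshold each enhanced image automatically, then combine the binary maps.
    return (inv > threshold_otsu(inv)) | (gabor_feat > threshold_otsu(gabor_feat))
```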

  11. Robust signal recovery using the prolate spherical wave functions and maximum correntropy criterion

    NASA Astrophysics Data System (ADS)

    Zou, Cuiming; Kou, Kit Ian

    2018-05-01

    Signal recovery is one of the most important problems in signal processing. This paper proposes a novel signal recovery method based on prolate spherical wave functions (PSWFs). PSWFs are a family of special functions that have been shown to perform well in signal recovery. However, existing PSWF-based recovery methods use the mean square error (MSE) criterion, which depends on a Gaussianity assumption on the noise distribution. For non-Gaussian noise, such as impulsive noise or outliers, the MSE criterion is sensitive and may lead to large reconstruction errors. Unlike existing PSWF-based recovery methods, our proposed method employs the maximum correntropy criterion (MCC), which is independent of the noise distribution and can reduce the impact of large, non-Gaussian noise. Experimental results on synthetic signals with various types of noise show that the proposed MCC-based signal recovery method is more robust against noise than existing methods.
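
    A sketch of MCC fitting via the standard half-quadratic iteration, in which residuals receive Gaussian-kernel weights so impulsive errors are down-weighted; a generic linear model stands in for the PSWF expansion, whose basis construction is not reproduced.

```python
# Iteratively reweighted least squares under the correntropy criterion:
# large residuals get exponentially small weights.
import numpy as np

def mcc_fit(A, y, sigma=1.0, iters=50):
    coef = np.linalg.lstsq(A, y, rcond=None)[0]   # MSE solution as a start
    for _ in range(iters):
        w = np.exp(-(y - A @ coef) ** 2 / (2 * sigma**2))  # correntropy weights
        Aw = A * w[:, None]
        coef = np.linalg.solve(A.T @ Aw, Aw.T @ y)  # reweighted least squares
    return coef
```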

  12. Multiple network alignment via multiMAGNA+.

    PubMed

    Vijayan, Vipin; Milenkovic, Tijana

    2017-08-21

    Network alignment (NA) aims to find a node mapping that identifies topologically or functionally similar network regions between molecular networks of different species. Analogous to genomic sequence alignment, NA can be used to transfer biological knowledge from well- to poorly-studied species between aligned network regions. Pairwise NA (PNA) finds similar regions between two networks while multiple NA (MNA) can align more than two networks. We focus on MNA. Existing MNA methods aim to maximize total similarity over all aligned nodes (node conservation). Then, they evaluate alignment quality by measuring the number of conserved edges, but only after the alignment is constructed. Directly optimizing edge conservation during alignment construction, in addition to node conservation, may result in superior alignments. Thus, we present a novel MNA method called multiMAGNA++ that achieves this. Indeed, multiMAGNA++ outperforms or is on par with existing MNA methods, while often completing faster than existing methods; that is, multiMAGNA++ scales well to larger network data and can be parallelized effectively. During method evaluation, we also introduce new MNA quality measures to allow for fairer MNA method comparison than the existing alignment quality measures. MultiMAGNA++ code is available on the method's web page at http://nd.edu/~cone/multiMAGNA++/.

  13. Linear regression based on Minimum Covariance Determinant (MCD) and TELBS methods on the productivity of phytoplankton

    NASA Astrophysics Data System (ADS)

    Gusriani, N.; Firdaniza

    2018-03-01

    The existence of outliers in multiple linear regression analysis causes the Gaussian assumption to be violated. If the least squares method is nonetheless applied to such data, it produces a model that does not represent most of the data. A regression method that is robust to outliers is therefore needed. This paper compares the Minimum Covariance Determinant (MCD) method and the TELBS method on secondary data on the productivity of phytoplankton, which contain outliers. Based on the robust coefficient of determination, the MCD method produces a better model than the TELBS method.
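
    A minimal sketch of MCD-based robust regression under the assumption that slopes are derived from the blocks of a robust joint covariance estimate; the TELBS method is not reproduced.

```python
# Fit a robust covariance to (X, y) jointly with MCD, then derive slopes from
# its blocks, so outliers carry little weight.
import numpy as np
from sklearn.covariance import MinCovDet

def mcd_regression(X, y):
    Z = np.column_stack([X, y])
    mcd = MinCovDet(random_state=0).fit(Z)
    S, mu = mcd.covariance_, mcd.location_
    beta = np.linalg.solve(S[:-1, :-1], S[:-1, -1])  # robust slope estimates
    intercept = mu[-1] - mu[:-1] @ beta
    return intercept, beta
```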

  14. Prediction of protein structural classes by recurrence quantification analysis based on chaos game representation.

    PubMed

    Yang, Jian-Yi; Peng, Zhen-Ling; Yu, Zu-Guo; Zhang, Rui-Jie; Anh, Vo; Wang, Desheng

    2009-04-21

    In this paper, we predict protein structural classes (alpha, beta, alpha+beta, or alpha/beta) for low-homology data sets. Two widely used data sets were employed: 1189 (containing 1092 proteins) and 25PDB (containing 1673 proteins), with sequence homology of 40% and 25%, respectively. We propose to decompose the chaos game representation of proteins into two kinds of time series. Then, a novel and powerful nonlinear analysis technique, recurrence quantification analysis (RQA), is applied to these time series. For a given protein sequence, a total of 16 characteristic parameters can be calculated with RQA; these are treated as the feature representation of the protein sequence. Based on this feature representation, the structural class of each protein is predicted with Fisher's linear discriminant algorithm. The jackknife test is used to compare our method with other existing methods. The overall accuracies with the step-by-step procedure are 65.8% and 64.2% for the 1189 and 25PDB data sets, respectively. With the widely used one-against-others procedure, we compared our method with five other existing methods; the overall accuracies of our method are 6.3% and 4.1% higher for the two data sets, respectively. Furthermore, only 16 parameters are used in our method, fewer than in other methods. This suggests that the current method may play a complementary role to the existing methods and is promising for the prediction of protein structural classes.
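
    A sketch of the simplest RQA parameter, the recurrence rate, for a generic time series; the remaining characteristic parameters and the Fisher discriminant step are not reproduced.

```python
# Recurrence rate: the fraction of point pairs closer than a threshold eps,
# i.e., the density of the recurrence plot.
import numpy as np

def recurrence_rate(x, eps=0.1):
    x = np.asarray(x, dtype=float)
    D = np.abs(x[:, None] - x[None, :])   # pairwise distance matrix
    return (D < eps).mean()               # fraction of recurrent point pairs

print(recurrence_rate(np.sin(np.linspace(0, 20, 200))))
```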

  15. A Coarse-Grained Elastic Network Atom Contact Model and Its Use in the Simulation of Protein Dynamics and the Prediction of the Effect of Mutations

    PubMed Central

    Frappier, Vincent; Najmanovich, Rafael J.

    2014-01-01

    Normal mode analysis (NMA) methods are widely used to study dynamic aspects of protein structures. Two critical components of NMA methods are the level of coarse-graining used to represent protein structures and the choice of potential energy functional form, and there is a trade-off between speed and accuracy among the different choices. At one extreme are accurate but slow molecular-dynamics-based methods with all-atom representations and detailed atomic potentials; at the other, fast elastic network model (ENM) methods with Cα-only representations and simplified potentials based on geometry alone, and thus oblivious to protein sequence. Here we present ENCoM, an Elastic Network Contact Model that employs a potential energy function including a pairwise atom-type non-bonded interaction term, which makes it possible to consider the effect of the specific nature of amino acids on dynamics within the context of NMA. ENCoM is as fast as existing ENM methods and outperforms them in the generation of conformational ensembles. We also introduce a new application for NMA methods, using ENCoM to predict the effect of mutations on protein stability. While existing methods are based on machine learning or enthalpic considerations, the use of ENCoM, based on vibrational normal modes, rests on entropic considerations. This represents a novel area of application for NMA methods and a novel approach to predicting the effect of mutations. We compare ENCoM to a large number of methods in terms of accuracy and self-consistency. We show that the accuracy of ENCoM is comparable to that of the best existing methods, and that existing methods are biased towards predicting destabilizing mutations whereas ENCoM is less biased at predicting stabilizing mutations. PMID:24762569

  16. Arctic lead detection using a waveform mixture algorithm from CryoSat-2 data

    NASA Astrophysics Data System (ADS)

    Lee, Sanggyun; Kim, Hyun-cheol; Im, Jungho

    2018-05-01

    We propose a waveform mixture algorithm to detect leads from CryoSat-2 data; it is novel and different from the existing threshold-based lead detection methods. The waveform mixture algorithm adopts the concept of spectral mixture analysis, which is widely used in the field of hyperspectral image analysis. This lead detection method was evaluated with high-resolution (250 m) MODIS images and showed comparable and promising performance in detecting leads when compared to the previous methods. The robustness of the proposed approach also lies in the fact that it does not require the rescaling of parameters (i.e., stack standard deviation, stack skewness, stack kurtosis, pulse peakiness, and backscatter σ0), as it directly uses L1B waveform data, unlike the existing threshold-based methods. Monthly lead fraction maps were produced by the waveform mixture algorithm, which show the interannual variability of recent sea ice cover during 2011-2016, excluding the summer season (i.e., June to September). We also compared these lead fraction maps to maps generated from previously published data sets, finding similar spatiotemporal patterns.
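
    A sketch of the waveform-mixture idea: each observed waveform is decomposed into non-negative fractions of reference ("endmember") waveforms, and a high lead fraction flags a lead. The reference waveforms below are hypothetical stand-ins for the paper's actual endmembers.

```python
# Unmix an observed waveform against peaky (lead-like) and broad (ice-like)
# reference returns using non-negative least squares.
import numpy as np
from scipy.optimize import nnls

bins = np.arange(128, dtype=float)
lead_ref = np.exp(-0.5 * ((bins - 40) / 1.5) ** 2)   # specular, peaky return
ice_ref = np.exp(-0.5 * ((bins - 40) / 12.0) ** 2)   # diffuse, broad return
E = np.column_stack([lead_ref, ice_ref])

waveform = 0.7 * lead_ref + 0.3 * ice_ref            # synthetic observation
fractions, _ = nnls(E, waveform)
fractions /= fractions.sum()
print(f"lead fraction = {fractions[0]:.2f}")
```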

  17. Multimodal Image Registration through Simultaneous Segmentation.

    PubMed

    Aganj, Iman; Fischl, Bruce

    2017-11-01

    Multimodal image registration facilitates the combination of complementary information from images acquired with different modalities. Most existing methods require computation of the joint histogram of the images, while some perform joint segmentation and registration in alternate iterations. In this work, we introduce a new non-information-theoretical method for pairwise multimodal image registration, in which the error of segmentation - using both images - is considered as the registration cost function. We empirically evaluate our method via rigid registration of multi-contrast brain magnetic resonance images, and demonstrate an often higher registration accuracy in the results produced by the proposed technique, compared to those by several existing methods.

  18. Patient, staff and physician satisfaction: a new model, instrument and their implications.

    PubMed

    York, Anne S; McCarthy, Kim A

    2011-01-01

    Customer satisfaction's importance is well-documented in the marketing literature and is rapidly gaining wide acceptance in the healthcare industry. The purpose of this paper is to introduce a new customer-satisfaction measuring method - Reichheld's ultimate question - and compare it with traditional techniques using data gathered from four healthcare clinics. A new survey method, called the ultimate question, was used to collect patient satisfaction data. It was subsequently compared with the data collected via an existing method. Findings suggest that the ultimate question provides similar ratings to existing models at lower costs. A relatively small sample size may affect the generalizability of the results; it is also possible that potential spill-over effects exist owing to two patient satisfaction surveys administered at the same time. This new ultimate question method greatly improves the process and ease with which hospital or clinic administrators are able to collect patient (as well as staff and physician) satisfaction data in healthcare settings. Also, the feedback gained from this method is actionable and can be used to make strategic improvements that will impact business and ultimately increase profitability. The paper's real value is pinpointing specific quality improvement areas based not just on patient ratings but also physician and staff satisfaction, which often underlie patients' clinical experiences.
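
    For context, Reichheld's "ultimate question" is scored as the Net Promoter Score: on a 0-10 likelihood-to-recommend scale, the percentage of promoters (9-10) minus the percentage of detractors (0-6). A minimal sketch with hypothetical ratings:

```python
# Net Promoter Score from a set of 0-10 likelihood-to-recommend ratings.
import numpy as np

ratings = np.array([10, 9, 8, 7, 9, 6, 10, 4, 9, 8])   # hypothetical responses
nps = 100 * (np.mean(ratings >= 9) - np.mean(ratings <= 6))
print(f"NPS = {nps:.0f}")
```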

  19. Privacy Protection by Matrix Transformation

    NASA Astrophysics Data System (ADS)

    Yang, Weijia

    Privacy preserving is indispensable in data mining. In this paper, we present a novel clustering method for distributed multi-party data sets using orthogonal transformation and data randomization techniques. Our method can not only protect privacy in face of collusion, but also achieve a higher level of accuracy compared to the existing methods.
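
    A sketch of the core property such methods rely on: a random orthogonal transform preserves pairwise Euclidean distances, so distance-based clustering is unaffected while the original attribute values are disguised. The paper's additional randomization component is not reproduced.

```python
# Orthogonal transformation of a data matrix preserves pairwise distances.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                  # a party's private records
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))   # random orthogonal matrix
X_disguised = X @ Q.T                          # data actually shared

d_orig = np.linalg.norm(X[0] - X[1])
d_disg = np.linalg.norm(X_disguised[0] - X_disguised[1])
print(np.isclose(d_orig, d_disg))              # True: distances preserved
```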

  20. Bridge Condition Assessment Using D Numbers

    PubMed Central

    Hu, Yong

    2014-01-01

    Bridge condition assessment is a complex problem influenced by many factors, and the uncertain environment increases its complexity. Due to the uncertainty in the assessment process, one of the key problems is the representation of assessment results. Although many methods exist that can deal with uncertain information, they all have deficiencies. In this paper, a new representation of uncertain information, called D numbers, is presented; it extends the Dempster-Shafer theory. Using D numbers, a new method is developed for bridge condition assessment. Compared to existing methods, the proposed method is simpler and more effective. An illustrative case is given to show the effectiveness of the new method. PMID:24696639

  1. Pavement crack detection combining non-negative feature with fast LoG in complex scene

    NASA Astrophysics Data System (ADS)

    Wang, Wanli; Zhang, Xiuhua; Hong, Hanyu

    2015-12-01

    Pavement crack detection is affected by much interference in realistic situations, such as shadows, road signs, oil stains, and salt-and-pepper noise. Due to these unfavorable factors, existing crack detection methods have difficulty distinguishing cracks from the background correctly. How to extract crack information effectively is the key problem for a road crack detection system. To solve this problem, a novel method for pavement crack detection based on combining a non-negative feature with a fast LoG is proposed. The two key novelties and benefits of this new approach are that it 1) uses image pixel gray value compensation to acquire a uniform image, and 2) combines the non-negative feature with the fast LoG to extract crack information. The image preprocessing results demonstrate that the method is indeed able to homogenize the crack image more accurately than existing methods, and a large number of experimental results demonstrate that the proposed approach can detect crack regions more correctly than traditional methods.
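
    A minimal sketch of a Laplacian-of-Gaussian (LoG) response for dark, thin cracks, assuming a grayscale pavement image; the gray-value compensation and non-negative feature steps are not reproduced, and the top-percentile mask is a crude stand-in for the paper's decision logic.

```python
# Dark, thin structures yield strong positive LoG responses.
import numpy as np
from scipy.ndimage import gaussian_laplace

def crack_response(gray, sigma=2.0):
    log = gaussian_laplace(gray.astype(float), sigma=sigma)
    return log > np.percentile(log, 99)   # keep the strongest 1% of responses

# usage: mask = crack_response(pavement_image)  # pavement_image: 2-D grayscale
```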

  2. A modified form of conjugate gradient method for unconstrained optimization problems

    NASA Astrophysics Data System (ADS)

    Ghani, Nur Hamizah Abdul; Rivaie, Mohd.; Mamat, Mustafa

    2016-06-01

    Conjugate gradient (CG) methods have been recognized as an interesting technique for solving optimization problems, due to their numerical efficiency, simplicity and low memory requirements. In this paper, we propose a new CG method based on the study of Rivaie et al. [7] (Comparative study of conjugate gradient coefficient for unconstrained optimization, Aus. J. Bas. Appl. Sci. 5 (2011) 947-951). We then show that our method satisfies the sufficient descent condition and converges globally with exact line search. Numerical results show that our proposed method is efficient on standard test problems compared to other existing CG methods.
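
    For reference, a sketch of a nonlinear conjugate gradient iteration with the classical Fletcher-Reeves coefficient, shown on a quadratic where the exact line search has a closed form; the paper's modified coefficient is not reproduced.

```python
# CG with Fletcher-Reeves beta on f(x) = 0.5 x^T A x - b^T x, where the exact
# line search step has a closed form.
import numpy as np

def cg_quadratic(A, b, x, iters=10):
    g = A @ x - b                        # gradient
    d = -g
    for _ in range(iters):
        if g @ g < 1e-12:                # converged
            break
        alpha = (g @ g) / (d @ A @ d)    # exact line search step length
        x = x + alpha * d
        g_new = A @ x - b
        beta = (g_new @ g_new) / (g @ g) # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
print(cg_quadratic(A, b, np.zeros(2)))   # converges to the solution of A x = b
```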

  3. Comparison of bulk sediment and sediment elutriate toxicity testing methods

    EPA Science Inventory

    Elutriate bioassays are among numerous methods that exist for assessing the potential toxicity of sediments in aquatic systems. In this study, interlaboratory results were compared from 96-hour Ceriodaphnia dubia and Pimephales promelas static-renewal acute toxicity tests conduct...

  4. MBMC: An Effective Markov Chain Approach for Binning Metagenomic Reads from Environmental Shotgun Sequencing Projects.

    PubMed

    Wang, Ying; Hu, Haiyan; Li, Xiaoman

    2016-08-01

    Metagenomics is a next-generation omics field currently impacting postgenomic life sciences and medicine. Binning metagenomic reads is essential for understanding microbial function, composition, and interactions in given environments. Despite the existence of dozens of computational methods for metagenomic read binning, it is still very challenging to bin reads, especially reads from unknown species, from species with similar abundance, and/or from low-abundance species in environmental samples. In this study, we developed a novel taxonomy-dependent and alignment-free approach called MBMC (Metagenomic Binning by Markov Chains). Different from all existing methods, MBMC bins reads by measuring the similarity of reads to trained Markov chains for different taxa, instead of directly comparing reads with known genomic sequences. By testing on more than 24 simulated and experimental datasets with species of similar abundance, species of low abundance, and/or unknown species, we report here that MBMC reliably grouped reads from different species into separate bins. Compared with four existing approaches, we demonstrated that the performance of MBMC was comparable with existing approaches when binning reads from sequenced species, and superior to existing approaches when binning reads from unknown species. MBMC is a pivotal tool for binning metagenomic reads in the current era of Big Data and postgenomic integrative biology. The MBMC software can be freely downloaded at http://hulab.ucf.edu/research/projects/metagenomics/MBMC.html.
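
    A toy sketch of Markov-chain read binning: order-1 transition probabilities are estimated per taxon with add-one smoothing, and each read is assigned to the taxon under which its log-likelihood is highest. MBMC's actual model order and training data are not reproduced.

```python
# Train one Markov chain per taxon; score reads by log-likelihood.
from collections import defaultdict
import math

def train(seqs, alpha=1.0):
    counts = defaultdict(lambda: defaultdict(float))
    for s in seqs:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1
    return {a: {b: math.log((counts[a][b] + alpha) /
                            (sum(counts[a].values()) + 4 * alpha))
                for b in "ACGT"} for a in "ACGT"}

def loglik(read, model):
    return sum(model[a][b] for a, b in zip(read, read[1:]))

models = {"taxonA": train(["ACGTACGTAC"]), "taxonB": train(["GGGCCCGGGC"])}
print(max(models, key=lambda t: loglik("GGGCCC", models[t])))   # -> taxonB
```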

  5. Estimating two indirect logging costs caused by accelerated erosion.

    Treesearch

    Glen O. Klock

    1976-01-01

    In forest areas where high soil erosion potential exists, a comparative yarding cost estimate, including the indirect costs determined by methods proposed here, shows that the total cost of using "advanced" logging methods may be less than that of "traditional" systems.

  6. Enhanced graphene oxide membranes and methods for making same

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shin, Yongsoon; Gotthold, David W.; Fifield, Leonard S.

    A method for making a graphene oxide membrane and a resulting free-standing graphene oxide membrane that provides desired qualities of water permeability and selectivity at larger sizes, thinner cross sections, and with increased ruggedness as compared to existing membranes and processes.

  7. An interlaboratory comparison of sediment elutriate preparation and toxicity test methods

    EPA Science Inventory

    Elutriate bioassays are among numerous methods that exist for assessing the potential toxicity of sediments in aquatic systems. In this study, interlaboratory results were compared from 96-hour Ceriodaphnia dubia and Pimephales promelas static-renewal acute toxicity tests conduct...

  8. Q-Method Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato; Ainscough, Thomas; Christian, John; Spanos, Pol D.

    2012-01-01

    A new algorithm is proposed that smoothly integrates non-linear estimation of the attitude quaternion using Davenport's q-method and estimation of non-attitude states through an extended Kalman filter. The new method is compared to a similar existing algorithm, showing its similarities and differences. The validity of the proposed approach is confirmed through numerical simulations.
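
    A sketch of Davenport's q-method itself, which recovers the attitude quaternion as the dominant eigenvector of the 4x4 K matrix built from weighted vector observations (body-frame b_i vs. reference-frame r_i; scalar-last convention assumed here); the Kalman-filter integration proposed above is not reproduced.

```python
# Davenport's q-method: build K from the attitude profile matrix B and take
# the eigenvector of the largest eigenvalue as the optimal quaternion.
import numpy as np

def q_method(b_vecs, r_vecs, weights):
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, b_vecs, r_vecs))
    z = sum(w * np.cross(b, r) for w, b, r in zip(weights, b_vecs, r_vecs))
    K = np.zeros((4, 4))
    K[:3, :3] = B + B.T - np.trace(B) * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = np.trace(B)
    vals, vecs = np.linalg.eigh(K)        # symmetric eigenproblem
    return vecs[:, -1]                    # eigenvector of the largest eigenvalue
```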

  9. A simple method for plasma total vitamin C analysis suitable for routine clinical laboratory use.

    PubMed

    Robitaille, Line; Hoffer, L John

    2016-04-21

    In-hospital hypovitaminosis C is highly prevalent but almost completely unrecognized. Medical awareness of this potentially important disorder is hindered by the inability of most hospital laboratories to determine plasma vitamin C concentrations. The availability of a simple, reliable method for analyzing plasma vitamin C could increase opportunities for routine plasma vitamin C analysis in clinical medicine. Plasma vitamin C can be analyzed by high performance liquid chromatography (HPLC) with electrochemical (EC) or ultraviolet (UV) light detection. We modified existing UV-HPLC methods for plasma total vitamin C analysis (the sum of ascorbic and dehydroascorbic acid) to develop a simple, constant-low-pH sample reduction procedure followed by isocratic reverse-phase HPLC separation using a purely aqueous low-pH non-buffered mobile phase. Although EC-HPLC is widely recommended over UV-HPLC for plasma total vitamin C analysis, the two methods have never been directly compared. We formally compared the simplified UV-HPLC method with EC-HPLC in 80 consecutive clinical samples. The simplified UV-HPLC method was less expensive, easier to set up, required fewer reagents and no pH adjustments, and demonstrated greater sample stability than many existing methods for plasma vitamin C analysis. When compared with the gold-standard EC-HPLC method in 80 consecutive clinical samples exhibiting a wide range of plasma vitamin C concentrations, it performed equivalently. The easy setup, simplicity and sensitivity of the plasma vitamin C analysis method described here could make it practical in a normally equipped hospital laboratory. Unlike any prior UV-HPLC method for plasma total vitamin C analysis, it was rigorously compared with the gold-standard EC-HPLC method and performed equivalently. Adoption of this method could increase the availability of plasma vitamin C analysis in clinical medicine.

  10. Comparing and improving reconstruction methods for proxies based on compositional data

    NASA Astrophysics Data System (ADS)

    Nolan, C.; Tipton, J.; Booth, R.; Jackson, S. T.; Hooten, M.

    2017-12-01

    Many types of studies in paleoclimatology and paleoecology involve compositional data. Often, these studies aim to use compositional data to reconstruct an environmental variable of interest; the reconstruction is usually done via the development of a transfer function. Transfer functions have been developed using many different methods. Existing methods tend to relate the compositional data and the reconstruction target in very simple ways. Additionally, the results from different methods are rarely compared. Here we seek to address these two issues. First, we introduce a new hierarchical Bayesian multivariate Gaussian process model; this model allows for the relationship between each species in the compositional dataset and the environmental variable to be modeled in a way that captures the underlying complexities. Then, we compare this new method to machine learning techniques and commonly used existing methods. The comparisons are based on reconstructing the water table depth history of Caribou Bog (an ombrotrophic Sphagnum peat bog in Old Town, Maine, USA) from a new 7500-year-long record of testate amoebae assemblages. The resulting reconstructions from different methods diverge in both their resulting means and uncertainties. In particular, uncertainty tends to be drastically underestimated by some common methods. These results will help to improve inference of water table depth from testate amoebae. Furthermore, this approach can be applied to test and improve inferences of past environmental conditions from a broad array of paleo-proxies based on compositional data.

  11. A Method for Evaluating Outcomes of Restoration When No Reference Sites Exist

    Treesearch

    J. Stephen Brewer; Timothy Menzel

    2009-01-01

    Ecological restoration typically seeks to shift species composition toward that of existing reference sites. Yet, comparing the assemblages in restored and reference habitats assumes that similarity to the reference habitat is the optimal outcome of restoration and does not provide a perspective on regionally rare off-site species. When no such reference assemblages of...

  12. Efficient sidelobe ASK based dual-function radar-communications

    NASA Astrophysics Data System (ADS)

    Hassanien, Aboulnasr; Amin, Moeness G.; Zhang, Yimin D.; Ahmad, Fauzia

    2016-05-01

    Recently, dual-function radar-communications (DFRC) has been proposed as a means to mitigate the spectrum congestion problem. Existing amplitude-shift keying (ASK) methods for information embedding do not take full advantage of the highest permissible sidelobe level. In this paper, a new ASK-based signaling strategy for enhancing the signal-to-noise ratio (SNR) at the communication receiver is proposed. The proposed method employs one reference waveform and simultaneously transmits a number of orthogonal waveforms equal to the number of 1's in the binary sequence being embedded. A 3 dB SNR gain is achieved using the proposed method as compared to existing sidelobe ASK methods. The effectiveness of the proposed information embedding strategy is verified using simulation examples.

  13. Improving Upon String Methods for Transition State Discovery.

    PubMed

    Chaffey-Millar, Hugh; Nikodem, Astrid; Matveev, Alexei V; Krüger, Sven; Rösch, Notker

    2012-02-14

    Transition state discovery via application of string methods has been researched on two fronts. The first front involves development of a new string method, named the Searching String method, while the second one aims at estimating transition states from a discretized reaction path. The Searching String method has been benchmarked against a number of previously existing string methods and the Nudged Elastic Band method. The developed methods have led to a reduction in the number of gradient calls required to optimize a transition state, as compared to existing methods. The Searching String method reported here places new beads on a reaction pathway at the midpoint between existing beads, such that the resolution of the path discretization in the region containing the transition state grows exponentially with the number of beads. This approach leads to favorable convergence behavior and generates more accurate estimates of transition states from which convergence to the final transition states occurs more readily. Several techniques for generating improved estimates of transition states from a converged string or nudged elastic band have been developed and benchmarked on 13 chemical test cases. Optimization approaches for string methods, and pitfalls therein, are discussed.
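
    As a rough illustration of the midpoint bead-placement idea (a toy sketch, not the authors' implementation), one can repeatedly insert a bead at the midpoint of the segment adjacent to the current highest-energy bead, so that path resolution concentrates near the presumed transition state:

```python
import numpy as np

def refine_path(beads, energy, n_insert=5):
    """Toy midpoint insertion: repeatedly add a bead at the midpoint of
    the segment next to the current highest-energy bead, so that path
    resolution concentrates near the presumed transition state."""
    beads = [np.asarray(b, dtype=float) for b in beads]
    for _ in range(n_insert):
        i = int(np.argmax([energy(b) for b in beads]))   # bead nearest barrier top
        j = i + 1 if i + 1 < len(beads) else i - 1       # a neighbour on the path
        beads.insert(min(i, j) + 1, 0.5 * (beads[i] + beads[j]))
    return beads

# Example: a Gaussian barrier along a 1-D reaction coordinate
barrier = lambda x: np.exp(-((x[0] - 0.5) ** 2) / 0.02)
path = refine_path([np.array([t]) for t in np.linspace(0.0, 1.0, 5)], barrier)
```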

  14. Classification of high dimensional multispectral image data

    NASA Technical Reports Server (NTRS)

    Hoffbeck, Joseph P.; Landgrebe, David A.

    1993-01-01

    A method for classifying high dimensional remote sensing data is described. The technique uses a radiometric adjustment to allow a human operator to identify and label training pixels by visually comparing the remotely sensed spectra to laboratory reflectance spectra. Training pixels for materials without obvious spectral features are identified by traditional means. Features which are effective for discriminating between the classes are then derived from the original radiance data and used to classify the scene. This technique is applied to Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data taken over Cuprite, Nevada in 1992, and the results are compared to an existing geologic map. This technique performed well even with noisy data and despite the fact that some of the materials in the scene lack absorption features. No adjustment for the atmosphere or other scene variables was made to the data classified. While the experimental results compare favorably with an existing geologic map, the primary purpose of this research was to demonstrate the classification method, as compared to the geology of the Cuprite scene.

  15. CLT and AE methods of in-situ load testing : comparison and development of evaluation criteria : in-situ evaluation of post-tensioned parking garage, Kansas City, Missouri

    DOT National Transportation Integrated Search

    2008-02-01

    The objective of the proposed research project is to compare the results of two recently introduced nondestructive load test methods to the existing 24-hour load test method described in Chapter 20 of ACI 318-05. The two new methods of nondestructive...

  16. ASSESSING AND COMBINING RELIABILITY OF PROTEIN INTERACTION SOURCES

    PubMed Central

    LEACH, SONIA; GABOW, AARON; HUNTER, LAWRENCE; GOLDBERG, DEBRA S.

    2008-01-01

    Integrating diverse sources of interaction information to create protein networks requires strategies sensitive to differences in accuracy and coverage of each source. Previous integration approaches calculate reliabilities of protein interaction information sources based on congruity to a designated ‘gold standard.’ In this paper, we provide a comparison of the two most popular existing approaches and propose a novel alternative for assessing reliabilities which does not require a gold standard. We identify a new method for combining the resultant reliabilities and compare it against an existing method. Further, we propose an extrinsic approach to evaluation of reliability estimates, considering their influence on the downstream tasks of inferring protein function and learning regulatory networks from expression data. Results using this evaluation method show 1) our method for reliability estimation is an attractive alternative to those requiring a gold standard and 2) the new method for combining reliabilities is less sensitive to noise in reliability assignments than the similar existing technique. PMID:17990508

  17. Droplet Microarray Based on Superhydrophobic-Superhydrophilic Patterns for Single Cell Analysis.

    PubMed

    Jogia, Gabriella E; Tronser, Tina; Popova, Anna A; Levkin, Pavel A

    2016-12-09

    Single-cell analysis provides fundamental information on individual cell response to different environmental cues and is of growing interest in cancer and stem cell research. However, existing methods still face challenges in performing such analysis in a high-throughput yet cost-effective manner. Here we established the Droplet Microarray (DMA) as a miniaturized screening platform for high-throughput single-cell analysis. Using the method of limited dilution and varying cell density and seeding time, we optimized the distribution of single cells on the DMA. We established culturing conditions for single cells in individual droplets on DMA, obtaining survival of nearly 100% of single cells and doubling times comparable with those of cells cultured in bulk cell population using conventional methods. Our results demonstrate that the DMA is a suitable platform for single-cell analysis, which carries a number of advantages compared with existing technologies, allowing for treatment, staining and spot-to-spot analysis of single cells over time using conventional analysis methods such as microscopy.

  18. Virus Particle Detection by Convolutional Neural Network in Transmission Electron Microscopy Images.

    PubMed

    Ito, Eisuke; Sato, Takaaki; Sano, Daisuke; Utagawa, Etsuko; Kato, Tsuyoshi

    2018-06-01

    A new computational method for the detection of virus particles in transmission electron microscopy (TEM) images is presented. Our approach is to use a convolutional neural network that transforms a TEM image to a probabilistic map that indicates where virus particles exist in the image. Our proposed approach automatically and simultaneously learns both discriminative features and the classifier for virus particle detection by machine learning, in contrast to existing methods that are based on handcrafted features that yield many false positives and require several postprocessing steps. The detection performance of the proposed method was assessed against a dataset of TEM images containing feline calicivirus particles and compared with several existing detection methods, and the state-of-the-art performance of the developed method for detecting virus particles was demonstrated. Since our method is based on supervised learning that requires both the input images and their corresponding annotations, it is basically used for detection of already-known viruses. However, the method is highly flexible, and the convolutional networks can adapt themselves to any virus particles by learning automatically from an annotated dataset.
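
    Conceptually, a network that maps an image to a same-size probabilistic map can be fully convolutional. The following PyTorch sketch is illustrative only; the layer counts and widths are assumptions, not the architecture from the paper:

```python
import torch
import torch.nn as nn

# Minimal fully convolutional sketch: input is a 1-channel TEM image,
# output is a per-pixel probability that a virus particle is present.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=1), nn.Sigmoid(),
)

image = torch.randn(1, 1, 256, 256)   # batch of one grayscale image
prob_map = model(image)               # shape (1, 1, 256, 256), values in (0, 1)
```

    Training such a model would minimize a pixel-wise loss (e.g., binary cross-entropy) against annotated particle masks.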

  19. Quadcopter Control Using Speech Recognition

    NASA Astrophysics Data System (ADS)

    Malik, H.; Darma, S.; Soekirno, S.

    2018-04-01

    This research reports a comparison of the success rates of a speech recognition system using two types of databases, an existing database and a new database, implemented in a quadcopter for motion control. The speech recognition system used the Mel-frequency cepstral coefficient (MFCC) method for feature extraction and was trained using the recursive neural network (RNN) method. MFCC is one of the feature extraction methods most used for speech recognition, with success rates of 80%-95%. The existing database was used to measure the success rate of the RNN method. The new database was created using the Indonesian language, and its success rate was compared with results from the existing database. Sound input from the microphone was processed on a DSP module with the MFCC method to obtain the characteristic values. The characteristic values were then fed to the trained RNN, whose output was a command. The command became a control input to the single-board computer (SBC), whose output was the movement of the quadcopter. On the SBC, we used the robot operating system (ROS) as the kernel (operating system).
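
    As a hedged illustration of the feature-extraction step only, MFCCs can be computed with the librosa library; the file name, sampling rate, and frame pooling below are assumptions for demonstration, not details from the paper:

```python
import librosa

# Hypothetical usage: extract MFCC features from a recorded spoken command,
# which would then be fed to the trained recurrent network.
audio, sr = librosa.load("command.wav", sr=16000)       # hypothetical file
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)  # shape (13, n_frames)
features = mfcc.mean(axis=1)                            # simple frame pooling
```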

  20. Automatic sleep stage classification of single-channel EEG by using complex-valued convolutional neural network.

    PubMed

    Zhang, Junming; Wu, Yan

    2018-03-28

    Many systems have been developed for automatic sleep stage classification. However, nearly all models are based on handcrafted features. Because the feature space is large, feature selection must be used. Meanwhile, designing handcrafted features is a difficult and time-consuming task, because it requires the domain knowledge of experienced experts. Results vary when different sets of features are chosen to identify sleep stages. Additionally, many features that we may be unaware of exist; these features may be important for sleep stage classification. Therefore, a new sleep stage classification system, which is based on the complex-valued convolutional neural network (CCNN), is proposed in this study. Unlike existing sleep stage classification methods, our method can automatically extract features from raw electroencephalography data and then classify sleep stages based on the learned features. Additionally, we also prove that the decision boundaries for the real and imaginary parts of a complex-valued convolutional neuron intersect orthogonally. The classification performances of handcrafted features are compared with those of learned features via CCNN. Experimental results show that the proposed method is comparable to the existing methods. CCNN obtains a better classification performance and considerably faster convergence speed than a convolutional neural network. Experimental results also show that the proposed method is a useful decision-support tool for automatic sleep stage classification.
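
    A complex-valued convolution can be assembled from four real convolutions via (a + ib)(c + id) = (ac - bd) + i(ad + bc). The sketch below illustrates this building block on a 1-D signal; it is a toy under stated assumptions, not the CCNN from the paper:

```python
import numpy as np

def complex_conv1d(x_re, x_im, w_re, w_im):
    """One complex-valued convolution built from four real convolutions:
    (x_re + i*x_im) * (w_re + i*w_im)
      = (x_re*w_re - x_im*w_im) + i*(x_re*w_im + x_im*w_re)."""
    out_re = np.convolve(x_re, w_re, mode="same") - np.convolve(x_im, w_im, mode="same")
    out_im = np.convolve(x_re, w_im, mode="same") + np.convolve(x_im, w_re, mode="same")
    return out_re, out_im

# Toy usage: raw EEG epoch as the real part, zero imaginary part, random kernel
eeg = np.random.randn(3000)              # one 30-s epoch at 100 Hz (assumed)
re, im = complex_conv1d(eeg, np.zeros_like(eeg),
                        np.random.randn(9), np.random.randn(9))
activation = np.sqrt(re**2 + im**2)      # magnitude could feed the next layer
```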

  1. Controlling the Display of Capsule Endoscopy Video for Diagnostic Assistance

    NASA Astrophysics Data System (ADS)

    Vu, Hai; Echigo, Tomio; Sagawa, Ryusuke; Yagi, Keiko; Shiba, Masatsugu; Higuchi, Kazuhide; Arakawa, Tetsuo; Yagi, Yasushi

    Interpretations by physicians of capsule endoscopy image sequences captured over periods of 7-8 hours usually require 45 to 120 minutes of extreme concentration. This paper describes a novel method to reduce diagnostic time by automatically controlling the display frame rate. Unlike existing techniques, this method displays original images with no skipping of frames. The sequence can be played at a high frame rate in stable regions to save time. Then, in regions with rough changes, the speed is decreased to more conveniently ascertain suspicious findings. To realize such a system, cue information about the disparity of consecutive frames, including color similarity and motion displacements, is extracted. A decision tree utilizes these features to classify the states of the image acquisitions. For each classified state, the delay time between frames is calculated by parametric functions. A scheme selecting the optimal parameter set determined from assessments by physicians is deployed. Experiments involved clinical evaluations to investigate the effectiveness of this method compared to a standard view using an existing system. Results from logged-action-based analysis show that, compared with an existing system, the proposed method reduced diagnostic time to around 32.5 minutes per full sequence, while the number of abnormalities found was similar. As well, physicians needed less effort because of the system's efficient operability. The results of the evaluations should convince physicians that they can safely use this method and obtain reduced diagnostic times.

  2. A new collage steganographic algorithm using cartoon design

    NASA Astrophysics Data System (ADS)

    Yi, Shuang; Zhou, Yicong; Pun, Chi-Man; Chen, C. L. Philip

    2014-02-01

    Existing collage steganographic methods suffer from low payload of embedding messages. To improve the payload while providing a high level of security protection to messages, this paper introduces a new collage steganographic algorithm using cartoon design. It embeds messages into the least significant bits (LSBs) of color cartoon objects, applies different permutations to each object, and adds objects to a cartoon cover image to obtain the stego image. Computer simulations and comparisons demonstrate that the proposed algorithm shows significantly higher capacity of embedding messages compared with existing collage steganographic methods.
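
    The core LSB embedding step that such algorithms build on can be sketched as follows; the per-object permutations and collage composition described in the paper are not shown:

```python
import numpy as np

def embed_lsb(cover, message_bits):
    """Embed a bit sequence into the least significant bits of an
    8-bit image (flattened scan order). Capacity: one bit per byte."""
    flat = cover.flatten()                 # flatten() returns a copy
    if len(message_bits) > flat.size:
        raise ValueError("message longer than cover capacity")
    bits = np.asarray(message_bits, dtype=np.uint8)
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    return stego.flatten()[:n_bits] & 1

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
secret = np.random.randint(0, 2, 100)
stego = embed_lsb(cover, secret)
assert np.array_equal(extract_lsb(stego, 100), secret)
```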

  3. A SCIENTIFIC AND TECHNOLOGICAL FRAMEWORK FOR EVALUATING COMPARATIVE RISK IN ECOLOGICAL RISK ASSESSMENTS

    EPA Science Inventory

    There are significant scientific and technological challenges to managing natural resources. Data needs are cited as an obvious limitation, but there exist more fundamental scientific issues. What is still needed is a method of comparing management strategies based on projected i...

  4. Characterization of background concentrations of contaminants using a mixture of normal distributions.

    PubMed

    Qian, Song S; Lyons, Regan E

    2006-10-01

    We present a Bayesian approach for characterizing background contaminant concentration distributions using data from sites that may have been contaminated. Our method, focused on estimation, resolves several technical problems of the existing hypothesis-testing-based methods sanctioned by the U.S. Environmental Protection Agency (USEPA), resulting in a simple and quick procedure for estimating background contaminant concentrations. The proposed Bayesian method is applied to two data sets from a federal facility regulated under the Resource Conservation and Recovery Act. The results are compared to background distributions identified using existing methods recommended by the USEPA. The two data sets represent low and moderate levels of censorship in the data. Although an unbiased estimator is elusive, we show that the proposed Bayesian estimation method will have a smaller bias than the EPA-recommended method.
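
    The mixture-of-normals idea can be illustrated with a simple two-component EM fit, where the lower-mean component is read as the background population. This is a frequentist sketch for intuition only; the paper's method is Bayesian and also handles censored observations:

```python
import numpy as np
from scipy.stats import norm

def em_two_normals(x, n_iter=200):
    """EM for a two-component normal mixture. The component with the
    lower mean can be interpreted as the 'background' population."""
    x = np.asarray(x, dtype=float)
    mu = np.percentile(x, [25, 75])            # crude initialization
    sd = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: component responsibilities for every observation
        dens = np.stack([p * norm.pdf(x, m, s) for p, m, s in zip(pi, mu, sd)])
        r = dens / dens.sum(axis=0)
        # M-step: update mixing weights, means, standard deviations
        n_k = r.sum(axis=1)
        pi = n_k / x.size
        mu = (r * x).sum(axis=1) / n_k
        sd = np.sqrt((r * (x - mu[:, None]) ** 2).sum(axis=1) / n_k)
    return pi, mu, sd
```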

  5. New method to enhance the extraction yield of rutin from Sophora japonica using a novel ultrasonic extraction system by determining optimum ultrasonic frequency.

    PubMed

    Liao, Jianqing; Qu, Baida; Liu, Da; Zheng, Naiqin

    2015-11-01

    A new method has been proposed for enhancing the extraction yield of rutin from Sophora japonica, in which a novel ultrasonic extraction system has been developed to determine the optimum ultrasonic frequency by a two-step procedure. This study has systematically investigated the influence of a continuous frequency range of 20-92 kHz on rutin yields. The effects of different operating conditions on rutin yields have also been studied in detail, such as solvent concentration, solvent-to-solid ratio, ultrasound power, temperature and particle size. A higher extraction yield was obtained at an ultrasonic frequency of 60-62 kHz, and this optimum was little affected by the other extraction conditions. Comparative studies between existing methods and the present method were done to verify the effectiveness of this method. Results indicated that the new extraction method gave a higher extraction yield compared with existing ultrasound-assisted extraction (UAE) and Soxhlet extraction (SE). Thus, the potential use of this method may be promising for extraction of natural materials on an industrial scale in the future. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Hot-stage microscopy for determination of API particles in a formulated tablet.

    PubMed

    Simek, Michal; Grünwaldová, Veronika; Kratochvíl, Bohumil

    2014-01-01

    Although methods exist to readily determine the particle size distribution (PSD) of an active pharmaceutical ingredient (API) before its formulation into a final product, the primary challenge is to develop a method to determine the PSD of APIs in a finished tablet. To address the limitations of existing PSD methods, we used hot-stage microscopy to observe tablet disintegration during temperature change and, thus, reveal the API particles in a tablet. Both mechanical and liquid disintegration were evaluated after we had identified optimum milling time for mechanical disintegration and optimum volume of water for liquid disintegration. In each case, hot-stage micrographs, taken before and after the API melting point, were compared with image analysis software to obtain the PSDs. Then, the PSDs of the APIs from the disintegrated tablets were compared with the PSDs of raw APIs. Good agreement was obtained, thereby confirming the robustness of our methodology. The availability of such a method equips pharmaceutical scientists with an in vitro assessment method that will more reliably determine the PSD of active substances in finished tablets.

  7. Comparative Analysis of Various Single-tone Frequency Estimation Techniques in High-order Instantaneous Moments Based Phase Estimation Method

    NASA Astrophysics Data System (ADS)

    Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod

    2010-04-01

    For phase estimation in digital holographic interferometry, a high-order instantaneous moments (HIM) based method was recently developed which relies on piecewise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients using the HIM operator. A crucial step in the method is mapping the polynomial coefficient estimation to single-tone frequency determination for which various techniques exist. The paper presents a comparative analysis of the performance of the HIM operator based method in using different single-tone frequency estimation techniques for phase estimation. The analysis is supplemented by simulation results.
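
    As a point of reference, the simplest single-tone frequency estimator of the kind being compared is an FFT peak search refined by quadratic interpolation of the log-magnitude spectrum. A minimal sketch (windowing and refinement choices are assumptions, not techniques from the paper):

```python
import numpy as np

def single_tone_frequency(signal, fs):
    """Baseline single-tone frequency estimate: FFT peak location,
    refined by quadratic interpolation of the log-magnitude spectrum."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal * np.hanning(n)))
    k = int(np.argmax(spec[1:-1])) + 1            # skip DC and last bin
    a, b, c = np.log(spec[k - 1 : k + 2])         # neighbours of the peak
    delta = 0.5 * (a - c) / (a - 2 * b + c)       # sub-bin offset in [-0.5, 0.5]
    return (k + delta) * fs / n

fs = 1000.0
t = np.arange(1024) / fs
est = single_tone_frequency(np.cos(2 * np.pi * 123.4 * t + 0.7), fs)  # ~123.4 Hz
```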

  8. A PILOT STUDY TO COMPARE MICROBIAL AND CHEMICAL INDICATORS OF HUMAN FECAL CONTAMINATION IN WATER

    EPA Science Inventory

    Limitations exist in applying traditional microbial methods for the detection of human fecal contamination of water. A pilot study was undertaken to compare the microbial and chemical indicators of human fecal contamination of water. Sixty-four water samples were collected in O...

  9. An automated and universal method for measuring mean grain size from a digital image of sediment

    USGS Publications Warehouse

    Buscombe, Daniel D.; Rubin, David M.; Warrick, Jonathan A.

    2010-01-01

    Existing methods for estimating mean grain size of sediment in an image require either complicated sequences of image processing (filtering, edge detection, segmentation, etc.) or statistical procedures involving calibration. We present a new approach which uses Fourier methods to calculate grain size directly from the image without requiring calibration. Based on analysis of over 450 images, we found the accuracy to be within approximately 16% across the full range from silt to pebbles. Accuracy is comparable to, or better than, existing digital methods. The new method, in conjunction with recent advances in technology for taking appropriate images of sediment in a range of natural environments, promises to revolutionize the logistics and speed at which grain-size data may be obtained from the field.

  10. Stored grain pack factors for wheat: comparison of three methods to field measurements

    USDA-ARS?s Scientific Manuscript database

    Storing grain in bulk storage units results in grain packing from overbearing pressure, which increases grain bulk density and storage-unit capacity. This study compared pack factors of hard red winter (HRW) wheat in vertical storage bins using different methods: the existing packing model (WPACKING...

  11. Advanced bridge safety initiative : recommended practices for live load testing of existing flat-slab concrete bridges - task 5.

    DOT National Transportation Integrated Search

    2012-12-01

    Current AASHTO provisions for load rating flat-slab concrete bridges use the equivalent strip : width method, which is regarded as overly conservative compared to more advanced analysis : methods and field live load testing. It has been shown that li...

  12. Fast frequency domain method to detect skew in a document image

    NASA Astrophysics Data System (ADS)

    Mehta, Sunita; Walia, Ekta; Dutta, Maitreyee

    2015-12-01

    In this paper, a new fast frequency domain method based on the Discrete Wavelet Transform and the Fast Fourier Transform has been implemented for the determination of the skew angle in a document image. Firstly, image size reduction is done using the two-dimensional Discrete Wavelet Transform, and then the skew angle is computed using the Fast Fourier Transform. The skew angle error is almost negligible. The proposed method was tested on a large number of documents having skew between -90° and +90°, and the results were compared with the Moments with Discrete Wavelet Transform method and other commonly used existing methods. The proposed method works more efficiently than the existing methods. It also works with typed and picture documents having different fonts and resolutions, overcoming the drawback of the recently proposed Moments with Discrete Wavelet Transform method, which does not work with picture documents.
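
    A minimal sketch of the DWT-then-FFT pipeline is given below, assuming a grayscale document image as input; the peak picking and angle conversion are simplifications for illustration, not the authors' exact procedure:

```python
import numpy as np
import pywt

def estimate_skew(image):
    """Sketch of the DWT+FFT idea: shrink the image via the 2-D DWT
    approximation band, then read the dominant text-line orientation
    from the peak of the Fourier magnitude spectrum."""
    approx, _ = pywt.dwt2(image.astype(float), "haar")   # half-size approximation
    spec = np.abs(np.fft.fftshift(np.fft.fft2(approx)))
    h, w = spec.shape
    cy, cx = h // 2, w // 2
    spec[cy, cx] = 0.0                                   # suppress the DC term
    y, x = np.unravel_index(np.argmax(spec), spec.shape)
    angle = np.degrees(np.arctan2(y - cy, x - cx))
    # text lines run perpendicular to the direction of peak spectral energy
    return angle - 90.0 if angle > 0 else angle + 90.0

# skew = estimate_skew(document_image)  # document_image: 2-D grayscale array
```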

  13. Hole filling with oriented sticks in ultrasound volume reconstruction

    PubMed Central

    Vaughan, Thomas; Lasso, Andras; Ungi, Tamas; Fichtinger, Gabor

    2015-01-01

    Volumes reconstructed from tracked planar ultrasound images often contain regions where no information was recorded. Existing interpolation methods introduce image artifacts and tend to be slow in filling large missing regions. Our goal was to develop a computationally efficient method that fills missing regions while adequately preserving image features. We use directional sticks to interpolate between pairs of known opposing voxels in nearby images. We tested our method on 30 volumetric ultrasound scans acquired from human subjects, and compared its performance to that of other published hole-filling methods. Reconstruction accuracy, fidelity, and time were improved compared with other methods. PMID:26839907

  14. BAYESIAN META-ANALYSIS ON MEDICAL DEVICES: APPLICATION TO IMPLANTABLE CARDIOVERTER DEFIBRILLATORS

    PubMed Central

    Youn, Ji-Hee; Lord, Joanne; Hemming, Karla; Girling, Alan; Buxton, Martin

    2012-01-01

    Objectives: The aim of this study is to describe and illustrate a method to obtain early estimates of the effectiveness of a new version of a medical device. Methods: In the absence of empirical data, expert opinion may be elicited on the expected difference between the conventional and modified devices. Bayesian Mixed Treatment Comparison (MTC) meta-analysis can then be used to combine this expert opinion with existing trial data on earlier versions of the device. We illustrate this approach for a new four-pole implantable cardioverter defibrillator (ICD) compared with conventional ICDs, Class III anti-arrhythmic drugs, and conventional drug therapy for the prevention of sudden cardiac death in high risk patients. Existing RCTs were identified from a published systematic review, and we elicited opinion on the difference between four-pole and conventional ICDs from experts recruited at a cardiology conference. Results: Twelve randomized controlled trials were identified. Seven experts provided valid probability distributions for the new ICDs compared with current devices. The MTC model resulted in estimated relative risks of mortality of 0.74 (0.60–0.89) (predictive relative risk [RR] = 0.77 [0.41–1.26]) and 0.83 (0.70–0.97) (predictive RR = 0.84 [0.55–1.22]) with the new ICD therapy compared to Class III anti-arrhythmic drug therapy and conventional drug therapy, respectively. These results showed negligible differences from the preliminary results for the existing ICDs. Conclusions: The proposed method incorporating expert opinion to adjust for a modification made to an existing device may play a useful role in assisting decision makers to make early informed judgments on the effectiveness of frequently modified healthcare technologies. PMID:22559753

  15. A sampling framework for incorporating quantitative mass spectrometry data in protein interaction analysis.

    PubMed

    Tucker, George; Loh, Po-Ru; Berger, Bonnie

    2013-10-04

    Comprehensive protein-protein interaction (PPI) maps are a powerful resource for uncovering the molecular basis of genetic interactions and providing mechanistic insights. Over the past decade, high-throughput experimental techniques have been developed to generate PPI maps at proteome scale, first using yeast two-hybrid approaches and more recently via affinity purification combined with mass spectrometry (AP-MS). Unfortunately, data from both protocols are prone to both high false positive and false negative rates. To address these issues, many methods have been developed to post-process raw PPI data. However, with few exceptions, these methods only analyze binary experimental data (in which each potential interaction tested is deemed either observed or unobserved), neglecting quantitative information available from AP-MS such as spectral counts. We propose a novel method for incorporating quantitative information from AP-MS data into existing PPI inference methods that analyze binary interaction data. Our approach introduces a probabilistic framework that models the statistical noise inherent in observations of co-purifications. Using a sampling-based approach, we model the uncertainty of interactions with low spectral counts by generating an ensemble of possible alternative experimental outcomes. We then apply the existing method of choice to each alternative outcome and aggregate results over the ensemble. We validate our approach on three recent AP-MS data sets and demonstrate performance comparable to or better than state-of-the-art methods. Additionally, we provide an in-depth discussion comparing the theoretical bases of existing approaches and identify common aspects that may be key to their performance. Our sampling framework extends the existing body of work on PPI analysis using binary interaction data to apply to the richer quantitative data now commonly available through AP-MS assays. This framework is quite general, and many enhancements are likely possible. Fruitful future directions may include investigating more sophisticated schemes for converting spectral counts to probabilities and applying the framework to direct protein complex prediction methods.

  16. Measuring the performance of livability programs.

    DOT National Transportation Integrated Search

    2013-07-01

    This report analyzes the performance measurement processes adopted by five large livability programs throughout the United States. It compares and contrasts these programs by examining existing research in performance measurement methods. The ...

  17. Locally refined block-centred finite-difference groundwater models: Evaluation of parameter sensitivity and the consequences for inverse modelling

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are: (a) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed, and (b) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes, and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods, and the effect of the accuracy of the sensitivity calculations is evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.

  18. Locally refined block-centered finite-difference groundwater models: Evaluation of parameter sensitivity and the consequences for inverse modelling and predictions

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are (1) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed and (2) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes, and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods, and the effect of the accuracy of the sensitivity calculations is evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.

  19. Multi-fidelity methods for uncertainty quantification in transport problems

    NASA Astrophysics Data System (ADS)

    Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.

    2016-12-01

    We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, a re-scaled Multi Level Monte Carlo (rMLMC) method. The rMLMC is based on the idea that the statistics of quantities of interest depend on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multi Level Monte Carlo (MLMC) and reduced basis methods and discuss advantages of each approach.
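
    The two-level idea underlying such estimators can be sketched as E[Q_high] = E[Q_low] + E[Q_high - Q_low], with many cheap low-fidelity samples and a few paired high/low samples for the correction. A toy example is below; the solver and sample sizes are stand-ins, and the rMLMC rescaling itself is not shown:

```python
import numpy as np

rng = np.random.default_rng(0)

def solver(xi, n_cells):
    """Stand-in 'simulation': accuracy improves with resolution n_cells."""
    return np.sin(xi) + rng.normal(0.0, 1.0 / n_cells)

# Level 0: many cheap low-resolution runs
xi0 = rng.normal(size=10_000)
q_low = np.array([solver(x, 10) for x in xi0])

# Level 1: few paired runs sharing the SAME random input on both
# resolutions, so the correction term E[Q_high - Q_low] has small variance
xi1 = rng.normal(size=100)
corr = np.array([solver(x, 100) - solver(x, 10) for x in xi1])

mlmc_estimate = q_low.mean() + corr.mean()   # E[Q_low] + E[Q_high - Q_low]
```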

  20. Lessons from comparative effectiveness research methods development projects funded under the Recovery Act.

    PubMed

    Zurovac, Jelena; Esposito, Dominick

    2014-11-01

    The American Recovery and Reinvestment Act of 2009 (ARRA) directed nearly US$29.2 million to comparative effectiveness research (CER) methods development. To help inform future CER methods investments, we describe the ARRA CER methods projects, identify barriers to this research and discuss the alignment of topics with published methods development priorities. We used several existing resources and held discussions with ARRA CER methods investigators. Although funded projects explored many identified priority topics, investigators noted that much work remains. For example, given the considerable investments in CER data infrastructure, the methods development field can benefit from additional efforts to educate researchers about the availability of new data sources and about how best to apply methods to match their research questions and data.

  1. $n$-Dimensional Discrete Cat Map Generation Using Laplace Expansions.

    PubMed

    Wu, Yue; Hua, Zhongyun; Zhou, Yicong

    2016-11-01

    Different from existing methods that use matrix multiplications and have high computation complexity, this paper proposes an efficient generation method for n-dimensional (nD) Cat maps using Laplace expansions. New parameters are also introduced to control the spatial configurations of the nD Cat matrix. Thus, the proposed method provides an efficient way to mix dynamics of all dimensions at one time. To investigate its implementations and applications, we further introduce a fast implementation algorithm of the proposed method with time complexity O(n^4) and a pseudorandom number generator using the Cat map generated by the proposed method. The experimental results show that, compared with existing generation methods, the proposed method has a larger parameter space and simpler algorithm complexity, generates nD Cat matrices with a lower inner correlation, and thus yields more random and unpredictable outputs of nD Cat maps.
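
    For intuition, the classic 2-D member of this family is the Arnold cat map; the paper's contribution is generating the n-dimensional generalizations efficiently via Laplace expansions, which is not reproduced in this sketch:

```python
import numpy as np

def cat_map_permute(image, iterations=1):
    """Classic 2-D Arnold cat map, (x, y) -> (x + y, x + 2y) mod N,
    applied as a pixel permutation of a square image. The determinant
    of [[1, 1], [1, 2]] is 1, so the mapping is a bijection mod N."""
    n = image.shape[0]                   # assumes a square image
    out = image.copy()
    for _ in range(iterations):
        i, j = np.indices((n, n))
        out = out[(i + j) % n, (i + 2 * j) % n]
    return out
```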

  2. Tchebichef moment based restoration of Gaussian blurred images.

    PubMed

    Kumar, Ahlad; Paramesran, Raveendran; Lim, Chern-Loon; Dass, Sarat C

    2016-11-10

    With the knowledge of how edges vary in the presence of a Gaussian blur, a method that uses low-order Tchebichef moments is proposed to estimate the blur parameters: sigma (σ) and size (w). The difference between the Tchebichef moments of the original and the reblurred images is used as the feature vector to train an extreme learning machine for estimating the blur parameters (σ,w). The effectiveness of the proposed method to estimate the blur parameters is examined using cross-database validation. The estimated blur parameters from the proposed method are used in the split Bregman-based image restoration algorithm. A comparative analysis of the proposed method with three existing methods using all the images from the LIVE database is carried out. The results show that the proposed method in most of the cases performs better than the three existing methods in terms of the visual quality evaluated using the structural similarity index.

  3. Plasma Cleaning

    NASA Technical Reports Server (NTRS)

    Hintze, Paul E.

    2016-01-01

    NASA's Kennedy Space Center has developed two solvent-free precision cleaning techniques, plasma cleaning and supercritical carbon dioxide (SCCO2), that offer equal performance and cost parity with no environmental liability as compared to existing solvent cleaning methods.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scherman Rydhög, Jonas, E-mail: per.jonas.scherman.rydhoeg@regionh.dk; Munck af Rosenschöld, Per; Irming Jølck, Rasmus

    Purpose: A new biodegradable liquid fiducial marker was devised to allow for easy insertion in lung tumors using thin needles. The purpose of this study was to evaluate the visibility of the liquid fiducial markers for image-guided radiation therapy and compare to existing solid fiducial markers and to one existing liquid fiducial marker currently commercially available. Methods: Fiducial marker visibility was quantified in terms of contrast to noise ratio (CNR) on planar kilovoltage x-ray images in a thorax phantom for different concentrations of the radio-opaque component of the new liquid fiducial marker, four solid fiducial markers, and one existing liquid fiducial marker. Additionally, the image artifacts produced on computer tomography (CT) and cone-beam CT (CBCT) of all fiducial markers were quantified. Results: The authors found that the new liquid fiducial marker with the highest concentration of the radio-opaque component had a CNR > 2.05 for 62/63 exposures, which compared favorably to the existing solid fiducial markers and to the existing liquid fiducial marker evaluated. On CT and CBCT, the new liquid fiducial marker with the highest concentration produced a lower streaking index artifact (30 and 14, respectively) than the solid gold markers (113 and 20, respectively) and the existing liquid fiducial marker (39 and 20, respectively). The size of the image artifact was larger for all of the liquid fiducial markers compared to the solid fiducial markers because of their larger physical size. Conclusions: The visibility and the image artifacts produced by the new liquid fiducial markers were comparable to existing solid fiducial markers and the existing liquid fiducial marker. The authors conclude that the new liquid fiducial marker represents an alternative to the fiducial markers tested.

  5. Deep learning methods for protein torsion angle prediction.

    PubMed

    Li, Haiou; Hou, Jie; Adhikari, Badri; Lyu, Qiang; Cheng, Jianlin

    2017-09-18

    Deep learning is one of the most powerful machine learning methods and has achieved state-of-the-art performance in many domains. Since deep learning was introduced to the field of bioinformatics in 2012, it has achieved success in a number of areas such as protein residue-residue contact prediction, secondary structure prediction, and fold recognition. In this work, we developed deep learning methods to improve the prediction of torsion (dihedral) angles of proteins. We designed four different deep learning architectures to predict protein torsion angles: a deep neural network (DNN), a deep restricted Boltzmann machine (DRBM), a deep recurrent neural network (DRNN), and a deep recurrent restricted Boltzmann machine (DReRBM), since protein torsion angle prediction is a sequence-related problem. In addition to existing protein features, two new features (predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments) are used as input to each of the four deep learning architectures to predict the phi and psi angles of the protein backbone. The mean absolute error (MAE) of phi and psi angles predicted by DRNN, DReRBM, DRBM and DNN is about 20-21° and 29-30° on an independent dataset. The MAE of the phi angle is comparable to the existing methods, but the MAE of the psi angle is 29°, 2° lower than the existing methods. On the latest CASP12 targets, our methods also achieved performance better than or comparable to a state-of-the-art method. Our experiment demonstrates that deep learning is a valuable method for predicting protein torsion angles. The deep recurrent network architecture performs slightly better than the deep feed-forward architecture, and the predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments are useful features for improving prediction accuracy.

  6. A comparative study of ChIP-seq sequencing library preparation methods.

    PubMed

    Sundaram, Arvind Y M; Hughes, Timothy; Biondi, Shea; Bolduc, Nathalie; Bowman, Sarah K; Camilli, Andrew; Chew, Yap C; Couture, Catherine; Farmer, Andrew; Jerome, John P; Lazinski, David W; McUsic, Andrew; Peng, Xu; Shazand, Kamran; Xu, Feng; Lyle, Robert; Gilfillan, Gregor D

    2016-10-21

    ChIP-seq is the primary technique used to investigate genome-wide protein-DNA interactions. As part of this procedure, immunoprecipitated DNA must undergo "library preparation" to enable subsequent high-throughput sequencing. To facilitate the analysis of biopsy samples and rare cell populations, there has been a recent proliferation of methods allowing sequencing library preparation from low-input DNA amounts. However, little information exists on the relative merits, performance, comparability and biases inherent to these procedures. Notably, recently developed single-cell ChIP procedures employing microfluidics must also employ library preparation reagents to allow downstream sequencing. In this study, seven methods designed for low-input DNA/ChIP-seq sample preparation (Accel-NGS® 2S, Bowman-method, HTML-PCR, SeqPlex™, DNA SMART™, TELP and ThruPLEX®) were performed on five replicates of 1 ng and 0.1 ng input H3K4me3 ChIP material, and compared to a "gold standard" reference PCR-free dataset. The performance of each method was examined for the prevalence of unmappable reads, amplification-derived duplicate reads, reproducibility, and for the sensitivity and specificity of peak calling. We identified consistent high performance in a subset of the tested reagents, which should aid researchers in choosing the most appropriate reagents for their studies. Furthermore, we expect this work to drive future advances by identifying and encouraging use of the most promising methods and reagents. The results may also aid judgements on how comparable are existing datasets that have been prepared with different sample library preparation reagents.

  7. Walking on a user similarity network towards personalized recommendations.

    PubMed

    Gan, Mingxin

    2014-01-01

    Personalized recommender systems have been receiving more and more attention in addressing the serious problem of information overload accompanying the rapid evolution of the world-wide-web. Although traditional collaborative filtering approaches based on similarities between users have achieved remarkable success, it has been shown that the existence of popular objects may adversely influence the correct scoring of candidate objects, which leads to unreasonable recommendation results. Meanwhile, recent advances have demonstrated that approaches based on diffusion and random walk processes exhibit superior performance over collaborative filtering methods in both recommendation accuracy and diversity. Building on these results, we adopt three strategies (power-law adjustment, nearest neighbor, and threshold filtration) to adjust a user similarity network from user similarity scores calculated on historical data, and then propose a random walk with restart model on the constructed network to achieve personalized recommendations. We perform cross-validation experiments on two real data sets (MovieLens and Netflix) and compare the performance of our method against the existing state-of-the-art methods. Results show that our method outperforms existing methods in not only recommendation accuracy and diversity, but also retrieval performance.
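
    A minimal random-walk-with-restart iteration on a similarity matrix looks like the sketch below; the paper's network-adjustment strategies are not shown, and the restart probability is an assumed value:

```python
import numpy as np

def random_walk_with_restart(W, seed, restart=0.15, tol=1e-10):
    """Random walk with restart on a user similarity network.
    W: nonnegative similarity matrix; seed: index of the target user.
    Returns the stationary visiting probabilities over all users."""
    P = W / W.sum(axis=0, keepdims=True)       # column-stochastic transitions
    e = np.zeros(W.shape[0])
    e[seed] = 1.0
    p = e.copy()
    while True:
        p_next = (1 - restart) * P @ p + restart * e
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next
```

    Candidate items could then be scored by the preferences of other users, weighted by these visiting probabilities.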

  8. An improved conjugate gradient scheme to the solution of least squares SVM.

    PubMed

    Chu, Wei; Ong, Chong Jin; Keerthi, S Sathiya

    2005-03-01

    The least squares support vector machine (LS-SVM) formulation corresponds to the solution of a linear system of equations. Several approaches to its numerical solution have been proposed in the literature. In this letter, we propose an improved method for the numerical solution of LS-SVM and show that the problem can be solved using one reduced system of linear equations. Compared with the existing algorithm for LS-SVM, the approach used in this letter is about twice as efficient. Numerical results using the proposed method are provided for comparisons with other existing algorithms.
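
    For context, the LS-SVM training problem (regression form) is one dense linear KKT system; the sketch below solves it directly with NumPy under assumed RBF-kernel hyperparameters, whereas the letter's contribution is a more efficient reduced conjugate-gradient scheme for the same system:

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Direct dense solve of the LS-SVM (regression form) KKT system
        [ 0   1^T         ] [b    ]   [0]
        [ 1   K + I/gamma ] [alpha] = [y]
    with an RBF kernel K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]        # bias b, support values alpha
```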

  9. An Examination of Alternative Multidimensional Scaling Techniques

    ERIC Educational Resources Information Center

    Papazoglou, Sofia; Mylonas, Kostas

    2017-01-01

    The purpose of this study is to compare alternative multidimensional scaling (MDS) methods for constraining the stimuli on the circumference of a circle and on the surface of a sphere. Specifically, the existing MDS-T method for plotting the stimuli on the circumference of a circle is applied, and its extension is proposed for constraining the…

  10. Race and Ethnicity in Research Methods. Sage Focus Editions, Volume 157.

    ERIC Educational Resources Information Center

    Stanfield, John H., II, Ed.; Dennis, Rutledge M., Ed.

    The contributions in this volume examine the array of methods used in quantitative, qualitative, and comparative and historical research to show how research sensitive to ethnic issues can best be conducted. Rethinking and revising traditional methodologies and applying new ones can portray racial and ethnic issues as they really exist. The…

  11. A Probability Based Framework for Testing the Missing Data Mechanism

    ERIC Educational Resources Information Center

    Lin, Johnny Cheng-Han

    2013-01-01

    Many methods exist for imputing missing data but fewer methods have been proposed to test the missing data mechanism. Little (1988) introduced a multivariate chi-square test for the missing completely at random data mechanism (MCAR) that compares observed means for each pattern with expectation-maximization (EM) estimated means. As an alternative,…

  12. Estrogen transport in surface runoff from agricultural fields treated with two different application methods of dairy manure

    USDA-ARS?s Scientific Manuscript database

    While the land-application of animal manure provides many benefits, concerns exist regarding the subsequent transport of hormones and potential effects on aquatic ecosystems. This study compares two methods of dairy manure application, surface broadcasting and shallow disk injection, on the fate and...

  13. Adaptive Fading Memory H∞ Filter Design for Compensation of Delayed Components in Self Powered Flux Detectors

    NASA Astrophysics Data System (ADS)

    Tamboli, Prakash Kumar; Duttagupta, Siddhartha P.; Roy, Kallol

    2015-08-01

    The paper deals with dynamic compensation of delayed Self Powered Flux Detectors (SPFDs) using a discrete-time H∞ filtering method to improve the response of SPFDs with significant delayed components, such as platinum and vanadium SPFDs. We also present a comparative study between the Linear Matrix Inequality (LMI) based H∞ filtering and Algebraic Riccati Equation (ARE) based Kalman filtering methods with respect to their delay compensation capabilities. Finally, an improved recursive H∞ filter based on the adaptive fading memory technique is proposed, which provides improved performance over existing methods. The existing delay compensation algorithms do not account for the rate of change in the signal when determining the filter gain and therefore add significant noise during the delay compensation process. The proposed adaptive fading memory H∞ filter minimizes the overall noise very effectively while keeping the response time at a minimum. The recursive algorithm is easy to implement in real time as compared to the LMI (or ARE) based solutions.
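
    The fading-memory ingredient can be illustrated on a scalar Kalman filter: inflating the predicted covariance by a factor lambda^2 >= 1 discounts old measurements. This sketch assumes a scalar model and a fixed lambda; the paper adapts lambda online and works in an H∞ setting:

```python
import numpy as np

def fading_memory_kf(z, a, c, q, r, lam=1.02):
    """Scalar fading-memory Kalman filter: the predicted covariance is
    inflated by lam**2 >= 1, which down-weights older measurements.
    Adaptive variants would tune lam from the innovation size."""
    x, p = 0.0, 1.0
    estimates = []
    for zk in z:
        # predict, with covariance inflation (the 'fading memory')
        x = a * x
        p = lam**2 * (a * p * a + q)
        # update with the new measurement
        k = p * c / (c * p * c + r)
        x = x + k * (zk - c * x)
        p = (1 - k * c) * p
        estimates.append(x)
    return np.array(estimates)
```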

  14. Increasing family planning in Myanmar: the role of the private sector and social franchise programs.

    PubMed

    Aung, Tin; Hom, Nang Mo; Sudhinaraset, May

    2017-07-01

    This study examines the influence of a clinical social franchise program on modern contraceptive use. This was a cross-sectional survey of contraceptive use among 2390 currently married women across 25 townships in Myanmar in 2014. Social franchise program measures were from programmatic records. Multivariable models show that women who lived in communities with at least 1-5 years of a clinical social franchise intrauterine device (IUD) program had 4.770 times higher odds of using a modern contraceptive method compared to women living in communities with no IUD program [CI: 3.739-6.084]. Women in townships where the reproductive health program had existed for at least 10 years had 1.428 times higher odds of reporting modern method use compared to women living in townships where the programs had existed for less than 10 years [CI: 1.016-2.008]. This study found consistent and robust evidence for an increase in family planning methods over program duration as well as intensity of social franchise programs.

  15. Probabilistic segmentation and intensity estimation for microarray images.

    PubMed

    Gottardo, Raphael; Besag, Julian; Stephens, Matthew; Murua, Alejandro

    2006-01-01

    We describe a probabilistic approach to simultaneous image segmentation and intensity estimation for complementary DNA microarray experiments. The approach overcomes several limitations of existing methods. In particular, it (a) uses a flexible Markov random field approach to segmentation that allows for a wider range of spot shapes than existing methods, including relatively common 'doughnut-shaped' spots; (b) models the image directly as background plus hybridization intensity, and estimates the two quantities simultaneously, avoiding the common logical error that estimates of foreground may be less than those of the corresponding background if the two are estimated separately; and (c) uses a probabilistic modeling approach to simultaneously perform segmentation and intensity estimation, and to compute spot quality measures. We describe two approaches to parameter estimation: a fast algorithm, based on the expectation-maximization and the iterated conditional modes algorithms, and a fully Bayesian framework. These approaches produce comparable results, and both appear to offer some advantages over other methods. We use an HIV experiment to compare our approach to two commercial software products: Spot and Arrayvision.

  16. Faster Mass Spectrometry-based Protein Inference: Junction Trees are More Efficient than Sampling and Marginalization by Enumeration

    PubMed Central

    Serang, Oliver; Noble, William Stafford

    2012-01-01

    The problem of identifying the proteins in a complex mixture using tandem mass spectrometry can be framed as an inference problem on a graph that connects peptides to proteins. Several existing protein identification methods make use of statistical inference methods for graphical models, including expectation maximization, Markov chain Monte Carlo, and full marginalization coupled with approximation heuristics. We show that, for this problem, the majority of the cost of inference usually comes from a few highly connected subgraphs. Furthermore, we evaluate three different statistical inference methods using a common graphical model, and we demonstrate that junction tree inference substantially improves rates of convergence compared to existing methods. The python code used for this paper is available at http://noble.gs.washington.edu/proj/fido. PMID:22331862

  17. Embedded WENO: A design strategy to improve existing WENO schemes

    NASA Astrophysics Data System (ADS)

    van Lith, Bart S.; ten Thije Boonkkamp, Jan H. M.; IJzerman, Wilbert L.

    2017-02-01

    Embedded WENO methods utilise all adjacent smooth substencils to construct a desirable interpolation. Conventional WENO schemes under-use this possibility close to large gradients or discontinuities. We develop a general approach for constructing embedded versions of existing WENO schemes. Embedded methods based on the WENO schemes of Jiang and Shu [1] and on the WENO-Z scheme of Borges et al. [2] are explicitly constructed. Several possible choices are presented that result in either better spectral properties or a higher order of convergence for sufficiently smooth solutions. Moreover, these improvements carry over to discontinuous solutions. The embedded methods are demonstrated to be indeed improvements over their standard counterparts by several numerical examples. All the embedded methods presented incur no added computational effort compared to their standard counterparts.
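
    For reference, the standard fifth-order WENO-JS building block that embedded variants modify can be sketched as follows; the coefficients are the classical ones from Jiang and Shu, while the embedded weighting itself is not reproduced here:

```python
import numpy as np

def weno5_left(f):
    """Classic WENO-JS fifth-order left-biased reconstruction at the
    interface i+1/2 from five cell averages f = [f_{i-2}, ..., f_{i+2}]."""
    eps = 1e-6
    # smoothness indicators of the three substencils
    b0 = 13/12*(f[0]-2*f[1]+f[2])**2 + 0.25*(f[0]-4*f[1]+3*f[2])**2
    b1 = 13/12*(f[1]-2*f[2]+f[3])**2 + 0.25*(f[1]-f[3])**2
    b2 = 13/12*(f[2]-2*f[3]+f[4])**2 + 0.25*(3*f[2]-4*f[3]+f[4])**2
    d = np.array([0.1, 0.6, 0.3])                    # ideal (linear) weights
    alpha = d / (eps + np.array([b0, b1, b2]))**2    # nonlinear weights
    w = alpha / alpha.sum()
    # third-order candidate reconstructions on each substencil
    p0 = (2*f[0] - 7*f[1] + 11*f[2]) / 6
    p1 = ( -f[1] + 5*f[2] +  2*f[3]) / 6
    p2 = (2*f[2] + 5*f[3] -    f[4]) / 6
    return w @ np.array([p0, p1, p2])
```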

  18. A Novel Evaluation Model for the Vehicle Navigation Device Market Using Hybrid MCDM Techniques

    NASA Astrophysics Data System (ADS)

    Lin, Chia-Li; Hsieh, Meng-Shu; Tzeng, Gwo-Hshiung

    The development strategy for navigation devices (NDs) is also presented to initiate the product roadmap. Criteria for evaluation were constructed by reviewing papers, interviewing experts and brainstorming. The ISM (interpretive structural modeling) method was used to construct the relationships between the criteria. Existing NDs were sampled to benchmark the gap between the consumer's aspired/desired utilities and the utilities of existing/developing NDs. The VIKOR method was applied to rank the sampled NDs. This paper proposes the key criteria driving the purchase of a new ND and compares the consumer behavior of various consumer profiles. These conclusions can serve as a reference for ND producers for improving existing functions or planning further utilities in the next e-era ND generation.
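
    The VIKOR ranking step can be sketched as follows, assuming benefit-type criteria and illustrative scores and weights (not data from the paper):

```python
import numpy as np

def vikor(scores, weights, v=0.5):
    """VIKOR compromise ranking. scores: (m alternatives x n criteria),
    higher = better for every criterion; weights sum to 1; v balances
    group utility against individual regret."""
    best, worst = scores.max(axis=0), scores.min(axis=0)
    norm = (best - scores) / (best - worst)          # distance to the ideal
    S = (weights * norm).sum(axis=1)                 # group utility
    R = (weights * norm).max(axis=1)                 # individual regret
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return np.argsort(Q)                             # best alternative first

ranking = vikor(np.array([[7, 8, 6], [9, 6, 7], [6, 9, 8.0]]),
                np.array([0.5, 0.3, 0.2]))
```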

  19. Acoustic-Liner Admittance in a Duct

    NASA Technical Reports Server (NTRS)

    Watson, W. R.

    1986-01-01

    The method calculates admittance from easily obtainable values. This new method for calculating acoustic-liner admittance in a rectangular duct with grazing flow is based on a finite-element discretization of the acoustic field and a reposing of the unknown admittance value as a linear eigenvalue problem. The problem is solved by Gaussian elimination. Unlike existing methods, the present method is extendable to mean flows with two-dimensional boundary layers as well. In the presence of shear, the results of the method compared well with the results of a Runge-Kutta integration technique.

  20. Critical Analysis of Existing Recyclability Assessment Methods for New Products in Order to Define a Reference Method

    NASA Astrophysics Data System (ADS)

    Maris, E.; Froelich, D.

    The designers of products subject to the European regulations on waste have an obligation to improve the recyclability of their products from the very first design stages. The statutory texts refer to ISO standard 22628, which proposes a method to calculate vehicle recyclability. There are several scientific studies that propose other calculation methods as well. Yet the feedback from the CREER club, a group of manufacturers and suppliers expert in ecodesign and recycling, is that the product recyclability calculation method proposed in this standard is not satisfactory, since only a mass indicator is used, the calculation scope is not clearly defined, and common data on the recycling industry do not exist to allow comparable calculations to be made for different products. For these reasons, it is difficult for manufacturers to have access to a method and common data for calculation purposes.

  1. Leveraging transcript quantification for fast computation of alternative splicing profiles.

    PubMed

    Alamancos, Gael P; Pagès, Amadís; Trincado, Juan L; Bellora, Nicolás; Eyras, Eduardo

    2015-09-01

    Alternative splicing plays an essential role in many cellular processes and bears major relevance in the understanding of multiple diseases, including cancer. High-throughput RNA sequencing allows genome-wide analyses of splicing across multiple conditions. However, the increasing number of available data sets represents a major challenge in terms of computation time and storage requirements. We describe SUPPA, a computational tool to calculate relative inclusion values of alternative splicing events, exploiting fast transcript quantification. SUPPA accuracy is comparable to, and sometimes higher than, that of standard methods on simulated as well as real RNA-sequencing data, benchmarked against experimentally validated events. We assess the variability in terms of the choice of annotation and provide evidence that using complete transcripts rather than more transcripts per gene provides better estimates. Moreover, SUPPA coupled with de novo transcript reconstruction methods does not achieve accuracies as high as using quantification of known transcripts, but remains comparable to existing methods. Finally, we show that SUPPA is more than 1000 times faster than standard methods. Coupled with fast transcript quantification, SUPPA provides inclusion values at a much higher speed than existing methods without compromising accuracy, thereby facilitating the systematic splicing analysis of large data sets with limited computational resources. The software is implemented in Python 2.7 and is available under the MIT license at https://bitbucket.org/regulatorygenomicsupf/suppa. © 2015 Alamancos et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
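
    The inclusion value that SUPPA computes is, at its core, a ratio of transcript abundances. Below is a minimal sketch of that idea; the transcript identifiers and TPM values are purely illustrative, and SUPPA's annotation parsing and event handling are omitted.

      # PSI of a splicing event from transcript TPM estimates: the abundance
      # of transcripts including the event form over the total abundance of
      # all transcripts compatible with the event.
      def psi(tpm, inclusion_ids, total_ids):
          inc = sum(tpm[t] for t in inclusion_ids)
          tot = sum(tpm[t] for t in total_ids)
          return inc / tot if tot > 0 else float("nan")

      tpm = {"tx1": 12.0, "tx2": 4.0, "tx3": 0.5}  # hypothetical quantification
      print(psi(tpm, inclusion_ids=["tx1"], total_ids=["tx1", "tx2", "tx3"]))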

  2. FGWAS: Functional genome wide association analysis.

    PubMed

    Huang, Chao; Thompson, Paul; Wang, Yalin; Yu, Yang; Zhang, Jingwen; Kong, Dehan; Colen, Rivka R; Knickmeyer, Rebecca C; Zhu, Hongtu

    2017-10-01

    Functional phenotypes (e.g., subcortical surface representation), which commonly arise in imaging genetic studies, have been used to detect putative genes for complexly inherited neuropsychiatric and neurodegenerative disorders. However, existing statistical methods largely ignore the functional features (e.g., functional smoothness and correlation). The aim of this paper is to develop a functional genome-wide association analysis (FGWAS) framework to efficiently carry out whole-genome analyses of functional phenotypes. FGWAS consists of three components: a multivariate varying coefficient model, a global sure independence screening procedure, and a test procedure. Compared with the standard multivariate regression model, the multivariate varying coefficient model explicitly models the functional features of functional phenotypes through the integration of smooth coefficient functions and functional principal component analysis. Statistically, compared with existing methods for genome-wide association studies (GWAS), FGWAS can substantially boost the detection power for discovering important genetic variants influencing brain structure and function. Simulation studies show that FGWAS outperforms existing GWAS methods for searching sparse signals in an extremely large search space, while controlling for the family-wise error rate. We have successfully applied FGWAS to large-scale analysis of data from the Alzheimer's Disease Neuroimaging Initiative for 708 subjects, 30,000 vertices on the left and right hippocampal surfaces, and 501,584 SNPs. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. A Comparative Investigation of the Combined Effects of Pre-Processing, Wavelength Selection, and Regression Methods on Near-Infrared Calibration Model Performance.

    PubMed

    Wan, Jian; Chen, Yi-Chieh; Morris, A Julian; Thennadil, Suresh N

    2017-07-01

    Near-infrared (NIR) spectroscopy is being widely used in various fields ranging from pharmaceutics to the food industry for analyzing chemical and physical properties of the substances concerned. Its advantages over other analytical techniques include available physical interpretation of spectral data, nondestructive nature and high speed of measurements, and little or no need for sample preparation. The successful application of NIR spectroscopy relies on three main aspects: pre-processing of spectral data to eliminate nonlinear variations due to temperature, light scattering effects and many others; selection of those wavelengths that contribute useful information; and identification of suitable calibration models using linear/nonlinear regression. Several methods have been developed for each of these three aspects and many comparative studies of different methods exist for an individual aspect or some combinations. However, there is still a lack of comparative studies of the interactions among these three aspects, which can shed light on what role each aspect plays in the calibration and how to combine the various methods of each aspect to obtain the best calibration model. This paper aims to provide such a comparative study based on four benchmark data sets using three typical pre-processing methods, namely, orthogonal signal correction (OSC), extended multiplicative signal correction (EMSC) and optical path-length estimation and correction (OPLEC); two existing wavelength selection methods, namely, stepwise forward selection (SFS) and genetic algorithm optimization combined with partial least squares regression for spectral data (GAPLSSP); and four popular regression methods, namely, partial least squares (PLS), least absolute shrinkage and selection operator (LASSO), least squares support vector machine (LS-SVM), and Gaussian process regression (GPR). The comparative study indicates that, in general, pre-processing of spectral data can play a significant role in the calibration while wavelength selection plays a marginal role, and that the combination of certain pre-processing, wavelength selection, and nonlinear regression methods can achieve superior performance over traditional linear regression-based calibration.
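
    As a rough illustration of chaining these aspects together, the sketch below applies a simple standard-normal-variate scatter correction (a stand-in for OSC/EMSC/OPLEC, which are more involved) before fitting a PLS model to synthetic spectra; this is not the authors' exact pipeline.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      def snv(X):
          # Row-wise standardization of each spectrum (scatter correction).
          return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

      rng = np.random.default_rng(0)
      X = rng.random((50, 200))  # 50 synthetic spectra, 200 wavelengths
      y = X[:, :10].sum(axis=1) + 0.01 * rng.standard_normal(50)

      model = PLSRegression(n_components=5).fit(snv(X), y)
      print(model.predict(snv(X))[:3].ravel())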

  4. The application of feature selection to the development of Gaussian process models for percutaneous absorption.

    PubMed

    Lam, Lun Tak; Sun, Yi; Davey, Neil; Adams, Rod; Prapopoulou, Maria; Brown, Marc B; Moss, Gary P

    2010-06-01

    The aim was to employ Gaussian processes to assess mathematically the nature of a skin permeability dataset and to employ these methods, particularly feature selection, to determine the key physicochemical descriptors which exert the most significant influence on percutaneous absorption, and to compare such models with established existing models. Gaussian processes, including automatic relevance determination (GPRARD) methods, were employed to develop models of percutaneous absorption that identified key physicochemical descriptors of percutaneous absorption. Using MATLAB software, the statistical performance of these models was compared with single linear networks (SLN) and quantitative structure-permeability relationships (QSPRs). Feature selection methods were used to examine in more detail the physicochemical parameters used in this study. A range of statistical measures to determine model quality were used. The inherently nonlinear nature of the skin data set was confirmed. The Gaussian process regression (GPR) methods yielded predictive models that offered statistically significant improvements over SLN and QSPR models with regard to predictivity (where the rank order was: GPR > SLN > QSPR). Feature selection analysis determined that the best GPR models were those that contained log P, melting point and the number of hydrogen bond donor groups as significant descriptors. Further statistical analysis also found that great synergy existed between certain parameters. It suggested that a number of the descriptors employed were effectively interchangeable, thus questioning the use of models where discrete variables are output, usually in the form of an equation. The use of a nonlinear GPR method produced models with significantly improved predictivity, compared with SLN or QSPR models. Feature selection methods were able to provide important mechanistic information. However, it was also shown that significant synergy existed between certain parameters, and as such it was possible to interchange certain descriptors (i.e. molecular weight and melting point) without incurring a loss of model quality. Such synergy suggested that a model constructed from discrete terms in an equation may not be the most appropriate way of representing mechanistic understandings of skin absorption.
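
    For a sense of how ARD-style feature selection works with Gaussian process regression, the sketch below fits an anisotropic RBF kernel with one length-scale per descriptor; descriptors with little influence are driven to long length-scales. The three-descriptor setup is illustrative only, not the paper's data set.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(1)
      X = rng.random((40, 3))  # stand-ins for e.g. log P, melting point, H-bond donors
      y = 2.0 * X[:, 0] + 0.05 * rng.standard_normal(40)  # only feature 0 matters

      kernel = RBF(length_scale=[1.0, 1.0, 1.0])  # one length-scale per feature (ARD)
      gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
      print(gpr.kernel_.length_scale)  # irrelevant descriptors get long length-scales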

  5. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery.

    PubMed

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-07-19

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of a large area threshold prohibits detection of small buildings, and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation. However, the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane, whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes with a transparent or opaque roof can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting and all parameters are set automatically from the data. The other post-processing stages, including variance, point density and shadow elimination, are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets by using object- and pixel-based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roofs. When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing methods in various evaluation metrics.

  6. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery

    PubMed Central

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-01-01

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of a large area threshold prohibits detection of small buildings, and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation. However, the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane, whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes with a transparent or opaque roof can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting and all parameters are set automatically from the data. The other post-processing stages, including variance, point density and shadow elimination, are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets by using object- and pixel-based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roofs. When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing methods in various evaluation metrics. PMID:27447631

  7. A family of conjugate gradient methods for large-scale nonlinear equations.

    PubMed

    Feng, Dexiang; Sun, Min; Wang, Xueyong

    2017-01-01

    In this paper, we present a family of conjugate gradient projection methods for solving large-scale nonlinear equations. At each iteration, the method needs low storage and the subproblem can be solved easily. Compared with existing solution methods for the problem, its global convergence is established without requiring Lipschitz continuity of the underlying mapping. Preliminary numerical results are reported to show the efficiency of the proposed method.
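
    A minimal sketch of a derivative-free conjugate gradient projection iteration for monotone equations F(x) = 0, in the spirit the abstract describes; the line-search and beta parameters are illustrative choices, not necessarily the authors'.

      import numpy as np

      def cg_projection(F, x, iters=200, sigma=1e-4, rho=0.5, tol=1e-8):
          d = -F(x)
          for _ in range(iters):
              Fx = F(x)
              if np.linalg.norm(Fx) < tol:
                  break
              t = 1.0  # backtrack until -F(x + t d)^T d >= sigma t ||d||^2
              for _ in range(50):
                  if -F(x + t * d) @ d >= sigma * t * (d @ d):
                      break
                  t *= rho
              z = x + t * d
              Fz = F(z)
              if np.linalg.norm(Fz) < tol:
                  return z
              # hyperplane projection step; no Lipschitz constant needed
              x = x - (Fz @ (x - z)) / (Fz @ Fz) * Fz
              beta = (F(x) @ F(x)) / (Fx @ Fx)  # Fletcher-Reeves-type beta
              d = -F(x) + beta * d
          return x

      # monotone test problem F(x) = x + sin(x), root at the origin
      print(cg_projection(lambda v: v + np.sin(v), np.array([1.0, -2.0])))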

  8. A Doubly Stochastic Change Point Detection Algorithm for Noisy Biological Signals.

    PubMed

    Gold, Nathan; Frasch, Martin G; Herry, Christophe L; Richardson, Bryan S; Wang, Xiaogang

    2017-01-01

    Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, noisy time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We propose a novel and robust statistical method for change point detection for noisy biological time sequences. Our method is a significant improvement over traditional change point detection methods, which only examine a potential anomaly at a single time point. In contrast, our method considers all suspected anomaly points and takes into account the joint probability distribution of the number of change points and the elapsed time between two consecutive anomalies. We validate our method with three simulated time series, a widely accepted benchmark data set, two geological time series, a data set of ECG recordings, and a physiological data set of heart rate variability measurements from a fetal sheep model of human labor, comparing it to three existing methods. Our method demonstrates significantly improved performance over the existing point-wise detection methods.

  9. Identification of influential users by neighbors in online social networks

    NASA Astrophysics Data System (ADS)

    Sheikhahmadi, Amir; Nematbakhsh, Mohammad Ali; Zareie, Ahmad

    2017-11-01

    Identification and ranking of influential users in social networks for the sake of news spreading and advertising has recently become an attractive field of research. Given the large number of users in social networks and the various relations that exist among them, providing an effective method to identify influential users has gradually come to be considered an essential factor. In most of the previously proposed methods, those users who are located in an appropriate structural position of the network are regarded as influential users. These methods do not usually pay attention to the interactions among users, and also treat those relations as binary in nature. This paper, therefore, proposes a new method to identify influential users in a social network by considering the interactions that exist among the users. Since users tend to act within the frame of communities, the network is initially divided into different communities. Then the amount of interaction among users is used as a parameter to set the weight of relations existing within the network. Afterward, by determining the neighbors' role for each user, a two-level method is proposed for both detecting users' influence and ranking them. Simulation and experimental results on Twitter data show that the users selected by the proposed method are spread at more appropriate distances from one another than those selected by existing methods. Moreover, the proposed method outperforms the other ones in terms of both the spreading speed and the influence capacity of the users it selects.

  10. [Implication of inverse-probability weighting method in the evaluation of diagnostic test with verification bias].

    PubMed

    Kang, Leni; Zhang, Shaokai; Zhao, Fanghui; Qiao, Youlin

    2014-03-01

    To evaluate and adjust for the verification bias that exists in screening or diagnostic tests. The inverse-probability weighting method was used to adjust the sensitivity and specificity of the diagnostic tests, with a cervical cancer screening example used to introduce the CompareTests package in R, with which the method can be implemented. Sensitivity and specificity calculated from the traditional method and the maximum likelihood estimation method were compared to the results from the inverse-probability weighting method in the randomly sampled example. The true sensitivity and specificity of the HPV self-sampling test were 83.53% (95%CI: 74.23-89.93) and 85.86% (95%CI: 84.23-87.36). In the analysis of data with randomly missing verification by the gold standard, the sensitivity and specificity calculated by the traditional method were 90.48% (95%CI: 80.74-95.56) and 71.96% (95%CI: 68.71-75.00), respectively. The adjusted sensitivity and specificity under the inverse-probability weighting method were 82.25% (95%CI: 63.11-92.62) and 85.80% (95%CI: 85.09-86.47), respectively, whereas they were 80.13% (95%CI: 66.81-93.46) and 85.80% (95%CI: 84.20-87.41) under the maximum likelihood estimation method. The inverse-probability weighting method can effectively adjust the sensitivity and specificity of a diagnostic test when verification bias exists, especially when complex sampling is involved.
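
    A hedged sketch of the inverse-probability weighting idea: each verified subject is weighted by 1/P(verified), so verified subjects stand in for unverified subjects with the same test result. The toy data and verification probabilities below are invented for illustration.

      import numpy as np

      T = np.array([1, 1, 1, 0, 0, 0, 1, 0])            # test results
      D = np.array([1, 1, 0, 0, np.nan, 0, np.nan, 1])  # disease (NaN = unverified)
      p_verify = np.array([0.9, 0.9, 0.9, 0.5, 0.5, 0.5, 0.9, 0.5])

      v = ~np.isnan(D)               # verified subjects only
      w = 1.0 / p_verify[v]          # inverse-probability weights
      t, d = T[v], D[v]
      sens = np.sum(w * (t == 1) * (d == 1)) / np.sum(w * (d == 1))
      spec = np.sum(w * (t == 0) * (d == 0)) / np.sum(w * (d == 0))
      print(round(sens, 3), round(spec, 3))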

  11. Efficient Robust Regression via Two-Stage Generalized Empirical Likelihood

    PubMed Central

    Bondell, Howard D.; Stefanski, Leonard A.

    2013-01-01

    Large- and finite-sample efficiency and resistance to outliers are the key goals of robust statistics. Although often not simultaneously attainable, we develop and study a linear regression estimator that comes close. Efficiency obtains from the estimator’s close connection to generalized empirical likelihood, and its favorable robustness properties are obtained by constraining the associated sum of (weighted) squared residuals. We prove maximum attainable finite-sample replacement breakdown point, and full asymptotic efficiency for normal errors. Simulation evidence shows that compared to existing robust regression estimators, the new estimator has relatively high efficiency for small sample sizes, and comparable outlier resistance. The estimator is further illustrated and compared to existing methods via application to a real data set with purported outliers. PMID:23976805

  12. Study on the application of ambient vibration tests to evaluate the effectiveness of seismic retrofitting

    NASA Astrophysics Data System (ADS)

    Liang, Li; Takaaki, Ohkubo; Guang-hui, Li

    2018-03-01

    In recent years, earthquakes have occurred frequently, and the seismic performance of existing school buildings has become particularly important. The main method for improving the seismic resistance of existing buildings is reinforcement. However, there are few effective methods to evaluate the effect of reinforcement. Ambient vibration measurement experiments were conducted before and after seismic retrofitting using a wireless measurement system, and the changes in vibration characteristics were compared. The changes in acceleration response spectrum, natural periods, and vibration modes indicate that the wireless vibration measurement system can be effectively applied to evaluate the effect of seismic retrofitting. Although the method can evaluate the effect of seismic retrofitting qualitatively, it remains difficult to evaluate the effect quantitatively at this stage.

  13. A Hybrid Method to Estimate Specific Differential Phase and Rainfall With Linear Programming and Physics Constraints

    DOE PAGES

    Huang, Hao; Zhang, Guifu; Zhao, Kun; ...

    2016-10-20

    A hybrid method of combining linear programming (LP) and physical constraints is developed to estimate specific differential phase (KDP) and to improve rain estimation. Moreover, the hybrid KDP estimator and the existing estimators of LP, least squares fitting, and a self-consistent relation of polarimetric radar variables are evaluated and compared using simulated data. Our simulation results indicate the new estimator's superiority, particularly in regions where the backscattering phase (δhv) dominates. Further, a quantitative comparison between auto-weather-station rain-gauge observations and KDP-based radar rain estimates for a Meiyu event also demonstrates the superiority of the hybrid KDP estimator over existing methods.

  14. Findings from the 2012 West Virginia Online Writing Scoring Comparability Study

    ERIC Educational Resources Information Center

    Hixson, Nate; Rhudy, Vaughn

    2013-01-01

    Student responses to the West Virginia Educational Standards Test (WESTEST) 2 Online Writing Assessment are scored by a computer-scoring engine. The scoring method is not widely understood among educators, and there exists a misperception that it is not comparable to hand scoring. To address these issues, the West Virginia Department of Education…

  15. Generation 1.5 Written Error Patterns: A Comparative Study

    ERIC Educational Resources Information Center

    Doolan, Stephen M.; Miller, Donald

    2012-01-01

    In an attempt to contribute to existing research on Generation 1.5 students, the current study uses quantitative and qualitative methods to compare error patterns in a corpus of Generation 1.5, L1, and L2 community college student writing. This error analysis provides one important way to determine if error patterns in Generation 1.5 student…

  16. Methods for the Joint Meta-Analysis of Multiple Tests

    ERIC Educational Resources Information Center

    Trikalinos, Thomas A.; Hoaglin, David C.; Small, Kevin M.; Terrin, Norma; Schmid, Christopher H.

    2014-01-01

    Existing methods for meta-analysis of diagnostic test accuracy focus primarily on a single index test. We propose models for the joint meta-analysis of studies comparing multiple index tests on the same participants in paired designs. These models respect the grouping of data by studies, account for the within-study correlation between the tests'…

  17. Effect of Blast-Induced Vibration from New Railway Tunnel on Existing Adjacent Railway Tunnel in Xinjiang, China

    NASA Astrophysics Data System (ADS)

    Liang, Qingguo; Li, Jie; Li, Dewu; Ou, Erfeng

    2013-01-01

    The vibrations of existing service tunnels induced by blast-excavation of adjacent tunnels have attracted much attention from both academics and engineers during recent decades in China. The blasting vibration velocity (BVV) is the most widely used controlling index for in situ monitoring and safety assessment of existing lining structures. Although numerous in situ tests and simulations had been carried out to investigate blast-induced vibrations of existing tunnels due to excavation of new tunnels (mostly by bench excavation method), research on the overall dynamical response of existing service tunnels in terms of not only BVV but also stress/strain seemed limited for new tunnels excavated by the full-section blasting method. In this paper, the impacts of blast-induced vibrations from a new tunnel on an existing railway tunnel in Xinjiang, China were comprehensively investigated by using laboratory tests, in situ monitoring and numerical simulations. The measured data from laboratory tests and in situ monitoring were used to determine the parameters needed for numerical simulations, and were compared with the calculated results. Based on the results from in situ monitoring and numerical simulations, which were consistent with each other, the original blasting design and corresponding parameters were adjusted to reduce the maximum BVV, which proved to be effective and safe. The effect of both the static stress before blasting vibrations and the dynamic stress induced by blasting on the total stresses in the existing tunnel lining is also discussed. The methods and related results presented could be applied in projects with similar ground and distance between old and new tunnels if the new tunnel is to be excavated by the full-section blasting method.

  18. First experiences with an accelerated CMV antigenemia test: CMV Brite Turbo assay.

    PubMed

    Visser, C E; van Zeijl, C J; de Klerk, E P; Schillizi, B M; Beersma, M F; Kroes, A C

    2000-06-01

    Cytomegalovirus disease is still a major problem in immunocompromised patients, such as bone marrow or kidney transplantation patients. The detection of viral antigen in leukocytes (antigenemia) has proven to be a clinically relevant marker of CMV activity and has found widespread application. Because most existing assays are rather time-consuming and laborious, an accelerated version (Brite Turbo) of an existing method (Brite) has been developed. The major modification is in the direct lysis of erythrocytes instead of separation by sedimentation. In this study the Brite Turbo method has been compared with the conventional Brite method to detect CMV antigen pp65 in peripheral blood leukocytes of 107 consecutive immunocompromised patients. Both tests produced similar results. Discrepancies were limited to the lowest positive range and sensitivity and specificity were comparable for both tests. Two major advantages of the Brite Turbo method could be observed in comparison to the original method: assay-time was reduced by more than 50% and only 2 ml of blood was required. An additional advantage was the higher number of positive nuclei in the Brite Turbo method attributable to the increased number of granulocytes in the assay. Early detection of CMV infection or reactivation has become faster and easier with this modified assay.

  19. An Exact Model-Based Method for Near-Field Sources Localization with Bistatic MIMO System.

    PubMed

    Singh, Parth Raj; Wang, Yide; Chargé, Pascal

    2017-03-30

    In this paper, we propose an exact model-based method for near-field sources localization with a bistatic multiple input, multiple output (MIMO) radar system, and compare it with an approximated model-based method. The aim of this paper is to propose an efficient way to use the exact model of the received signals of near-field sources in order to eliminate the systematic error introduced by the use of approximated model in most existing near-field sources localization techniques. The proposed method uses parallel factor (PARAFAC) decomposition to deal with the exact model. Thanks to the exact model, the proposed method has better precision and resolution than the compared approximated model-based method. The simulation results show the performance of the proposed method.

  20. A dynamic programming approach to estimate the capacity value of energy storage

    DOE PAGES

    Sioshansi, Ramteen; Madaeni, Seyed Hossein; Denholm, Paul

    2013-09-17

    Here, we present a method to estimate the capacity value of storage. Our method uses a dynamic program to model the effect of power system outages on the operation and state of charge of storage in subsequent periods. We combine the optimized dispatch from the dynamic program with estimated system loss of load probabilities to compute a probability distribution for the state of charge of storage in each period. This probability distribution can be used as a forced outage rate for storage in standard reliability-based capacity value estimation methods. Our proposed method has the advantage over existing approximations that it explicitly captures the effect of system shortage events on the state of charge of storage in subsequent periods. We also use a numerical case study, based on five utility systems in the U.S., to demonstrate our technique and compare it to existing approximation methods.
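
    A toy sketch of the central bookkeeping step, with invented numbers and deliberately simplified dynamics: the probability distribution over the storage state of charge is propagated period by period, discharging one level during a shortage and recharging otherwise.

      import numpy as np

      levels = 4                         # discrete state-of-charge levels 0..3
      p = np.zeros(levels); p[-1] = 1.0  # start full with probability 1
      lolp = [0.1, 0.3, 0.2]             # loss-of-load probability per period

      for q in lolp:
          nxt = np.zeros(levels)
          for s in range(levels):
              nxt[max(s - 1, 0)] += q * p[s]                 # shortage: discharge
              nxt[min(s + 1, levels - 1)] += (1 - q) * p[s]  # otherwise: recharge
          p = nxt
      print(p.round(3))  # SOC distribution used in the capacity-value calculation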

  1. A Novel Method for Block Size Forensics Based on Morphological Operations

    NASA Astrophysics Data System (ADS)

    Luo, Weiqi; Huang, Jiwu; Qiu, Guoping

    Passive forensics analysis aims to find out how multimedia data is acquired and processed without relying on pre-embedded or pre-registered information. Since most existing compression schemes for digital images are based on block processing, one of the fundamental steps for subsequent forensics analysis is to detect the presence of block artifacts and estimate the block size for a given image. In this paper, we propose a novel method for blind block size estimation. A 2×2 cross-differential filter is first applied to detect all possible block artifact boundaries, morphological operations are then used to remove the boundary effects caused by the edges of the actual image contents, and finally maximum-likelihood estimation (MLE) is employed to estimate the block size. The experimental results evaluated on over 1300 natural images show the effectiveness of our proposed method. Compared with an existing gradient-based detection method, our method achieves over 39% accuracy improvement on average.
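
    The sketch below illustrates the flavor of this pipeline on a synthetic image; a simple difference filter and a morphological opening stand in for the paper's 2×2 cross-differential filter, and the block size is read off the boundary spacing rather than estimated by MLE.

      import numpy as np
      from scipy import ndimage

      def estimate_block_size(img):
          d = np.abs(np.diff(img.astype(float), axis=1))  # horizontal gradients
          strong = d > d.mean() + 2 * d.std()             # candidate boundaries
          mask = ndimage.binary_opening(strong, structure=np.ones((3, 1)))
          profile = mask.sum(axis=0)                      # boundary votes per column
          cols = np.flatnonzero(profile > profile.max() / 2)
          return int(np.median(np.diff(cols))) if len(cols) > 1 else None

      rng = np.random.default_rng(2)
      img = np.kron(5.0 * np.arange(64).reshape(8, 8), np.ones((8, 8)))  # 8x8 blocks
      print(estimate_block_size(img + rng.normal(0, 0.1, img.shape)))   # -> 8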

  2. Hybrid recommendation methods in complex networks.

    PubMed

    Fiasconaro, A; Tumminello, M; Nicosia, V; Latora, V; Mantegna, R N

    2015-07-01

    We propose two recommendation methods, based on the appropriate normalization of already existing similarity measures, and on the convex combination of the recommendation scores derived from similarity between users and between objects. We validate the proposed measures on three data sets, and we compare the performance of our methods to other recommendation systems recently proposed in the literature. We show that the proposed similarity measures allow us to attain an improvement in performance of up to 20% with respect to existing nonparametric methods, and that the accuracy of a recommendation can vary widely from one specific bipartite network to another, which suggests that a careful choice of the most suitable method is highly relevant for an effective recommendation on a given system. Finally, we study how an increasing presence of random links in the network affects the recommendation scores, finding that one of the two recommendation algorithms introduced here can systematically outperform the others in noisy data sets.
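
    A small sketch of the convex-combination idea on a toy user-object network; cosine similarity and the mixing weight lam are illustrative stand-ins for the paper's normalized similarity measures.

      import numpy as np

      R = np.array([[1, 0, 1, 0],   # user-object adjacency (rows: users)
                    [1, 1, 0, 0],
                    [0, 1, 1, 1]], dtype=float)

      def cosine(M):
          n = np.linalg.norm(M, axis=1, keepdims=True)
          n[n == 0] = 1.0
          U = M / n
          return U @ U.T

      def hybrid_scores(R, lam=0.5):
          s_user = cosine(R) @ R    # scores from user-user similarity
          s_obj = R @ cosine(R.T)   # scores from object-object similarity
          norm = lambda S: S / S.max()
          return lam * norm(s_user) + (1 - lam) * norm(s_obj)

      print(hybrid_scores(R).round(2))  # rank unseen objects per user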

  3. Alignment methods: strategies, challenges, benchmarking, and comparative overview.

    PubMed

    Löytynoja, Ari

    2012-01-01

    Comparative evolutionary analyses of molecular sequences are solely based on the identities and differences detected between homologous characters. Errors in this homology statement, that is, errors in the alignment of the sequences, are likely to lead to errors in the downstream analyses. Sequence alignment and phylogenetic inference are tightly connected, and many popular alignment programs use the phylogeny to divide the alignment problem into smaller tasks. They then neglect the phylogenetic tree, however, and produce alignments that are not evolutionarily meaningful. The use of phylogeny-aware methods reduces the error, but the resulting alignments, with evolutionarily correct representation of homology, can challenge the existing practices and methods for viewing and visualising the sequences. The inter-dependency of alignment and phylogeny can be resolved by joint estimation of the two; methods based on statistical models allow for inferring the alignment parameters from the data and correctly take into account the uncertainty of the solution, but remain computationally challenging. Widely used alignment methods are based on heuristic algorithms and are unlikely to find globally optimal solutions. The whole concept of one correct alignment for the sequences is questionable, however, as there typically exist vast numbers of alternative, roughly equally good alignments that should also be considered. This uncertainty is hidden by many popular alignment programs and is rarely correctly taken into account in the downstream analyses. The quest for finding and improving the alignment solution is complicated by the lack of suitable measures of alignment goodness. The difficulty of comparing alternative solutions also affects benchmarks of alignment methods, and the results strongly depend on the measure used. As the effects of alignment error cannot be predicted, comparing the alignments' performance in downstream analyses is recommended.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Hao; Zhang, Guifu; Zhao, Kun

    A hybrid method of combining linear programming (LP) and physical constraints is developed to estimate specific differential phase (KDP) and to improve rain estimation. Moreover, the hybrid KDP estimator and the existing estimators of LP, least squares fitting, and a self-consistent relation of polarimetric radar variables are evaluated and compared using simulated data. Our simulation results indicate the new estimator's superiority, particularly in regions where the backscattering phase (δhv) dominates. Further, a quantitative comparison between auto-weather-station rain-gauge observations and KDP-based radar rain estimates for a Meiyu event also demonstrates the superiority of the hybrid KDP estimator over existing methods.

  5. Lactase persistence genotyping on whole blood by loop-mediated isothermal amplification and melting curve analysis.

    PubMed

    Abildgaard, Anders; Tovbjerg, Sara K; Giltay, Axel; Detemmerman, Liselot; Nissen, Peter H

    2018-03-26

    The lactase persistence phenotype is controlled by a regulatory enhancer region upstream of the lactase (LCT) gene. In northern Europe, specifically the -13910C>T variant has been associated with lactase persistence, whereas other persistence variants, e.g. -13907C>G and -13915T>G, have been identified in Africa and the Middle East. The aim of the present study was to compare a previously developed high resolution melting (HRM) assay with a novel method based on loop-mediated isothermal amplification and melting curve analysis (LAMP-MC), with both whole blood and DNA as input material. To evaluate the LAMP-MC method, we used 100 whole blood samples and 93 DNA samples in a two-tiered study. First, we studied the ability of the LAMP-MC method to produce specific melting curves for several variants of the LCT enhancer region. Next, we performed a blinded comparison between the LAMP-MC method and our existing HRM method with clinical samples of unknown genotype. The LAMP-MC method produced specific melting curves for the variants at positions -13909, -13910, and -13913, whereas the -13907C>G and -13915T>G variants produced indistinguishable melting profiles. The LAMP-MC assay is a simple method for lactase persistence genotyping and compares well with our existing HRM method. Copyright © 2018. Published by Elsevier B.V.

  6. COSMOS: accurate detection of somatic structural variations through asymmetric comparison between tumor and normal samples

    PubMed Central

    Yamagata, Koichi; Yamanishi, Ayako; Kokubu, Chikara; Takeda, Junji; Sese, Jun

    2016-01-01

    An important challenge in cancer genomics is precise detection of structural variations (SVs) by high-throughput short-read sequencing, which is hampered by the high false discovery rates of existing analysis tools. Here, we propose an accurate SV detection method named COSMOS, which compares the statistics of the mapped read pairs in tumor samples with isogenic normal control samples in a distinct asymmetric manner. COSMOS also prioritizes the candidate SVs using strand-specific read-depth information. Performance tests on modeled tumor genomes revealed that COSMOS outperformed existing methods in terms of F-measure. We also applied COSMOS to an experimental mouse cell-based model, in which SVs were induced by genome engineering and gamma-ray irradiation, followed by polymerase chain reaction-based confirmation. The precision of COSMOS was 84.5%, while the next best existing method was 70.4%. Moreover, the sensitivity of COSMOS was the highest, indicating that COSMOS has great potential for cancer genome analysis. PMID:26833260

  7. A fuzzy optimal threshold technique for medical images

    NASA Astrophysics Data System (ADS)

    Thirupathi Kannan, Balaji; Krishnasamy, Krishnaveni; Pradeep Kumar Kenny, S.

    2012-01-01

    A new fuzzy based thresholding method for medical images, especially cervical cytology images having blob and mosaic structures, is proposed in this paper. Many existing thresholding algorithms may segment either blob or mosaic images, but there is no single algorithm that can do both. In this paper, an input cervical cytology image is binarized and preprocessed, and the pixel value with the minimum Fuzzy Gaussian Index is identified as the optimal threshold value and used for segmentation. The proposed technique is tested on various cervical cytology images having blob or mosaic structures, compared with various existing algorithms, and shown to perform better than the existing algorithms.

  8. Robust volcano plot: identification of differential metabolites in the presence of outliers.

    PubMed

    Kumar, Nishith; Hoque, Md Aminul; Sugimoto, Masahiro

    2018-04-11

    The identification of differential metabolites in metabolomics is still a big challenge and plays a prominent role in metabolomics data analyses. Metabolomics datasets often contain outliers because of analytical, experimental, and biological ambiguity, but the currently available differential metabolite identification techniques are sensitive to outliers. We propose a kernel weight based outlier-robust volcano plot for identifying differential metabolites from noisy metabolomics datasets. Two numerical experiments are used to evaluate the performance of the proposed technique against nine existing techniques, including the t-test and the Kruskal-Wallis test. Artificially generated data with outliers reveal that the proposed method results in a lower misclassification error rate and a greater area under the receiver operating characteristic curve compared with existing methods. An experimentally measured breast cancer dataset to which outliers were artificially added reveals that our proposed method produces only two non-overlapping differential metabolites whereas the other nine methods produced between seven and 57 non-overlapping differential metabolites. Our data analyses show that the performance of the proposed differential metabolite identification technique is better than that of existing methods. Thus, the proposed method can contribute to analysis of metabolomics data with outliers. The R package and user manual of the proposed method are available at https://github.com/nishithkumarpaul/Rvolcano .

  9. Quantification of Peptides from Immunoglobulin Constant and Variable Regions by Liquid Chromatography-Multiple Reaction Monitoring Mass Spectrometry for Assessment of Multiple Myeloma Patients

    PubMed Central

    Remily-Wood, Elizabeth R.; Benson, Kaaron; Baz, Rachid C.; Chen, Y. Ann; Hussein, Mohamad; Hartley-Brown, Monique A.; Sprung, Robert W.; Perez, Brianna; Liu, Richard Z.; Yoder, Sean; Teer, Jamie; Eschrich, Steven A.; Koomen, John M.

    2014-01-01

    Purpose Quantitative mass spectrometry assays for immunoglobulins (Igs) are compared with existing clinical methods in samples from patients with plasma cell dyscrasias, e.g. multiple myeloma. Experimental design Using LC-MS/MS data, Ig constant region peptides and transitions were selected for liquid chromatography-multiple reaction monitoring mass spectrometry (LC-MRM). Quantitative assays were used to assess Igs in serum from 83 patients. Results LC-MRM assays quantify serum levels of Igs and their isoforms (IgG1–4, IgA1–2, IgM, IgD, and IgE, as well as kappa(κ) and lambda(λ) light chains). LC-MRM quantification has been applied to single samples from a patient cohort and a longitudinal study of an IgE patient undergoing treatment, to enable comparison with existing clinical methods. Proof-of-concept data for defining and monitoring variable region peptides are provided using the H929 multiple myeloma cell line and two MM patients. Conclusions and Clinical Relevance LC-MRM assays targeting constant region peptides determine the type and isoform of the involved immunoglobulin and quantify its expression; the LC-MRM approach has improved sensitivity compared with the current clinical method, but slightly higher interassay variability. Detection of variable region peptides is a promising way to improve Ig quantification, which could produce a dramatic increase in sensitivity over existing methods, and could further complement current clinical techniques. PMID:24723328

  10. Quantification of peptides from immunoglobulin constant and variable regions by LC-MRM MS for assessment of multiple myeloma patients.

    PubMed

    Remily-Wood, Elizabeth R; Benson, Kaaron; Baz, Rachid C; Chen, Y Ann; Hussein, Mohamad; Hartley-Brown, Monique A; Sprung, Robert W; Perez, Brianna; Liu, Richard Z; Yoder, Sean J; Teer, Jamie K; Eschrich, Steven A; Koomen, John M

    2014-10-01

    Quantitative MS assays for Igs are compared with existing clinical methods in samples from patients with plasma cell dyscrasias, for example, multiple myeloma (MM). Using LC-MS/MS data, Ig constant region peptides, and transitions were selected for LC-MRM MS. Quantitative assays were used to assess Igs in serum from 83 patients. RNA sequencing and peptide-based LC-MRM are used to define peptides for quantification of the disease-specific Ig. LC-MRM assays quantify serum levels of Igs and their isoforms (IgG1-4, IgA1-2, IgM, IgD, and IgE, as well as kappa (κ) and lambda (λ) light chains). LC-MRM quantification has been applied to single samples from a patient cohort and a longitudinal study of an IgE patient undergoing treatment, to enable comparison with existing clinical methods. Proof-of-concept data for defining and monitoring variable region peptides are provided using the H929 MM cell line and two MM patients. LC-MRM assays targeting constant region peptides determine the type and isoform of the involved Ig and quantify its expression; the LC-MRM approach has improved sensitivity compared with the current clinical method, but slightly higher inter-assay variability. Detection of variable region peptides is a promising way to improve Ig quantification, which could produce a dramatic increase in sensitivity over existing methods, and could further complement current clinical techniques. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Review Article "Valuating the intangible effects of natural hazards - review and analysis of the costing methods"

    NASA Astrophysics Data System (ADS)

    Markantonis, V.; Meyer, V.; Schwarze, R.

    2012-05-01

    The "intangible" or "non-market" effects are those costs of natural hazards which are not, or at least not easily measurable in monetary terms, as for example, impacts on health, cultural heritage or the environment. The intangible effects are often not included in costs assessments of natural hazards leading to an incomplete and biased cost assessment. However, several methods exist which try to estimate these effects in a non-monetary or monetary form. The objective of the present paper is to review and evaluate methods for estimating the intangible effects of natural hazards, specifically related to health and environmental effects. Existing methods are analyzed and compared using various criteria, research gaps are identified, application recommendations are provided, and valuation issues that should be addressed by the scientific community are highlighted.

  12. Fused methods for visual saliency estimation

    NASA Astrophysics Data System (ADS)

    Danko, Amanda S.; Lyu, Siwei

    2015-02-01

    In this work, we present a new model of visual saliency by combining results from existing methods, improving upon their performance and accuracy. By fusing pre-attentive and context-aware methods, we highlight the abilities of state-of-the-art models while compensating for their deficiencies. We put this theory to the test in a series of experiments, comparatively evaluating the visual saliency maps and employing them for content-based image retrieval and thumbnail generation. We find that on average our model yields definitive improvements in recall and f-measure metrics with comparable precision. In addition, we find that all image searches using our fused method return more correct images and rank them higher than the searches using the original methods alone.
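
    A minimal sketch of the fusion step, assuming simple min-max normalization and pixel-wise weighted averaging (the paper's exact fusion rule may differ):

      import numpy as np

      def normalize(m):
          m = m.astype(float)
          return (m - m.min()) / (m.max() - m.min() + 1e-12)

      def fuse(maps, weights=None):
          maps = [normalize(m) for m in maps]
          weights = weights or [1.0 / len(maps)] * len(maps)
          return sum(w * m for w, m in zip(weights, maps))

      rng = np.random.default_rng(3)
      pre_attentive = rng.random((4, 4))  # stand-in for one method's saliency map
      context_aware = rng.random((4, 4))  # stand-in for the other method's map
      print(fuse([pre_attentive, context_aware]).round(2))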

  13. Nonlinear PET parametric image reconstruction with MRI information using kernel method

    NASA Astrophysics Data System (ADS)

    Gong, Kuang; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2017-03-01

    Positron Emission Tomography (PET) is a functional imaging modality widely used in oncology, cardiology, and neurology. It is highly sensitive, but suffers from relatively poor spatial resolution, as compared with anatomical imaging modalities, such as magnetic resonance imaging (MRI). With the recent development of combined PET/MR systems, we can improve the PET image quality by incorporating MR information. Previously we have used kernel learning to embed MR information in static PET reconstruction and direct Patlak reconstruction. Here we extend this method to direct reconstruction of nonlinear parameters in a compartment model by using the alternating direction of multiplier method (ADMM) algorithm. Simulation studies show that the proposed method can produce superior parametric images compared with existing methods.

  14. Key Technology of Real-Time Road Navigation Method Based on Intelligent Data Research

    PubMed Central

    Tang, Haijing; Liang, Yu; Huang, Zhongnan; Wang, Taoyi; He, Lin; Du, Yicong; Ding, Gangyi

    2016-01-01

    The effect of traffic flow prediction plays an important role in routing selection. Traditional traffic flow forecasting methods mainly include linear, nonlinear, neural network, and time series analysis methods. However, all of them have some shortcomings. This paper analyzes the existing algorithms for traffic flow prediction and the characteristics of city traffic flow and proposes a road traffic flow prediction method based on transfer probability. This method first analyzes the transfer probability upstream of the target road and then predicts the traffic flow at the next time step by using the traffic flow equation. The Newton interior-point method is used to obtain the optimal values of the parameters. Finally, the proposed model is used to predict the traffic flow at the next time step. Compared with existing prediction methods, the proposed model has proven to have good performance. It obtains the optimal parameter values faster and has higher prediction accuracy, making it suitable for real-time traffic flow prediction.
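
    A toy sketch of the transfer-probability step with invented numbers: the predicted flow on the target road is the upstream flows weighted by the probability that vehicles transfer onto it.

      import numpy as np

      upstream_flow = np.array([120.0, 80.0, 40.0])  # vehicles per interval at time t
      transfer_prob = np.array([0.5, 0.25, 0.1])     # P(upstream road i -> target)

      predicted_flow = float(upstream_flow @ transfer_prob)
      print(predicted_flow)  # forecast for the target road at time t+1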

  15. Spatial Lattice Modulation for MIMO Systems

    NASA Astrophysics Data System (ADS)

    Choi, Jiwook; Nam, Yunseo; Lee, Namyoon

    2018-06-01

    This paper proposes spatial lattice modulation (SLM), a spatial modulation method for multiple-input multiple-output (MIMO) systems. The key idea of SLM is to jointly exploit spatial, in-phase, and quadrature dimensions to modulate information bits into a multi-dimensional signal set that consists of lattice points. One major finding is that SLM achieves a higher spectral efficiency than the existing spatial modulation and spatial multiplexing methods for the MIMO channel under the constraint of M-ary pulse-amplitude-modulation (PAM) input signaling per dimension. In particular, it is shown that when the SLM signal set is constructed by using dense lattices, a significant signal-to-noise-ratio (SNR) gain, i.e., a nominal coding gain, is attainable compared to the existing methods. In addition, closed-form expressions for both the average mutual information and average symbol-vector-error-probability (ASVEP) of generic SLM are derived under Rayleigh-fading environments. To reduce detection complexity, a low-complexity detection method for SLM, which is referred to as lattice sphere decoding, is developed by exploiting lattice theory. Simulation results verify the accuracy of the conducted analysis and demonstrate that the proposed SLM techniques achieve higher average mutual information and lower ASVEP than do existing methods.

  16. A multi-scale network method for two-phase flow in porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khayrat, Karim, E-mail: khayratk@ifd.mavt.ethz.ch; Jenny, Patrick

    Pore-network models of porous media are useful in the study of pore-scale flow in porous media. In order to extract macroscopic properties from flow simulations in pore-networks, it is crucial that the networks be large enough to be considered representative elementary volumes. However, existing two-phase network flow solvers are limited to relatively small domains. For this purpose, a multi-scale pore-network (MSPN) method, which takes into account flow-rate effects and can simulate larger domains compared to existing methods, was developed. In our solution algorithm, a large pore network is partitioned into several smaller sub-networks. The algorithm to advance the fluid interfaces within each subnetwork consists of three steps. First, a global pressure problem on the network is solved approximately using the multiscale finite volume (MSFV) method. Next, the fluxes across the subnetworks are computed. Lastly, using fluxes as boundary conditions, a dynamic two-phase flow solver is used to advance the solution in time. Simulation results of drainage scenarios at different capillary numbers and unfavourable viscosity ratios are presented and used to validate the MSPN method against solutions obtained by an existing dynamic network flow solver.

  17. Walking on a User Similarity Network towards Personalized Recommendations

    PubMed Central

    Gan, Mingxin

    2014-01-01

    Personalized recommender systems have been receiving more and more attention in addressing the serious problem of information overload accompanying the rapid evolution of the World Wide Web. Although traditional collaborative filtering approaches based on similarities between users have achieved remarkable success, it has been shown that the existence of popular objects may adversely influence the correct scoring of candidate objects, which leads to unreasonable recommendation results. Meanwhile, recent advances have demonstrated that approaches based on diffusion and random walk processes exhibit superior performance over collaborative filtering methods in both recommendation accuracy and diversity. Building on these results, we adopt three strategies (power-law adjustment, nearest neighbor, and threshold filtration) to adjust a user similarity network derived from user similarity scores calculated on historical data, and then propose a random walk with restart model on the constructed network to achieve personalized recommendations. We perform cross-validation experiments on two real data sets (MovieLens and Netflix) and compare the performance of our method against the existing state-of-the-art methods. Results show that our method outperforms existing methods in not only recommendation accuracy and diversity, but also retrieval performance. PMID:25489942
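
    A compact sketch of random walk with restart on a small user similarity network; the adjacency matrix and restart probability below are illustrative, and the paper's three network-adjustment strategies are omitted.

      import numpy as np

      W = np.array([[0, 1, 1, 0],   # toy user similarity network
                    [1, 0, 1, 0],
                    [1, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
      P = W / W.sum(axis=0, keepdims=True)  # column-stochastic transitions

      def rwr(P, seed, alpha=0.15, iters=100):
          e = np.zeros(P.shape[0]); e[seed] = 1.0
          p = e.copy()
          for _ in range(iters):
              p = (1 - alpha) * P @ p + alpha * e  # walk step plus restart
          return p

      print(rwr(P, seed=0).round(3))  # steady-state affinity of each user to user 0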

  18. Design issues for grid-connected photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Ropp, Michael Eugene

    1998-08-01

    Photovoltaics (PV) is the direct conversion of sunlight to electrical energy. In areas without centralized utility grids, the benefits of PV easily overshadow the present shortcomings of the technology. However, in locations with centralized utility systems, significant technical challenges remain before utility-interactive PV (UIPV) systems can be integrated into the mix of electricity sources. One challenge is that the computer design tools needed for optimal design of PV systems with curved PV arrays are not available, and even those that are available do not facilitate monitoring of the system once it is built. Another arises from the issue of islanding. Islanding occurs when a UIPV system continues to energize a section of a utility system after that section has been isolated from the utility voltage source. Islanding, which is potentially dangerous to both personnel and equipment, is difficult to prevent completely. The work contained within this thesis targets both of these technical challenges. In Task 1, a method for modeling a PV system with a curved PV array using only existing computer software is developed. This methodology also facilitates comparison of measured and modeled data for use in system monitoring. The procedure is applied to the Georgia Tech Aquatic Center (GTAC) PV system. In the work contained under Task 2, islanding prevention is considered. The existing state-of-the-art is thoroughly reviewed. In Subtask 2.1, an analysis is performed which suggests that standard protective relays are in fact insufficient to guarantee protection against islanding. In Subtask 2.2, several existing islanding prevention methods are compared in a novel way. The superiority of this new comparison over those used previously is demonstrated. A new islanding prevention method is the subject of Subtask 2.3. It is shown that it does not compare favorably with other existing techniques. However, in Subtask 2.4, a novel method for dramatically improving this new islanding prevention method is described. It is shown, both by computer modeling and experiment, that this new method is one of the most effective available today. Finally, under Subtask 2.5, the effects of certain types of loads on the effectiveness of islanding prevention methods are discussed.

  19. A Method for Search Engine Selection using Thesaurus for Selective Meta-Search Engine

    NASA Astrophysics Data System (ADS)

    Goto, Shoji; Ozono, Tadachika; Shintani, Toramatsu

    In this paper, we propose a new method for selecting search engines on the WWW for a selective meta-search engine. In a selective meta-search engine, a method is needed for selecting appropriate search engines for users' queries. Most existing methods use statistical data such as document frequency. These methods may select inappropriate search engines if a query contains polysemous words. In this paper, we describe a search engine selection method based on a thesaurus. In our method, a thesaurus is constructed from documents in a search engine and is used as a source description of the search engine. The form of a particular thesaurus depends on the documents used for its construction. Our method enables search engine selection by considering relationships between terms and overcomes the problems caused by polysemous words. Further, our method does not require a centralized broker maintaining data, such as document frequency, for all search engines. As a result, it is easy to add a new search engine, and meta-search engines become more scalable with our method compared to other existing methods.

  20. Anatomically-Aided PET Reconstruction Using the Kernel Method

    PubMed Central

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-01-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm. PMID:27541810
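
    For intuition, a hedged sketch of the kernel representation: the image is written as x = K·alpha, where K is built from anatomical (MR) feature vectors, so MR smoothness is inherited by the reconstruction. The neighborhood size, kernel width, and toy features below are illustrative, and the ML-EM update of alpha is omitted.

      import numpy as np

      def kernel_matrix(features, k=3, sigma=1.0):
          n = len(features)
          K = np.zeros((n, n))
          for i in range(n):
              d2 = np.sum((features - features[i]) ** 2, axis=1)
              for j in np.argsort(d2)[:k]:           # k nearest MR neighbors
                  K[i, j] = np.exp(-d2[j] / (2 * sigma ** 2))
          return K / K.sum(axis=1, keepdims=True)    # row-normalize

      rng = np.random.default_rng(4)
      mr_feats = rng.random((6, 2))  # toy MR patch features, one row per voxel
      alpha = rng.random(6)          # kernel coefficients (updated by ML-EM in practice)
      K = kernel_matrix(mr_feats)
      print((K @ alpha).round(3))    # image representation x = K alpha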

  3. Corrosion performance tests for reinforcing steel in concrete : test procedures.

    DOT National Transportation Integrated Search

    2009-09-01

    The existing test method to assess the corrosion performance of reinforcing steel embedded in concrete, mainly ASTM G109, is labor intensive, time consuming, slow to provide comparative results, and often expensive. However, corrosion of reinforc...

  4. Corrosion performance tests for reinforcing steel in concrete : technical report.

    DOT National Transportation Integrated Search

    2009-10-01

    The existing test method used to assess the corrosion performance of reinforcing steel embedded in concrete, mainly ASTM G 109, is labor intensive, time consuming, slow to provide comparative results, and can be expensive. However, with corrosion...

  5. Autocorrelated process control: Geometric Brownian Motion approach versus Box-Jenkins approach

    NASA Astrophysics Data System (ADS)

    Salleh, R. M.; Zawawi, N. I.; Gan, Z. F.; Nor, M. E.

    2018-04-01

    The existence of autocorrelation has a significant effect on the performance and accuracy of process control if the problem is not handled carefully. When dealing with an autocorrelated process, the Box-Jenkins method is often preferred because of its popularity. However, the computation involved in the Box-Jenkins method is complicated and challenging, which makes it time-consuming. Therefore, an alternative method known as Geometric Brownian Motion (GBM) is introduced to monitor the autocorrelated process. A real case study of furnace temperature data is conducted to compare the performance of the Box-Jenkins and GBM methods in monitoring an autocorrelated process. Both methods give the same results in terms of model accuracy and process monitoring. Yet, GBM is superior to the Box-Jenkins method due to its simplicity and practicality, with a shorter computational time.
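
    To make the GBM chart concrete, a minimal sketch under assumed toy data: fit drift and volatility from log-returns, then flag observations falling outside one-step-ahead three-sigma GBM limits. The data, injected shift, and limits are illustrative, not the paper's case study.

        import numpy as np

        rng = np.random.default_rng(1)
        temps = 800 * np.exp(np.cumsum(rng.normal(0.0005, 0.004, 200)))  # toy furnace data
        temps[150:] *= 1.03                      # inject a sustained shift to detect

        log_ret = np.diff(np.log(temps))
        mu, sigma = log_ret.mean(), log_ret.std(ddof=1)

        # One-step-ahead GBM limits: S_t * exp(mu +/- 3*sigma), a ~3-sigma chart
        upper = temps[:-1] * np.exp(mu + 3 * sigma)
        lower = temps[:-1] * np.exp(mu - 3 * sigma)
        signals = np.where((temps[1:] > upper) | (temps[1:] < lower))[0] + 1
        print("out-of-control points at indices:", signals)   # expect index 150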

  6. SU-G-BRA-11: Tumor Tracking in An Iterative Volume of Interest Based 4D CBCT Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, R; Pan, T; Ahmad, M

    2016-06-15

    Purpose: 4D CBCT can allow evaluation of tumor motion immediately prior to radiation therapy, but suffers from heavy artifacts that limit its ability to track tumors. Various iterative and compressed sensing reconstructions have been proposed to reduce these artifacts, but are costly time-wise and can degrade the image quality of bony anatomy for alignment with regularization. We have previously proposed an iterative volume of interest (I4D VOI) method which minimizes reconstruction time and maintains image quality of bony anatomy by focusing a 4D reconstruction within a VOI. The purpose of this study is to test the tumor tracking accuracy of this method compared to existing methods. Methods: Long scan (8–10 mins) CBCT data with corresponding RPM data was collected for 12 lung cancer patients. The full data set was sorted into 8 phases and reconstructed using FDK cone beam reconstruction to serve as a gold standard. The data was reduced in a way that maintains a normal breathing pattern and used to reconstruct 4D images using FDK, low and high regularization TV minimization (λ=2,10), and the proposed I4D VOI method with PTVs used for the VOI. Tumor trajectories were found using rigid registration within the VOI for each reconstruction and compared to the gold standard. Results: The root mean square error (RMSE) values were 2.70mm for FDK, 2.50mm for low regularization TV, 1.48mm for high regularization TV, and 2.34mm for I4D VOI. Streak artifacts in I4D VOI were reduced compared to FDK and images were less blurred than TV reconstructed images. Conclusion: I4D VOI performed at least as well as existing methods in tumor tracking, with the exception of high regularization TV minimization. These results along with the reconstruction time and outside VOI image quality advantages suggest I4D VOI to be an improvement over existing methods. Funding support provided by CPRIT grant RP110562-P2-01.

  7. Social Networks of Adults with an Intellectual Disability from South Asian and White Communities in the United Kingdom: A Comparison

    ERIC Educational Resources Information Center

    Bhardwaj, Anjali K.; Forrester-Jones, Rachel V. E.; Murphy, Glynis H.

    2018-01-01

    Background: Little research exists comparing the social networks of people with intellectual disability (ID) from South Asian and White backgrounds. This UK study reports on the barriers that South Asian people with intellectual disability face in relation to social inclusion compared to their White counterparts. Materials and methods: A…

  8. Motivation to Study Core French: Comparing Recent Immigrants and Canadian-Born Secondary School Students

    ERIC Educational Resources Information Center

    Mady, Callie J.

    2010-01-01

    As the number of Allophone students attending public schools in Canada continues to increase (Statistics Canada, 2008), it is clear that a need exists in English-dominant areas to purposefully address the integration of these students into core French. I report the findings of a mixed-method study that was conducted to assess and compare the…

  9. Measuring Teacher Classroom Management Skills: A Comparative Analysis of Distance Trained and Conventional Trained Teachers

    ERIC Educational Resources Information Center

    Henaku, Christina Bampo; Pobbi, Michael Asamani

    2017-01-01

    Many researchers and educationists remain skeptical about the effectiveness of distance learning programs and have termed them second to the conventional training method. This perception is largely due to several challenges which exist within the management of distance learning programs across the country. The general aim of the study is to compare the…

  10. GeneCount: genome-wide calculation of absolute tumor DNA copy numbers from array comparative genomic hybridization data

    PubMed Central

    Lyng, Heidi; Lando, Malin; Brøvig, Runar S; Svendsrud, Debbie H; Johansen, Morten; Galteland, Eivind; Brustugun, Odd T; Meza-Zepeda, Leonardo A; Myklebost, Ola; Kristensen, Gunnar B; Hovig, Eivind; Stokke, Trond

    2008-01-01

    Absolute tumor DNA copy numbers can currently be determined only on a single-gene basis by using fluorescence in situ hybridization (FISH). We present GeneCount, a method for genome-wide calculation of absolute copy numbers from clinical array comparative genomic hybridization data. The tumor cell fraction is reliably estimated in the model. Data consistent with FISH results are achieved. We demonstrate significant improvements over existing methods for exploring gene dosages and intratumor copy number heterogeneity in cancers. PMID:18500990

  12. Three-Class Mammogram Classification Based on Descriptive CNN Features.

    PubMed

    Jadoon, M Mohsin; Zhang, Qianni; Haq, Ihsan Ul; Butt, Sharjeel; Jadoon, Adeel

    2017-01-01

    In this paper, a novel classification technique for a large data set of mammograms using a deep learning method is proposed. The proposed model targets a three-class classification study (normal, malignant, and benign cases). In our model we have presented two methods, namely, convolutional neural network-discrete wavelet (CNN-DW) and convolutional neural network-curvelet transform (CNN-CT). An augmented data set is generated by using mammogram patches. To enhance the contrast of mammogram images, the data set is filtered by contrast limited adaptive histogram equalization (CLAHE). In the CNN-DW method, enhanced mammogram images are decomposed into four subbands by means of the two-dimensional discrete wavelet transform (2D-DWT), while in the second method the discrete curvelet transform (DCT) is used. In both methods, dense scale-invariant features (DSIFT) are extracted for all subbands. An input data matrix containing these subband features of all the mammogram patches is created and processed as input to a convolutional neural network (CNN). A softmax layer and a support vector machine (SVM) layer are used to train the CNN for classification. The proposed methods have been compared with existing methods in terms of accuracy rate, error rate, and various validation assessment measures. CNN-DW and CNN-CT have achieved accuracy rates of 81.83% and 83.74%, respectively. Simulation results clearly validate the significance and impact of our proposed model as compared to other well-known existing techniques.

  13. BinQuasi: a peak detection method for ChIP-sequencing data with biological replicates.

    PubMed

    Goren, Emily; Liu, Peng; Wang, Chao; Wang, Chong

    2018-04-19

    ChIP-seq experiments that are aimed at detecting DNA-protein interactions require biological replication to draw inferential conclusions; however, there is no current consensus on how to analyze ChIP-seq data with biological replicates. Very few methodologies exist for the joint analysis of replicated ChIP-seq data, with approaches ranging from combining the results of analyzing replicates individually to joint modeling of all replicates. Combining the results of individual replicates analyzed separately can lead to reduced peak classification performance compared to joint modeling. Currently available methods for joint analysis may fail to control the false discovery rate at the nominal level. We propose BinQuasi, a peak caller for replicated ChIP-seq data, that jointly models biological replicates using a generalized linear model framework and employs a one-sided quasi-likelihood ratio test to detect peaks. When applied to simulated data and real datasets, BinQuasi performs favorably compared to existing methods, including better control of the false discovery rate than existing joint modeling approaches. BinQuasi offers a flexible approach to joint modeling of replicated ChIP-seq data which is preferable to combining the results of replicates analyzed individually. Source code is freely available for download at https://cran.r-project.org/package=BinQuasi, implemented in R. pliu@iastate.edu or egoren@iastate.edu. Supplementary material is available at Bioinformatics online.

  14. A reconsideration of negative ratings for network-based recommendation

    NASA Astrophysics Data System (ADS)

    Hu, Liang; Ren, Liang; Lin, Wenbin

    2018-01-01

    Recommendation algorithms based on bipartite networks have become increasingly popular, thanks to their accuracy and flexibility. Currently, many of these methods ignore users' negative ratings. In this work, we propose a method to exploit negative ratings for the network-based inference algorithm. We find that negative ratings play a positive role regardless of the sparsity of the data sets. Furthermore, we improve the efficiency of our method and compare it with the state-of-the-art algorithms. Experimental results show that the present method outperforms the existing algorithms.
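
    As a hedged sketch of the general idea (the paper's exact weighting may differ), the following treats high ratings as positive links and low ratings as negative links in the standard two-step mass-diffusion scheme on a user-item bipartite network; the rating matrix is toy.

        import numpy as np

        R = np.array([[5, 0, 4, 0],      # toy ratings: rows=users, cols=items, 0=unrated
                      [4, 2, 0, 1],
                      [0, 5, 4, 0]], dtype=float)

        pos = (R >= 4).astype(float)               # liked items
        neg = ((R > 0) & (R <= 2)).astype(float)   # disliked items
        signed = pos - neg                         # signed bipartite adjacency

        def diffuse(signed_adj, user):
            # item -> user -> item resource spreading, seeded by the target
            # user's signed profile; degrees count all rated links.
            rated = np.abs(signed_adj)
            k_item = rated.sum(axis=0).clip(min=1)
            k_user = rated.sum(axis=1).clip(min=1)
            f0 = signed_adj[user]                  # initial resource on items
            to_users = (rated * f0 / k_item).sum(axis=1) / k_user
            return rated.T @ to_users

        scores = diffuse(signed, user=0)
        scores[R[0] > 0] = -np.inf                 # do not re-recommend rated items
        print("recommend item", int(np.argmax(scores)))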

  15. [Application and comparison of NDT-Bobath and Vojta methods in treatment of selected pathologies of the nervous system in children].

    PubMed

    Jóźwiak, Sergiusz; Podogrodzki, Jacek

    2010-01-01

    The paper compares the effectiveness of the NDT-Bobath and Vojta methods in the treatment of selected dysfunctions of the nervous system in children. It evaluates the applicability of both methods in prenatal and perinatal injury of the central nervous system, myelomeningocele, Down syndrome and spasticity. The existing literature is supplemented by the authors' own clinical experience. The paper offers an opinion on the ongoing debate over the superiority of one method over the other.

  16. Statistical Considerations Concerning Dissimilar Regulatory Requirements for Dissolution Similarity Assessment. The Example of Immediate-Release Dosage Forms.

    PubMed

    Jasińska-Stroschein, Magdalena; Kurczewska, Urszula; Orszulak-Michalak, Daria

    2017-05-01

    When performing in vitro dissolution testing, especially in the area of biowaivers, it is necessary to follow regulatory guidelines to minimize the risk of an unsafe or ineffective product being approved. The present study examines model-independent and model-dependent methods of comparing dissolution profiles, comparing and contrasting the various international guidelines. Dissolution profiles for immediate release solid oral dosage forms were generated. The test material comprised tablets containing several substances, with at least 85% of the labeled amount dissolved within 15 min, 20-30 min, or 45 min. Dissolution profile similarity can vary with regard to the following criteria: time point selection (including the last time point), coefficient of variation, and statistical method selection. Variation between regulatory guidance and statistical methods can raise methodological questions and potentially result in different outcomes when reporting dissolution profile testing. The harmonization of existing guidelines would address existing problems concerning the interpretation of regulatory recommendations and research findings. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  17. Determination of N epsilon-(carboxymethyl)lysine in foods and related systems.

    PubMed

    Ames, Jennifer M

    2008-04-01

    The sensitive and specific determination of advanced glycation end products (AGEs) is of considerable interest because these compounds have been associated with pro-oxidative and proinflammatory effects in vivo. AGEs form when carbonyl compounds, such as glucose and its oxidation products, glyoxal and methylglyoxal, react with the epsilon-amino group of lysine and the guanidino group of arginine to give structures including N epsilon-(carboxymethyl)lysine (CML), N epsilon-(carboxyethyl)lysine, and hydroimidazolones. CML is frequently used as a marker for AGEs in general. It exists in both free and peptide-bound forms. Analysis of CML involves its extraction from the food (including protein hydrolysis to release any peptide-bound adduct) and determination by immunochemical or instrumental means. Various factors must be considered at each step of the analysis. Extraction, hydrolysis, and sample clean-up are all less straightforward for food samples compared to plasma and tissue. The immunochemical and instrumental methods all have their advantages and disadvantages, and no perfect method exists. Currently, different procedures are being used in different laboratories, and there is an urgent need to compare, improve, and validate methods.

  18. CNV-TV: a robust method to discover copy number variation from short sequencing reads.

    PubMed

    Duan, Junbo; Zhang, Ji-Gang; Deng, Hong-Wen; Wang, Yu-Ping

    2013-05-02

    Copy number variation (CNV) is an important structural variation (SV) in the human genome. Various studies have shown that CNVs are associated with complex diseases. Traditional CNV detection methods such as fluorescence in situ hybridization (FISH) and array comparative genomic hybridization (aCGH) suffer from low resolution. The next generation sequencing (NGS) technique promises a higher resolution detection of CNVs and several methods were recently proposed for realizing such a promise. However, the performances of these methods are not robust under some conditions, e.g., some of them may fail to detect CNVs of short sizes. There has been a strong demand for reliable detection of CNVs from high resolution NGS data. A novel and robust method to detect CNV from short sequencing reads is proposed in this study. The detection of CNV is modeled as a change-point detection from the read depth (RD) signal derived from the NGS, which is fitted with a total variation (TV) penalized least squares model. The performance (e.g., sensitivity and specificity) of the proposed approach is evaluated by comparison with several recently published methods on both simulated and real data from the 1000 Genomes Project. The experimental results showed that both the true positive rate and false positive rate of the proposed detection method do not change significantly for CNVs with different copy numbers and lengths, when compared with several existing methods. Therefore, our proposed approach results in a more reliable detection of CNVs than the existing methods.
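
    A minimal sketch of the TV-penalized read-depth idea, with skimage's Chambolle TV denoiser standing in for the paper's penalized least squares solver; the signal, weight, and jump threshold are illustrative.

        import numpy as np
        from skimage.restoration import denoise_tv_chambolle

        rng = np.random.default_rng(2)
        depth = np.r_[np.full(300, 30.0), np.full(80, 60.0), np.full(300, 30.0)]
        depth = rng.poisson(depth).astype(float)   # noisy read depth with one duplication

        fit = denoise_tv_chambolle(depth / depth.mean(), weight=0.3)  # piecewise-constant fit
        jumps = np.where(np.abs(np.diff(fit)) > 0.2)[0]   # candidate change-points
        print("breakpoints near:", jumps)                 # expect ~300 and ~380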

  19. Sampling Frequency Optimisation and Nonlinear Distortion Mitigation in Subsampling Receiver

    NASA Astrophysics Data System (ADS)

    Castanheira, Pedro Xavier Melo Fernandes

    Subsampling receivers utilise the subsampling method to down-convert signals from radio frequency (RF) to a lower frequency location. Multiple signals can also be down-converted using the subsampling receiver, but using an incorrect subsampling frequency could result in signals aliasing one another after down-conversion. Existing methods for subsampling multiband signals focused on down-converting all the signals without any aliasing between them. The case considered initially was a dual-band signal, which was then extended to the more general multiband case. In this thesis, a new method is proposed under the assumption that only one target signal must remain free of overlap from the other multiband signals that are down-converted at the same time. The proposed method introduces formulas based on this assumption to calculate the valid subsampling frequencies, ensuring that the target signal is not aliased by the other signals. Simulation results show that the proposed method provides lower valid subsampling frequencies for down-conversion compared to the existing methods.
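
    For the classic single-band case that such multiband formulas generalize, the alias-free rates satisfy 2·f_hi/n ≤ fs ≤ 2·f_lo/(n−1) for integers 1 ≤ n ≤ floor(f_hi/bandwidth); a minimal sketch with an illustrative band:

        from math import floor

        def valid_subsampling_ranges(f_lo, f_hi):
            # Returns (n, fs_min, fs_max) triples of alias-free subsampling
            # rates for a single bandpass signal occupying [f_lo, f_hi].
            bw = f_hi - f_lo
            ranges = []
            for n in range(1, floor(f_hi / bw) + 1):
                lo = 2 * f_hi / n
                hi = 2 * f_lo / (n - 1) if n > 1 else float("inf")
                if lo <= hi:
                    ranges.append((n, lo, hi))
            return ranges

        # A 20-25 MHz band: the lowest valid rate is 10 MHz (twice the bandwidth)
        for n, lo, hi in valid_subsampling_ranges(20e6, 25e6):
            print(f"n={n}: {lo/1e6:.2f} MHz <= fs <= {hi/1e6:.2f} MHz")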

  20. Development of a press and drag method for hyperlink selection on smartphones.

    PubMed

    Chang, Joonho; Jung, Kihyo

    2017-11-01

    The present study developed a novel touch method for hyperlink selection on smartphones consisting of two sequential finger interactions: press and drag motions. The novel method requires a user to press a target hyperlink, and if a touch error occurs he/she can immediately correct the touch error by dragging the finger without releasing it in the middle. The method was compared with two existing methods in terms of completion time, error rate, and subjective rating. Forty college students participated in the experiments with different hyperlink sizes (4-pt, 6-pt, 8-pt, and 10-pt) on a touch-screen device. When hyperlink size was small (4-pt and 6-pt), the novel method (time: 826 msec; error: 0.6%) demonstrated better completion time and error rate than the current method (time: 1194 msec; error: 22%). In addition, the novel method (1.15, slightly satisfied, in 7-pt bipolar scale) had significantly higher satisfaction scores than the two existing methods (0.06, neutral). Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Reconstructing metastatic seeding patterns of human cancers

    PubMed Central

    Reiter, Johannes G.; Makohon-Moore, Alvin P.; Gerold, Jeffrey M.; Bozic, Ivana; Chatterjee, Krishnendu; Iacobuzio-Donahue, Christine A.; Vogelstein, Bert; Nowak, Martin A.

    2017-01-01

    Reconstructing the evolutionary history of metastases is critical for understanding their basic biological principles and has profound clinical implications. Genome-wide sequencing data has enabled modern phylogenomic methods to accurately dissect subclones and their phylogenies from noisy and impure bulk tumour samples at unprecedented depth. However, existing methods are not designed to infer metastatic seeding patterns. Here we develop a tool, called Treeomics, to reconstruct the phylogeny of metastases and map subclones to their anatomic locations. Treeomics infers comprehensive seeding patterns for pancreatic, ovarian, and prostate cancers. Moreover, Treeomics correctly disambiguates true seeding patterns from sequencing artifacts; 7% of variants were misclassified by conventional statistical methods. These artifacts can skew phylogenies by creating illusory tumour heterogeneity among distinct samples. In silico benchmarking on simulated tumour phylogenies across a wide range of sample purities (15–95%) and sequencing depths (25–800×) demonstrates the accuracy of Treeomics compared with existing methods. PMID:28139641

  2. Evaluation of techniques for increasing recall in a dictionary approach to gene and protein name identification.

    PubMed

    Schuemie, Martijn J; Mons, Barend; Weeber, Marc; Kors, Jan A

    2007-06-01

    Gene and protein name identification in text requires a dictionary approach to relate synonyms to the same gene or protein, and to link names to external databases. However, existing dictionaries are incomplete. We investigate two complementary methods for automatic generation of a comprehensive dictionary: combination of information from existing gene and protein databases and rule-based generation of spelling variations. Both methods have been reported in literature before, but have hitherto not been combined and evaluated systematically. We combined gene and protein names from several existing databases of four different organisms. The combined dictionaries showed a substantial increase in recall on three different test sets, as compared to any single database. Application of 23 spelling variation rules to the combined dictionaries further increased recall. However, many rules appeared to have no effect and some appear to have a detrimental effect on precision.
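
    As an illustrative sketch of rule-based variant generation (three typical rules, not the paper's full set of 23):

        import re

        GREEK = {"alpha": "a", "beta": "b", "gamma": "g"}
        ROMAN = {"ii": "2", "iii": "3", "iv": "4"}

        def variants(name):
            # Expand a gene/protein name with hyphen/space, Greek-letter, and
            # Roman-numeral spelling variations.
            out = {name.lower()}
            for v in list(out):
                out.add(v.replace("-", " "))        # hyphen -> space
                out.add(v.replace("-", ""))         # hyphen removed
            for v in list(out):
                for word, letter in GREEK.items():  # spelled-out Greek -> letter
                    out.add(re.sub(rf"\b{word}\b", letter, v))
            for v in list(out):
                for rom, arab in ROMAN.items():     # Roman -> Arabic numerals
                    out.add(re.sub(rf"\b{rom}\b", arab, v))
            return out

        print(variants("TGF-beta"))  # includes 'tgf beta', 'tgfbeta', 'tgf-b', ...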

  3. Synthesis of Greedy Algorithms Using Dominance Relations

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
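
    As a concrete instance, activity selection admits a simple dominance argument: among mutually compatible choices, the earliest-finishing activity dominates any later-finishing one, so a greedy scan in finish-time order is safe. A minimal sketch:

        def select_activities(intervals):
            # intervals: list of (start, finish); returns a maximum-size
            # mutually compatible subset via the dominant greedy choice.
            chosen, last_finish = [], float("-inf")
            for start, finish in sorted(intervals, key=lambda iv: iv[1]):
                if start >= last_finish:     # earliest-finishing compatible pick
                    chosen.append((start, finish))
                    last_finish = finish
            return chosen

        print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (6, 10), (8, 11)]))
        # [(1, 4), (5, 7), (8, 11)]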

  4. Generalized disequilibrium test for association in qualitative traits incorporating imprinting effects based on extended pedigrees.

    PubMed

    Li, Jian-Long; Wang, Peng; Fung, Wing Kam; Zhou, Ji-Yuan

    2017-10-16

    For dichotomous traits, the generalized disequilibrium test with the moment estimate of the variance (GDT-ME) is a powerful family-based association method. Genomic imprinting is an important epigenetic phenomenon, and there has been increasing interest in incorporating imprinting to improve the power of association analysis. However, GDT-ME does not take imprinting effects into account, and it has not been investigated whether it can be used for association analysis when the effects indeed exist. In this article, based on a novel decomposition of the genotype score according to the paternal or maternal source of the allele, we propose the generalized disequilibrium test with imprinting (GDTI) for complete pedigrees without any missing genotypes. Then, we extend GDTI and GDT-ME to accommodate incomplete pedigrees with some pedigrees having missing genotypes, by using a Monte Carlo (MC) sampling and estimation scheme to infer missing genotypes given available genotypes in each pedigree, denoted by MCGDTI and MCGDT-ME, respectively. The proposed GDTI and MCGDTI methods evaluate the differences of the paternal as well as maternal allele scores for all discordant relative pairs in a pedigree, including beyond first-degree relative pairs. Advantages of the proposed GDTI and MCGDTI test statistics over existing methods are demonstrated by simulation studies under various simulation settings and by application to the rheumatoid arthritis dataset. Simulation results show that the proposed tests control the size well under the null hypothesis of no association, and outperform the existing methods under various imprinting effect models. The existing GDT-ME and the proposed MCGDT-ME can be used to test for association even when imprinting effects exist. For the application to the rheumatoid arthritis data, compared to the existing methods, MCGDTI identifies more loci statistically significantly associated with the disease. Under complete and incomplete imprinting effect models, our proposed GDTI and MCGDTI methods, by considering the information on imprinting effects and all discordant relative pairs within each pedigree, outperform all the existing test statistics and MCGDTI can recapture much of the missing information. Therefore, MCGDTI is recommended in practice.

  5. Robust digital image watermarking using distortion-compensated dither modulation

    NASA Astrophysics Data System (ADS)

    Li, Mianjie; Yuan, Xiaochen

    2018-04-01

    In this paper, we propose a robust feature extraction based digital image watermarking method using Distortion-Compensated Dither Modulation (DC-DM). Our proposed local watermarking method provides stronger robustness and better flexibility than traditional global watermarking methods. We improve robustness by introducing feature extraction and the DC-DM method. To extract robust feature points, we propose a DAISY-based Robust Feature Extraction (DRFE) method that employs the DAISY descriptor and applies filtering based on entropy calculation. The experimental results show that the proposed method achieves satisfactory robustness while ensuring watermark imperceptibility, compared to other existing methods.
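
    A minimal numpy sketch of the DC-DM core (quantization-index modulation with distortion compensation); the step size, compensation factor, and host samples are illustrative, and the paper applies this to extracted image features rather than raw values.

        import numpy as np

        rng = np.random.default_rng(3)
        delta, alpha = 4.0, 0.8                    # quantizer step, compensation factor
        x = rng.normal(0, 10, 8)                   # host coefficients
        bits = rng.integers(0, 2, 8)               # watermark bits

        dither = lambda b: b * delta / 2           # bit 0 -> 0, bit 1 -> delta/2
        q = lambda v: delta * np.round(v / delta)  # uniform quantizer

        # Embed: move a fraction alpha of the way onto the bit's dithered lattice
        y = x + alpha * (q(x - dither(bits)) + dither(bits) - x)

        # Decode: choose the bit whose dithered lattice is nearest to each sample
        d0 = np.abs(y - (q(y - dither(0)) + dither(0)))
        d1 = np.abs(y - (q(y - dither(1)) + dither(1)))
        decoded = (d1 < d0).astype(int)
        print("bit errors:", int(np.sum(decoded != bits)))  # 0 without attack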

  6. Straightening: existence, uniqueness and stability

    PubMed Central

    Destrade, M.; Ogden, R. W.; Sgura, I.; Vergori, L.

    2014-01-01

    One of the least studied universal deformations of incompressible nonlinear elasticity, namely the straightening of a sector of a circular cylinder into a rectangular block, is revisited here and, in particular, issues of existence and stability are addressed. Particular attention is paid to the system of forces required to sustain the large static deformation, including by the application of end couples. The influence of geometric parameters and constitutive models on the appearance of wrinkles on the compressed face of the block is also studied. Different numerical methods for solving the incremental stability problem are compared and it is found that the impedance matrix method, based on the resolution of a matrix Riccati differential equation, is the more precise. PMID:24711723

  7. A novel approach for targeted delivery to motoneurons using cholera toxin-B modified protocells

    PubMed Central

    Gonzalez Porras, Maria A.; Durfee, Paul N.; Gregory, Ashley M.; Sieck, Gary C.; Brinker, C. Jeffrey; Mantilla, Carlos B.

    2017-01-01

    Background Trophic interactions between muscle fibers and motoneurons at the neuromuscular junction (NMJ) play a critical role in determining motor function throughout development, ageing, injury, or disease. Treatment of neuromuscular disorders is hindered by the inability to selectively target motoneurons with pharmacological and genetic interventions. New method We describe a novel delivery system to motoneurons using mesoporous silica nanoparticles encapsulated within a lipid bilayer (protocells) and modified with the atoxic subunit B of the cholera toxin (CTB) that binds to gangliosides present on neuronal membranes. Results CTB modified protocells showed significantly greater motoneuron uptake compared to unmodified protocells after 24 h of treatment (60% vs. 15%, respectively). CTB-protocells showed specific uptake by motoneurons compared to muscle cells and demonstrated cargo release of a surrogate drug. Protocells showed a lack of cytotoxicity and unimpaired cellular proliferation. In isolated diaphragm muscle-phrenic nerve preparations, preferential axon terminal uptake of CTB-modified protocells was observed compared to uptake in surrounding muscle tissue. A larger proportion of axon terminals displayed uptake following treatment with CTB-protocells compared to unmodified protocells (40% vs. 6%, respectively). Comparison with existing method(s) Current motoneuron targeting strategies lack the functionality to load and deliver multiple cargos. CTB-protocells capitalizes on the advantages of liposomes and mesoporous silica nanoparticles allowing a large loading capacity and cargo release. The ability of CTB-protocells to target motoneurons at the NMJ confers a great advantage over existing methods. Conclusions CTB-protocells constitute a viable targeted motoneuron delivery system for drugs and genes facilitating various therapies for neuromuscular diseases. PMID:27641118

  8. Optical image encryption by random shifting in fractional Fourier domains

    NASA Astrophysics Data System (ADS)

    Hennelly, B.; Sheridan, J. T.

    2003-02-01

    A number of methods have recently been proposed in the literature for the encryption of two-dimensional information by use of optical systems based on the fractional Fourier transform. Typically, these methods require random phase screen keys for decrypting the data, which must be stored at the receiver and must be carefully aligned with the received encrypted data. A new technique based on a random shifting, or jigsaw, algorithm is proposed. This method does not require the use of phase keys. The image is encrypted by juxtaposition of sections of the image in fractional Fourier domains. The new method has been compared with existing methods and shows comparable or superior robustness to blind decryption. Optical implementation is discussed, and the sensitivity of the various encryption keys to blind decryption is examined.

  9. A New Stochastic Equivalent Linearization Implementation for Prediction of Geometrically Nonlinear Vibrations

    NASA Technical Reports Server (NTRS)

    Muravyov, Alexander A.; Turner, Travis L.; Robinson, Jay H.; Rizzi, Stephen A.

    1999-01-01

    In this paper, the problem of random vibration of geometrically nonlinear MDOF structures is considered. The solutions obtained by application of two different versions of a stochastic linearization method are compared with exact (F-P-K) solutions. The formulation of a relatively new version of the stochastic linearization method (energy-based version) is generalized to the MDOF system case. Also, a new method for determination of nonlinear stiffness coefficients for MDOF structures is demonstrated. This method, in combination with the equivalent linearization technique, is implemented in a new computer program. Results in terms of root-mean-square (RMS) displacements obtained by using the new program and an existing in-house code are compared for two examples of beam-like structures.

  10. Temporal variations in Global Seismic Stations ambient noise power levels

    USGS Publications Warehouse

    Ringler, A.T.; Gee, L.S.; Hutt, C.R.; McNamara, D.E.

    2010-01-01

    Recent concerns about time-dependent response changes in broadband seismometers have motivated the need for methods to monitor sensor health at Global Seismographic Network (GSN) stations. We present two new methods for monitoring temporal changes in data quality and instrument response transfer functions that are independent of Earth seismic velocity and attenuation models by comparing power levels against different baseline values. Our methods can resolve changes in both horizontal and vertical components in a broad range of periods (∼0.05 to 1,000 seconds) in near real time. In this report, we compare our methods with existing techniques and demonstrate how to resolve instrument response changes in long-period data (>100 seconds) as well as in the microseism bands (5 to 20 seconds).
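
    In the same spirit, a minimal sketch of baseline power-level monitoring: estimate each day's PSD, average it over a fixed period band, and flag shifts from a station baseline. The synthetic data, band, and 1 dB threshold are illustrative only.

        import numpy as np
        from scipy.signal import welch

        rng = np.random.default_rng(4)
        fs = 20.0                                    # Hz
        # Toy "daily" records whose noise level slowly drifts upward
        days = [rng.normal(0, 1 + 0.02 * d, 86_400) for d in range(10)]

        def band_power_db(trace):
            f, pxx = welch(trace, fs=fs, nperseg=4096)
            band = (f >= 0.05) & (f <= 0.2)          # 5-20 s microseism band
            return 10 * np.log10(pxx[band].mean())

        baseline = band_power_db(days[0])
        for d, trace in enumerate(days):
            shift = band_power_db(trace) - baseline
            if abs(shift) > 1.0:                     # dB threshold for review
                print(f"day {d}: band power shifted {shift:+.2f} dB from baseline")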

  11. Prediction of arterial oxygen partial pressure after changes in FIO₂: validation and clinical application of a novel formula.

    PubMed

    Al-Otaibi, H M; Hardman, J G

    2011-11-01

    Existing methods allow prediction of Pa(O₂) during adjustment of Fi(O₂). However, these are cumbersome and lack sufficient accuracy for use in the clinical setting. The present studies aim to extend the validity of a novel formula designed to predict Pa(O₂) during adjustment of Fi(O₂) and to compare it with the current methods. Sixty-seven new data sets were collected from 46 randomly selected, mechanically ventilated patients. Each data set consisted of two subsets (before and 20 min after Fi(O₂) adjustment) and contained ventilator settings, pH, and arterial blood gas values. We compared the accuracy of Pa(O₂) prediction using a new formula (which utilizes only the pre-adjustment Pa(O₂) and the pre- and post-adjustment Fi(O₂)) with prediction using assumptions of constant Pa(O₂)/Fi(O₂) or constant Pa(O₂)/PA(O₂). Subsequently, 20 clinicians predicted Pa(O₂) using the new formula and using Nunn's isoshunt diagram. The accuracy of the clinicians' predictions was examined. The 95% limits of agreement (LA(95%)) between predicted and measured Pa(O₂) in the patient group were: new formula 0.11 (2.0) kPa, Pa(O₂)/Fi(O₂) -1.9 (4.4) kPa, and Pa(O₂)/PA(O₂) -1.0 (3.6) kPa. The LA(95%) of clinicians' predictions of Pa(O₂) were 0.56 (3.6) kPa (new formula) and -2.7 (6.4) kPa (isoshunt diagram). The new formula's prediction of changes in Pa(O₂) is acceptably accurate and reliable and better than any other existing method. Its use by clinicians appears to improve accuracy over the most popular existing method. The simplicity of the new method may allow its regular use in the critical care setting.

  12. Comprehensive Numerical Analysis of Finite Difference Time Domain Methods for Improving Optical Waveguide Sensor Accuracy

    PubMed Central

    Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly

    2016-01-01

    This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time savings in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties, which should be calculated as in the previous method. Generally, a small number of arithmetic processes, which result in a shorter simulation time, are desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.

  13. Comparative analysis of image classification methods for automatic diagnosis of ophthalmic images

    NASA Astrophysics Data System (ADS)

    Wang, Liming; Zhang, Kai; Liu, Xiyang; Long, Erping; Jiang, Jiewei; An, Yingying; Zhang, Jia; Liu, Zhenzhen; Lin, Zhuoling; Li, Xiaoyan; Chen, Jingjing; Cao, Qianzhong; Li, Jing; Wu, Xiaohang; Wang, Dongni; Li, Wangting; Lin, Haotian

    2017-01-01

    There are many image classification methods, but it remains unclear which are most helpful for analyzing and intelligently identifying ophthalmic images. We select representative slit-lamp images, which show the complexity of ocular images, as research material to compare image classification algorithms for diagnosing ophthalmic diseases. To facilitate this study, several feature extraction algorithms and classifiers are combined to automatically diagnose pediatric cataract on the same dataset, and their performance is then compared using multiple criteria. This comparative study reveals the general characteristics of the existing methods for automatic identification of ophthalmic images and provides new insights into the strengths and shortcomings of these methods. The relevant methods (local binary pattern + SVMs, wavelet transformation + SVMs) achieve an average accuracy of 87% and can be adopted in specific situations to aid doctors in preliminary disease screening. Furthermore, some methods requiring fewer computational resources and less time could be applied in remote places or on mobile devices to assist individuals in understanding the condition of their body. In addition, this work should help accelerate the development of innovative approaches and the application of these methods to assist doctors in diagnosing ophthalmic disease.

  14. Communication: Analysing kinetic transition networks for rare events.

    PubMed

    Stevenson, Jacob D; Wales, David J

    2014-07-28

    The graph transformation approach is a recently proposed method for computing mean first passage times, rates, and committor probabilities for kinetic transition networks. Here we compare the performance to existing linear algebra methods, focusing on large, sparse networks. We show that graph transformation provides a much more robust framework, succeeding when numerical precision issues cause the other methods to fail completely. These are precisely the situations that correspond to rare event dynamics for which the graph transformation was introduced.
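
    For reference, the standard linear-algebra route that graph transformation is compared against solves for mean first passage times via the fundamental matrix, tau = (I - Q)^{-1}·1 over the transient block Q; a toy three-state sketch (it is exactly this solve that loses numerical precision in the rare-event regime):

        import numpy as np

        P = np.array([[0.50, 0.40, 0.10],   # toy chain; state 2 is the target
                      [0.30, 0.65, 0.05],
                      [0.00, 0.00, 1.00]])

        Q = P[:2, :2]                        # transitions among transient states 0,1
        tau = np.linalg.solve(np.eye(2) - Q, np.ones(2))
        print("mean first passage times to state 2:", tau)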

  15. VMT Mix Modeling for Mobile Source Emissions Forecasting: Formulation and Empirical Application

    DOT National Transportation Integrated Search

    2000-05-01

    The purpose of the current report is to propose and implement a methodology for obtaining improved link-specific vehicle miles of travel (VMT) mix values compared to those obtained from existent methods. Specifically, the research is developing a fra...

  16. Comparing microscopic activity-based and traditional models of travel demand : an Austin area case study

    DOT National Transportation Integrated Search

    2007-09-01

    Two competing approaches to travel demand modeling exist today. The more traditional 4-step travel demand models rely on aggregate demographic data at a traffic analysis zone (TAZ) level. Activity-based microsimulation methods employ more robus...

  17. A SCIENTIFIC AND TECHNOLOGICAL FRAMEWORK FOR ENVIRONMENTAL DECISION MAKING

    EPA Science Inventory

    There are significant scientific and technological challenges to managing natural resources. Data needs are cited as an obvious limitation, but there exist more fundamental scientific issues. What is still needed is a method of comparing management strategies based on projected i...

  18. REVIEW OF QUANTITATIVE STANDARDS AND GUIDELINES FOR FUNGI IN INDOOR AIR

    EPA Science Inventory

    Exposure to fungal aerosols clearly causes human disease. However, methods for assessing exposure remain poorly understood, and guidelines for interpreting data are often contradictory. The purposes of this paper are to review and compare existing guidelines for indoor airborne...

  19. Compare diagnostic tests using transformation-invariant smoothed ROC curves⋆

    PubMed Central

    Tang, Liansheng; Du, Pang; Wu, Chengqing

    2012-01-01

    The receiver operating characteristic (ROC) curve, plotting true positive rates against false positive rates as the threshold varies, is an important tool for evaluating biomarkers in diagnostic medicine studies. By definition, the ROC curve is monotone increasing from 0 to 1 and is invariant to any monotone transformation of test results. It is also often a curve with a certain level of smoothness when test results from the diseased and non-diseased subjects follow continuous distributions. Most existing ROC curve estimation methods do not guarantee all of these properties. One of the exceptions is Du and Tang (2009), which applies a certain monotone spline regression procedure to empirical ROC estimates. However, their method does not consider the inherent correlations between empirical ROC estimates. This makes the derivation of the asymptotic properties very difficult. In this paper we propose a penalized weighted least squares estimation method, which incorporates the covariance between empirical ROC estimates as a weight matrix. The resulting estimator satisfies all the aforementioned properties, and we show that it is also consistent. A resampling approach is then used to extend our method for comparisons of two or more diagnostic tests. Our simulations show a significantly improved performance over the existing method, especially for steep ROC curves. We then apply the proposed method to a cancer diagnostic study that compares several newly developed diagnostic biomarkers to a traditional one. PMID:22639484

  20. Adaptive cockroach swarm algorithm

    NASA Astrophysics Data System (ADS)

    Obagbuwa, Ibidun C.; Abidoye, Ademola P.

    2017-07-01

    An adaptive cockroach swarm optimization (ACSO) algorithm is proposed in this paper to strengthen the existing cockroach swarm optimization (CSO) algorithm. The ruthless component of the CSO algorithm is modified by employing a blend-crossover predator-prey evolution method, which helps the algorithm prevent any possible population collapse, maintain population diversity, and search adaptively in each iteration. The performance of the proposed algorithm on 16 global optimization benchmark function problems was evaluated and compared with the existing CSO, cuckoo search, differential evolution, particle swarm optimization and artificial bee colony algorithms.

  1. Wavelet-based image compression using shuffling and bit plane correlation

    NASA Astrophysics Data System (ADS)

    Kim, Seungjong; Jeong, Jechang

    2000-12-01

    In this paper, we propose a wavelet-based image compression method using shuffling and bit plane correlation. The proposed method improves coding performance in two steps: (1) removing the sign bit plane by a shuffling process on the quantized coefficients, and (2) choosing the arithmetic coding context according to the maximum correlation direction. The experimental results are comparable or superior to existing coders for some images with low correlation.

  2. Florida Bay salinity and Everglades wetlands hydrology circa 1900 CE: A compilation of paleoecology-based statistical modeling analyses

    USGS Publications Warehouse

    Marshall, F.E.; Wingard, G.L.

    2012-01-01

    The upgraded method of coupled paleosalinity and hydrologic models was applied to the analysis of the circa-1900 CE segments of five estuarine sediment cores collected in Florida Bay. Comparisons of the observed mean stage (water level) data to the paleoecology-based model's averaged output show that the estimated stage in the Everglades wetlands was 0.3 to 1.6 feet higher at different locations. Observed mean flow data compared to the paleoecology-based model output show an estimated flow into Shark River Slough at Tamiami Trail of 401 to 2,539 cubic feet per second (cfs) higher than existing flows, and at Taylor Slough Bridge an estimated flow of 48 to 218 cfs above existing flows. For salinity in Florida Bay, the difference between paleoecology-based and observed mean salinity varies across the bay, from an aggregated average salinity of 14.7 less than existing in the northeastern basin to 1.0 less than existing in the western basin near the transition into the Gulf of Mexico. When the salinity differences are compared by region, the difference between paleoecology-based conditions and existing conditions are spatially consistent.

  4. Looking for trees in the forest: summary tree from posterior samples.

    PubMed

    Heled, Joseph; Bouckaert, Remco R

    2013-10-04

    Bayesian phylogenetic analysis generates a set of trees which are often condensed into a single tree representing the whole set. Many methods exist for selecting a representative topology for a set of unrooted trees, few exist for assigning branch lengths to a fixed topology, and even fewer for simultaneously setting the topology and branch lengths. However, there is very little research into locating a good representative for a set of rooted time trees like the ones obtained from a BEAST analysis. We empirically compare new and known methods for generating a summary tree. Some new methods are motivated by mathematical constructions such as tree metrics, while the rest employ tree concepts which work well in practice. These use more of the posterior than existing methods, which discard information not directly mapped to the chosen topology. Using results from a large number of simulations we assess the quality of a summary tree, measuring (a) how well it explains the sequence data under the model and (b) how close it is to the "truth", i.e., to the tree used to generate the sequences. Our simulations indicate that no single method is "best". Methods producing good divergence time estimates have poor branch lengths and lower model fit, and vice versa. Using the results presented here, a user can choose the appropriate method based on the purpose of the summary tree.

  5. Determining Semantically Related Significant Genes.

    PubMed

    Taha, Kamal

    2014-01-01

    The GO relation embodies some aspects of existence dependency. If GO term x is existence-dependent on GO term y, the presence of y implies the presence of x. Therefore, the genes annotated with the function of GO term y are usually functionally and semantically related to the genes annotated with the function of GO term x. A large number of gene set enrichment analysis methods have been developed in recent years for analyzing gene set enrichment. However, most of these methods overlook the structural dependencies between GO terms in the GO graph by not considering the concept of existence dependency. We propose in this paper a biological search engine called RSGSearch that identifies enriched sets of genes annotated with different functions using the concept of existence dependency. We observe that GO term x cannot be existence-dependent on GO term y if x and y have the same specificity (biological characteristics). After encoding into a numeric format the contributions of GO terms annotating target genes to the semantics of their lowest common ancestors (LCAs), RSGSearch uses a microarray experiment to identify the most significant LCA that annotates the result genes. We evaluated RSGSearch experimentally and compared it with five gene set enrichment systems. Results showed marked improvement.

  6. WE-FG-207B-05: Iterative Reconstruction Via Prior Image Constrained Total Generalized Variation for Spectral CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niu, S; Zhang, Y; Ma, J

    Purpose: To investigate iterative reconstruction via prior image constrained total generalized variation (PICTGV) for spectral computed tomography (CT) using fewer projections while achieving greater image quality. Methods: The proposed PICTGV method is formulated as an optimization problem, which balances the data fidelity and prior image constrained total generalized variation of reconstructed images in one framework. The PICTGV method is based on structure correlations among images in the energy domain and high-quality images to guide the reconstruction of energy-specific images. In the PICTGV method, the high-quality image is reconstructed from all detector-collected X-ray signals and is referred to as the broad-spectrum image. Distinct from the existing reconstruction methods applied on the images with first order derivative, the higher order derivative of the images is incorporated into the PICTGV method. An alternating optimization algorithm is used to minimize the PICTGV objective function. We evaluate the performance of PICTGV on noise and artifact suppression using phantom studies and compare the method with the conventional filtered back-projection method as well as the TGV based method without prior image. Results: On the digital phantom, the proposed method outperforms the existing TGV method in terms of noise reduction, artifact suppression, and edge detail preservation. Compared to that obtained by the TGV based method without prior image, the relative root mean square error in the images reconstructed by the proposed method is reduced by over 20%. Conclusion: The authors propose an iterative reconstruction via prior image constrained total generalized variation for spectral CT. Also, we have developed an alternating optimization algorithm and numerically demonstrated the merits of our approach. Results show that the proposed PICTGV method outperforms the TGV method for spectral CT.

  7. An improved method to represent DEM uncertainty in glacial lake outburst flood propagation using stochastic simulations

    NASA Astrophysics Data System (ADS)

    Watson, Cameron S.; Carrivick, Jonathan; Quincey, Duncan

    2015-10-01

    Modelling glacial lake outburst floods (GLOFs) or 'jökulhlaups', necessarily involves the propagation of large and often stochastic uncertainties throughout the source to impact process chain. Since flood routing is primarily a function of underlying topography, communication of digital elevation model (DEM) uncertainty should accompany such modelling efforts. Here, a new stochastic first-pass assessment technique was evaluated against an existing GIS-based model and an existing 1D hydrodynamic model, using three DEMs with different spatial resolution. The analysis revealed the effect of DEM uncertainty and model choice on several flood parameters and on the prediction of socio-economic impacts. Our new model, which we call MC-LCP (Monte Carlo Least Cost Path) and which is distributed in the supplementary information, demonstrated enhanced 'stability' when compared to the two existing methods, and this 'stability' was independent of DEM choice. The MC-LCP model outputs an uncertainty continuum within its extent, from which relative socio-economic risk can be evaluated. In a comparison of all DEM and model combinations, the Shuttle Radar Topography Mission (SRTM) DEM exhibited fewer artefacts compared to those with the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM), and were comparable to those with a finer resolution Advanced Land Observing Satellite Panchromatic Remote-sensing Instrument for Stereo Mapping (ALOS PRISM) derived DEM. Overall, we contend that the variability we find between flood routing model results suggests that consideration of DEM uncertainty and pre-processing methods is important when assessing flow routing and when evaluating potential socio-economic implications of a GLOF event. Incorporation of a stochastic variable provides an illustration of uncertainty that is important when modelling and communicating assessments of an inherently complex process.
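
    A minimal sketch of the stochastic first-pass idea as described: perturb the DEM by its vertical error, route a least-cost path per realization, and keep the visit frequency as an uncertainty map. The DEM, error level, endpoints, and cost function are toy assumptions, with skimage's route_through_array standing in for the GIS least-cost-path routine.

        import numpy as np
        from skimage.graph import route_through_array

        rng = np.random.default_rng(5)
        dem = np.add.outer(np.linspace(50, 0, 60), np.abs(np.linspace(-5, 5, 80)))
        rmse = 2.0                                # assumed DEM vertical RMSE (m)
        start, end = (0, 40), (59, 40)            # lake outlet -> valley exit

        visits = np.zeros_like(dem)
        for _ in range(200):
            noisy = dem + rng.normal(0, rmse, dem.shape)
            cost = noisy - noisy.min() + 1.0      # positive costs favour low ground
            path, _ = route_through_array(cost, start, end, fully_connected=True)
            for r, c in path:
                visits[r, c] += 1

        probability = visits / 200                # relative likelihood a cell is inundated
        print("cells touched in >50% of runs:", int((probability > 0.5).sum()))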

  8. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method, used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate parameter sensitivities.

  9. Reducing charging effects in scanning electron microscope images by Rayleigh contrast stretching method (RCS).

    PubMed

    Wan Ismail, W Z; Sim, K S; Tso, C P; Ting, H Y

    2011-01-01

    To reduce undesirable charging effects in scanning electron microscope images, Rayleigh contrast stretching is developed and employed. First, re-scaling is performed on the input image histograms with Rayleigh algorithm. Then, contrast stretching or contrast adjustment is implemented to improve the images while reducing the contrast charging artifacts. This technique has been compared to some existing histogram equalization (HE) extension techniques: recursive sub-image HE, contrast stretching dynamic HE, multipeak HE and recursive mean separate HE. Other post processing methods, such as wavelet approach, spatial filtering, and exponential contrast stretching, are compared as well. Overall, the proposed method produces better image compensation in reducing charging artifacts. Copyright © 2011 Wiley Periodicals, Inc.
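
    A minimal Python sketch of the two-stage idea (Rayleigh re-scaling of the histogram, then a linear contrast stretch) is given below; the target Rayleigh scale and output range are assumptions, since the abstract does not give parameter values:

        # Hedged sketch: histogram specification toward a Rayleigh
        # distribution followed by a linear contrast stretch.
        import numpy as np

        def rayleigh_contrast_stretch(img, scale=0.3, out_range=(0.0, 1.0)):
            flat = img.ravel().astype(float)
            # Rank-based empirical CDF for each pixel (ties broken arbitrarily).
            ranks = flat.argsort().argsort()
            cdf = (ranks + 0.5) / flat.size
            # Inverse Rayleigh CDF: F^-1(p) = scale * sqrt(-2 ln(1 - p)).
            mapped = scale * np.sqrt(-2.0 * np.log(1.0 - cdf))
            # Linear contrast stretch to the desired output range.
            lo, hi = mapped.min(), mapped.max()
            stretched = (mapped - lo) / (hi - lo)
            stretched = out_range[0] + stretched * (out_range[1] - out_range[0])
            return stretched.reshape(img.shape)

    Because the mapping is monotonic in pixel rank, bright charging regions are compressed toward the Rayleigh tail while mid-tone detail is spread out, which is the qualitative behaviour the abstract describes.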

  10. Tomography and generative training with quantum Boltzmann machines

    NASA Astrophysics Data System (ADS)

    Kieferová, Mária; Wiebe, Nathan

    2017-12-01

    The promise of quantum neural nets, which utilize quantum effects to model complex data sets, has made their development an aspirational goal for quantum machine learning and quantum computing in general. Here we provide methods of training quantum Boltzmann machines. Our work generalizes existing methods and provides additional approaches for training quantum neural networks that compare favorably to existing methods. We further demonstrate that quantum Boltzmann machines enable a form of partial quantum state tomography that further provides a generative model for the input quantum state. Classical Boltzmann machines are incapable of this. This verifies the long-conjectured connection between tomography and quantum machine learning. Finally, we prove that classical computers cannot simulate our training process in general unless BQP = BPP, provide lower bounds on the complexity of the training procedures, and numerically investigate training for small nonstoquastic Hamiltonians.

  11. Denoising Sparse Images from GRAPPA using the Nullspace Method (DESIGN)

    PubMed Central

    Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K

    2011-01-01

    To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with GRAPPA alone, the Denoising of Sparse Images from GRAPPA using the Nullspace method (DESIGN) is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DESIGN are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), PSNR, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DESIGN mitigates noise amplification better than both GRAPPA and L1 SPIR-iT (the latter limited here by uniform undersampling). PMID:22213069

  12. Advancing methods for research on household water insecurity: Studying entitlements and capabilities, socio-cultural dynamics, and political processes, institutions and governance.

    PubMed

    Wutich, Amber; Budds, Jessica; Eichelberger, Laura; Geere, Jo; Harris, Leila; Horney, Jennifer; Jepson, Wendy; Norman, Emma; O'Reilly, Kathleen; Pearson, Amber; Shah, Sameer; Shinn, Jamie; Simpson, Karen; Staddon, Chad; Stoler, Justin; Teodoro, Manuel P; Young, Sera

    2017-11-01

    Household water insecurity has serious implications for the health, livelihoods and wellbeing of people around the world. Existing methods to assess the state of household water insecurity focus largely on water quality, quantity or adequacy, source or reliability, and affordability. These methods have significant advantages in terms of their simplicity and comparability, but are widely recognized to oversimplify and underestimate the global burden of household water insecurity. In contrast, a broader definition of household water insecurity should include entitlements and human capabilities, sociocultural dynamics, and political institutions and processes. This paper proposes a mix of qualitative and quantitative methods that can be widely adopted across cultural, geographic, and demographic contexts to assess hard-to-measure dimensions of household water insecurity. In doing so, it critically evaluates existing methods for assessing household water insecurity and suggests ways in which methodological innovations advance a broader definition of household water insecurity.

  13. A 3D model retrieval approach based on Bayesian networks lightfield descriptor

    NASA Astrophysics Data System (ADS)

    Xiao, Qinhan; Li, Yanjun

    2009-12-01

    A new 3D model retrieval methodology is proposed by exploiting a novel Bayesian networks lightfield descriptor (BNLD). There are two key novelties in our approach: (1) a BN-based method for building the lightfield descriptor; and (2) a 3D model retrieval scheme based on the proposed BNLD. To overcome the disadvantages of existing 3D model retrieval methods, we explore BNs for building a new lightfield descriptor. First, the 3D model is placed into a lightfield and about 300 binary views are obtained along a sphere; Fourier descriptors and Zernike moment descriptors are then calculated from the binary views, and the shape feature sequence is learned into a BN model using a BN learning algorithm. Second, we propose a new 3D model retrieval method that calculates the Kullback-Leibler divergence (KLD) between BNLDs. Benefiting from statistical learning, our BNLD is robust to noise compared to existing methods. A comparison between our method and the lightfield descriptor-based approach is conducted to demonstrate the effectiveness of the proposed methodology.

  14. DIMM-SC: a Dirichlet mixture model for clustering droplet-based single cell transcriptomic data.

    PubMed

    Sun, Zhe; Wang, Ting; Deng, Ke; Wang, Xiao-Feng; Lafyatis, Robert; Ding, Ying; Hu, Ming; Chen, Wei

    2018-01-01

    Single cell transcriptome sequencing (scRNA-Seq) has become a revolutionary tool to study cellular and molecular processes at single cell resolution. Among existing technologies, the recently developed droplet-based platform enables efficient parallel processing of thousands of single cells with direct counting of transcript copies using Unique Molecular Identifier (UMI). Despite the technology advances, statistical methods and computational tools are still lacking for analyzing droplet-based scRNA-Seq data. Particularly, model-based approaches for clustering large-scale single cell transcriptomic data are still under-explored. We developed DIMM-SC, a Dirichlet Mixture Model for clustering droplet-based Single Cell transcriptomic data. This approach explicitly models UMI count data from scRNA-Seq experiments and characterizes variations across different cell clusters via a Dirichlet mixture prior. We performed comprehensive simulations to evaluate DIMM-SC and compared it with existing clustering methods such as K-means, CellTree and Seurat. In addition, we analyzed public scRNA-Seq datasets with known cluster labels and in-house scRNA-Seq datasets from a study of systemic sclerosis with prior biological knowledge to benchmark and validate DIMM-SC. Both simulation studies and real data applications demonstrated that overall, DIMM-SC achieves substantially improved clustering accuracy and much lower clustering variability compared to other existing clustering methods. More importantly, as a model-based approach, DIMM-SC is able to quantify the clustering uncertainty for each single cell, facilitating rigorous statistical inference and biological interpretations, which are typically unavailable from existing clustering methods. DIMM-SC has been implemented in a user-friendly R package with a detailed tutorial available at www.pitt.edu/~wec47/singlecell.html. Contact: wei.chen@chp.edu or hum@ccf.org. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  15. Cell-fusion method to visualize interphase nuclear pore formation.

    PubMed

    Maeshima, Kazuhiro; Funakoshi, Tomoko; Imamoto, Naoko

    2014-01-01

    In eukaryotic cells, the nucleus is a complex and sophisticated organelle that organizes genomic DNA to support essential cellular functions. The nuclear surface contains many nuclear pore complexes (NPCs), channels for macromolecular transport between the cytoplasm and nucleus. It is well known that the number of NPCs almost doubles during interphase in cycling cells. However, the mechanism of NPC formation is poorly understood, presumably because a practical system for analysis does not exist. The most difficult obstacle in the visualization of interphase NPC formation is that NPCs already exist after nuclear envelope formation, and these existing NPCs interfere with the observation of nascent NPCs. To overcome this obstacle, we developed a novel system using the cell-fusion technique (heterokaryon method), previously also used to analyze the shuttling of macromolecules between the cytoplasm and the nucleus, to visualize the newly synthesized interphase NPCs. In addition, we used a photobleaching approach that validated the cell-fusion method. We recently used these methods to demonstrate the role of cyclin-dependent protein kinases and of Pom121 in interphase NPC formation in cycling human cells. Here, we describe the details of the cell-fusion approach and compare the system with other NPC formation visualization methods. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. An Adaptive Association Test for Multiple Phenotypes with GWAS Summary Statistics.

    PubMed

    Kim, Junghi; Bai, Yun; Pan, Wei

    2015-12-01

    We study the problem of testing for single marker-multiple phenotype associations based on genome-wide association study (GWAS) summary statistics without access to individual-level genotype and phenotype data. For most published GWASs, because obtaining summary data is substantially easier than accessing individual-level phenotype and genotype data, while often multiple correlated traits have been collected, the problem studied here has become increasingly important. We propose a powerful adaptive test and compare its performance with some existing tests. We illustrate its applications to analyses of a meta-analyzed GWAS dataset with three blood lipid traits and another with sex-stratified anthropometric traits, and further demonstrate its potential power gain over some existing methods through realistic simulation studies. We start from the situation with only one set of (possibly meta-analyzed) genome-wide summary statistics, then extend the method to meta-analysis of multiple sets of genome-wide summary statistics, each from one GWAS. We expect the proposed test to be useful in practice as more powerful than or complementary to existing methods. © 2015 WILEY PERIODICALS, INC.

  17. COSMOS: accurate detection of somatic structural variations through asymmetric comparison between tumor and normal samples.

    PubMed

    Yamagata, Koichi; Yamanishi, Ayako; Kokubu, Chikara; Takeda, Junji; Sese, Jun

    2016-05-05

    An important challenge in cancer genomics is precise detection of structural variations (SVs) by high-throughput short-read sequencing, which is hampered by the high false discovery rates of existing analysis tools. Here, we propose an accurate SV detection method named COSMOS, which compares the statistics of the mapped read pairs in tumor samples with isogenic normal control samples in a distinct asymmetric manner. COSMOS also prioritizes the candidate SVs using strand-specific read-depth information. Performance tests on modeled tumor genomes revealed that COSMOS outperformed existing methods in terms of F-measure. We also applied COSMOS to an experimental mouse cell-based model, in which SVs were induced by genome engineering and gamma-ray irradiation, followed by polymerase chain reaction-based confirmation. The precision of COSMOS was 84.5%, while the next best existing method was 70.4%. Moreover, the sensitivity of COSMOS was the highest, indicating that COSMOS has great potential for cancer genome analysis. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  18. The Aggregate Exposure Pathway (AEP): A conceptual framework for advancing exposure science research and applications

    EPA Science Inventory

    Historically, risk assessment has relied upon toxicological data to obtain hazard-based reference levels, which are subsequently compared to exposure estimates to determine whether an unacceptable risk to public health may exist. Recent advances in analytical methods, biomarker ...

  19. Efficient design of CMOS TSC checkers

    NASA Technical Reports Server (NTRS)

    Biddappa, Anita; Shamanna, Manjunath K.; Maki, Gary; Whitaker, Sterling

    1990-01-01

    This paper considers the design of an efficient, robustly testable, CMOS Totally Self-Checking (TSC) Checker for k-out-of-2k codes. Most existing implementations use primitive gates and assume the single stuck-at fault model. The self-testing property has been found to fail for CMOS TSC checkers under the stuck-open fault model due to timing skews and arbitrary delays in the circuit. A new four level design using CMOS primitive gates (NAND, NOR, INVERTERS) is presented. This design retains its properties under the stuck-open fault model. Additionally, this method offers an impressive reduction (greater than 70 percent) in gate count, gate inputs, and test set size when compared to the existing method. This implementation is easily realizable and is based on Anderson's technique. A thorough comparative study has been made on the proposed implementation and Kundu's implementation and the results indicate that the proposed one is better than Kundu's in all respects for k-out-of-2k codes.

  20. Application of multivariable search techniques to the optimization of airfoils in a low speed nonlinear inviscid flow field

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Merz, A. W.

    1975-01-01

    Multivariable search techniques are applied to a particular class of airfoil optimization problems: the maximization of lift and the minimization of disturbance pressure magnitude in an inviscid nonlinear flow field. A variety of multivariable search techniques contained in an existing nonlinear optimization code, AESOP, are applied to this design problem. These techniques include elementary single-parameter perturbation methods, organized searches such as steepest-descent, quadratic, and Davidon methods, randomized procedures, and a generalized search acceleration technique. Airfoil design variables are seven in number and define perturbations to the profile of an existing NACA airfoil. The relative efficiency of the techniques is compared. It is shown that elementary one-parameter-at-a-time and random techniques compare favorably with organized searches in the class of problems considered. It is also shown that significant reductions in disturbance pressure magnitude can be made while retaining reasonable lift coefficient values at low free-stream Mach numbers.

  1. Efficient iterative method for solving the Dirac-Kohn-Sham density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Lin; Shao, Sihong; E, Weinan

    2012-11-06

    We present for the first time an efficient iterative method to directly solve the four-component Dirac-Kohn-Sham (DKS) density functional theory. Due to the existence of the negative energy continuum in the DKS operator, the existing iterative techniques for solving the Kohn-Sham systems cannot be efficiently applied to solve the DKS systems. The key component of our method is a novel filtering step (F) which acts as a preconditioner in the framework of the locally optimal block preconditioned conjugate gradient (LOBPCG) method. The resulting method, dubbed the LOBPCG-F method, is able to compute the desired eigenvalues and eigenvectors in the positive energy band without computing any state in the negative energy band. The LOBPCG-F method introduces mild extra cost compared to the standard LOBPCG method and can be easily implemented. We demonstrate our method in the pseudopotential framework with a planewave basis set which naturally satisfies the kinetic balance prescription. Numerical results for Pt$_2$, Au$_2$, TlF, and Bi$_2$Se$_3$ indicate that the LOBPCG-F method is a robust and efficient method for investigating the relativistic effect in systems containing heavy elements.
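
    To show where a filtering preconditioner plugs into LOBPCG, here is a minimal Python sketch using SciPy's lobpcg; the diagonal stand-in operator H and the inverse-diagonal "filter" M are toy assumptions, not the DKS operator or the authors' filter F:

        # Hedged sketch: LOBPCG with a user-supplied preconditioner M,
        # standing in for the filtering step of LOBPCG-F.
        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import LinearOperator, lobpcg

        n, k = 500, 6
        d = np.linspace(1.0, 10.0, n)
        H = diags(d)                                   # toy Hamiltonian
        M = LinearOperator((n, n), matvec=lambda v: v / d)  # toy preconditioner
        X = np.random.default_rng(0).standard_normal((n, k))  # initial block
        vals, vecs = lobpcg(H, X, M=M, tol=1e-8, largest=False, maxiter=200)
        print(vals)  # lowest k eigenvalues, found without touching other bands

    The point of the sketch is structural: a good M steers the block iteration toward the desired band, which is the role the abstract assigns to the filtering step F.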

  2. Toward cost-efficient sampling methods

    NASA Astrophysics Data System (ADS)

    Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie

    2015-09-01

    The sampling method has been paid much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small set of vertices with high node degree can possess most of the structural information of a complex network. The two proposed sampling methods are efficient in sampling high-degree nodes, so they remain useful even when the sampling rate is low, which makes them cost-efficient. The first new sampling method is developed on the basis of the widely used stratified random sampling (SRS) method, and the second improves the well-known snowball sampling (SBS) method. To demonstrate the validity and accuracy of the two new sampling methods, we compare them with existing sampling methods on three commonly used simulated networks (a scale-free network, a random network, and a small-world network) and on two real networks. The experimental results illustrate that the two proposed sampling methods perform much better than the existing sampling methods in recovering the true network structure characteristics reflected by the clustering coefficient, Bonacich centrality, and average path length, especially when the sampling rate is low.
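
    The degree-biased idea can be illustrated with a short Python sketch; the expansion rule (visit the highest-degree unvisited neighbours first) is an assumption for illustration, not the authors' exact algorithm:

        # Hedged sketch: a degree-biased snowball sample that favours
        # high-degree nodes, which carry most structural information.
        import networkx as nx

        def degree_biased_snowball(G, seed_node, sample_size):
            sampled, frontier = {seed_node}, [seed_node]
            while frontier and len(sampled) < sample_size:
                node = frontier.pop(0)
                # Expand neighbours in decreasing degree order.
                for nbr in sorted(G.neighbors(node), key=G.degree, reverse=True):
                    if nbr not in sampled:
                        sampled.add(nbr)
                        frontier.append(nbr)
                        if len(sampled) >= sample_size:
                            break
            return G.subgraph(sampled)

        G = nx.barabasi_albert_graph(1000, 3, seed=1)   # scale-free test network
        S = degree_biased_snowball(G, seed_node=0, sample_size=100)
        print(nx.average_clustering(S), nx.average_clustering(G))

    Comparing the clustering coefficient of the sample against the full network, as in the last line, mirrors the evaluation criterion used in the paper.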

  3. Delving into α-stable distribution in noise suppression for seizure detection from scalp EEG

    NASA Astrophysics Data System (ADS)

    Wang, Yueming; Qi, Yu; Wang, Yiwen; Lei, Zhen; Zheng, Xiaoxiang; Pan, Gang

    2016-10-01

    Objective. EEG contains serious noise caused by eye blinks and muscle activity. This noise exhibits morphologies similar to epileptic seizure signals, leading to relatively high false-alarm rates in most existing seizure detection methods. The objective of this paper is to develop an effective noise suppression method for seizure detection and to explore why it works. Approach. Based on a state-space model containing a non-linear observation function and multiple features as the observations, this paper examines the effect of the α-stable distribution on noise suppression in seizure detection from scalp EEG. Compared with the Gaussian distribution, the α-stable distribution is asymmetric and has relatively heavy tails. These properties make it more powerful for modeling impulsive noise in EEG, which usually cannot be handled by the Gaussian distribution. Specifically, we give a detailed analysis of the state estimation process to show why the α-stable distribution can suppress the impulsive noise. Main results. To justify each component of our model, we compare our method with four models with different settings on a collected 331-hour epileptic EEG dataset. To show the superiority of our method, we compare it with existing approaches on both our 331-hour data and 892 hours of public data. The results demonstrate that our method is the most effective in both detection rate and false-alarm rate. Significance. This is the first attempt to incorporate the α-stable distribution into a state-space model for noise suppression in seizure detection, and it achieves state-of-the-art performance.
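
    The heavy-tail property that does the work here is easy to see numerically; the following Python snippet is an illustration of the distributional claim only (the α value is an arbitrary choice), not the authors' state-space model:

        # Hedged illustration: alpha-stable samples have far heavier tails
        # than Gaussian samples, so impulsive artefacts are plausible under
        # the noise model rather than mistaken for signal.
        import numpy as np
        from scipy.stats import levy_stable, norm

        n = 100_000
        g = norm.rvs(size=n, random_state=0)
        s = levy_stable.rvs(1.5, 0.0, size=n, random_state=0)  # alpha=1.5, beta=0
        for label, x in [("gaussian", g), ("alpha-stable", s)]:
            print(label, "P(|X| > 5) =", np.mean(np.abs(x) > 5.0))

    Under the Gaussian, a 5-sigma excursion is essentially impossible, so any blink artefact forces a large state update; under the α-stable model the same excursion is unsurprising and the state estimate stays put.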

  4. An efficient graph theory based method to identify every minimal reaction set in a metabolic network

    PubMed Central

    2014-01-01

    Background Development of cells with minimal metabolic functionality is gaining importance due to their efficiency in producing chemicals and fuels. Existing computational methods to identify minimal reaction sets in metabolic networks are computationally expensive. Further, they identify only one of the several possible minimal reaction sets. Results In this paper, we propose an efficient graph-theory-based recursive optimization approach to identify all minimal reaction sets. Graph theoretical insights offer systematic methods not only to reduce the number of variables in the mathematical programming formulation and increase its computational efficiency, but also to find multiple optimal solutions efficiently. The efficacy of the proposed approach is demonstrated using case studies from Escherichia coli and Saccharomyces cerevisiae. In case study 1, the proposed method identified three minimal reaction sets, each containing 38 reactions, in the Escherichia coli central metabolic network with 77 reactions. Analysis of these three minimal reaction sets revealed that one of them is more suitable for developing a minimal-metabolism cell than the other two due to a practically achievable internal flux distribution. In case study 2, the proposed method identified 256 minimal reaction sets from the Saccharomyces cerevisiae genome-scale metabolic network with 620 reactions. The proposed method required only 4.5 hours to identify all 256 minimal reaction sets and showed a significant reduction (approximately 80%) in solution time compared to existing methods for finding minimal reaction sets. Conclusions Identification of all minimal reaction sets in metabolic networks is essential since different minimal reaction sets have different properties that affect bioprocess development. The proposed method correctly identified all minimal reaction sets in both case studies. It is computationally efficient compared to other methods for finding minimal reaction sets and is useful to employ with genome-scale metabolic networks. PMID:24594118

  5. Modified harmonic balance method for the solution of nonlinear jerk equations

    NASA Astrophysics Data System (ADS)

    Rahman, M. Saifur; Hasan, A. S. M. Z.

    2018-03-01

    In this paper, a second approximation to the solution of nonlinear jerk equations (third-order differential equations) is obtained using a modified harmonic balance method. The method is simpler and easier to apply than the classical harmonic balance method because fewer nonlinear equations must be solved. The results obtained from this method are compared with those obtained from other existing analytical methods available in the literature and with numerical results. The solution shows good agreement with the numerical solution as well as with the analytical methods of the available literature.
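
    For orientation, the harmonic balance setup for a generic jerk equation can be sketched as follows; the two-term cosine ansatz is an illustrative assumption rather than the exact expansion used in the paper:

        \dddot{x} + f\!\left(x, \dot{x}, \ddot{x}\right) = 0,
        \qquad
        x(t) \approx a_1 \cos\omega t + a_3 \cos 3\omega t .

    Substituting the ansatz and setting the coefficients of cos ωt and cos 3ωt to zero yields a small system of algebraic equations in (a_1, a_3, ω); the modification reported here reduces how many of these nonlinear equations must be solved simultaneously relative to the classical procedure.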

  6. Simulating changes to emergency care resources to compare system effectiveness.

    PubMed

    Branas, Charles C; Wolff, Catherine S; Williams, Justin; Margolis, Gregg; Carr, Brendan G

    2013-08-01

    To apply systems optimization methods to simulate and compare the most effective locations for emergency care resources as measured by access to care. This study was an optimization analysis of the locations of trauma centers (TCs), helicopter depots (HDs), and severely injured patients in need of time-critical care in select US states. Access was defined as the percentage of injured patients who could reach a level I/II TC within 45 or 60 minutes. Optimal locations were determined by a search algorithm that considered all candidate sites within a set of existing hospitals and airports in finding the best solutions that maximized access. Across a dozen states, existing access to TCs within 60 minutes ranged from 31.1% to 95.6%, with a mean of 71.5%. Access increased from 0.8% to 35.0% after optimal addition of one or two TCs. Access increased from 1.0% to 15.3% after optimal addition of one or two HDs. Relocation of TCs and HDs (optimal removal followed by optimal addition) produced similar results. Optimal changes to TCs produced greater increases in access to care than optimal changes to HDs although these results varied across states. Systems optimization methods can be used to compare the impacts of different resource configurations and their possible effects on access to care. These methods to determine optimal resource allocation can be applied to many domains, including comparative effectiveness and patient-centered outcomes research. Copyright © 2013 Elsevier Inc. All rights reserved.
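
    The abstract does not detail the search algorithm, so the Python sketch below uses a common greedy-add heuristic for coverage maximization as a stand-in; the reachability matrix and all sizes are synthetic:

        # Hedged sketch: greedily add candidate trauma-centre sites to
        # maximize the share of patients within the time threshold.
        import numpy as np

        def greedy_add(reach, covered, n_add):
            """reach[i, j] = True if candidate site j covers patient i."""
            chosen, covered = [], covered.copy()
            for _ in range(n_add):
                gains = (reach & ~covered[:, None]).sum(axis=0)  # new patients per site
                best = int(np.argmax(gains))
                chosen.append(best)
                covered |= reach[:, best]
            return chosen, covered.mean()  # picked sites, resulting access fraction

        rng = np.random.default_rng(2)
        reach = rng.random((5000, 40)) < 0.05     # toy 45-minute coverage matrix
        baseline = reach[:, :5].any(axis=1)       # existing five centres
        sites, access = greedy_add(reach[:, 5:], baseline, n_add=2)
        print(sites, access)

    Running the same routine with different candidate pools (hospitals for TCs, airports for HDs) is one simple way to reproduce the study's comparison of resource configurations.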

  7. Empirical Bayes method for reducing false discovery rates of correlation matrices with block diagonal structure.

    PubMed

    Pacini, Clare; Ajioka, James W; Micklem, Gos

    2017-04-12

    Correlation matrices are important in inferring relationships and networks between regulatory or signalling elements in biological systems. With currently available technology, sample sizes for experiments are typically small, meaning that these correlations can be difficult to estimate. At a genome-wide scale, estimation of correlation matrices can also be computationally demanding. We develop an empirical Bayes approach to improve covariance estimates for gene expression, where we assume the covariance matrix takes a block diagonal form. Our method shows lower false discovery rates than existing methods on simulated data. Applied to a real data set from Bacillus subtilis, we demonstrate its ability to detect known regulatory units and interactions between them. We demonstrate that, compared to existing methods, our method is able to find significant covariances and also to control false discovery rates, even when the sample size is small (n=10). The method can be used to find potential regulatory networks, and it may also be used as a pre-processing step for methods that calculate, for example, partial correlations, thereby enabling the inference of the causal and hierarchical structure of the networks.

  8. Identifying self-interstitials of bcc and fcc crystals in molecular dynamics

    NASA Astrophysics Data System (ADS)

    Bukkuru, S.; Bhardwaj, U.; Warrier, M.; Rao, A. D. P.; Valsakumar, M. C.

    2017-02-01

    Identification of self-interstitials in molecular dynamics (MD) simulations is of critical importance. Several criteria exist for identifying the self-interstitial. Most of the existing methods use an assumed cut-off value for the displacement of an atom from its lattice position to identify the self-interstitial, and the results obtained are affected by the chosen cut-off value. Moreover, these chosen cut-off values are independent of temperature. We have developed a novel unsupervised learning algorithm called Max-Space Clustering (MSC) to identify an appropriate cut-off value and its dependence on temperature. This method is compared with some widely used methods such as the effective sphere (ES) method and the nearest neighbor sphere (NNS) method. The cut-off radius obtained using our method shows a linear variation with temperature. The value of the cut-off radius and its temperature dependence is derived for five bcc (Cr, Fe, Mo, Nb, W) and six fcc (Ag, Au, Cu, Ni, Pd, Pt) crystals. The ratio of the cut-off value "r" to the lattice constant "a" lies between 0.23 and 0.3 at 300 K, and this ratio is on average smaller for the fcc crystals. Collision cascade simulations are carried out for primary knock-on atom (PKA) energies of 5 keV in Fe (at 300 K and 1000 K) and W (at 300 K and 2500 K), and the results are compared using the various methods.

  9. A new model-independent approach for finding the arrival direction of an extensive air shower

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hedayati, H. Kh., E-mail: hedayati@kntu.ac.ir

    2016-11-01

    A new accurate method for reconstructing the arrival direction of an extensive air shower (EAS) is described. Compared to existing methods, it does not rely on minimization of a function and is, therefore, fast and stable. The method also does not require detailed knowledge of the curvature or thickness structure of an EAS. It can achieve an angular resolution of about 1 degree for a typical surface array in central regions, and it has better angular resolution than other methods in the marginal areas of arrays.

  10. Propagation-based x-ray phase contrast imaging using an iterative phase diversity technique

    NASA Astrophysics Data System (ADS)

    Carroll, Aidan J.; van Riessen, Grant A.; Balaur, Eugeniu; Dolbnya, Igor P.; Tran, Giang N.; Peele, Andrew G.

    2018-03-01

    Through the use of a phase diversity technique, we demonstrate a near-field in-line x-ray phase contrast algorithm that provides improved object reconstruction when compared to our previous iterative methods for a homogeneous sample. Like our previous methods, the new technique uses the sample refractive index distribution during the reconstruction process. The technique complements existing monochromatic and polychromatic methods and is useful in situations where experimental phase contrast data is affected by noise.

  11. Optical Remote Sensing Method to Determine Strength of Non-point Sources

    DTIC Science & Technology

    2008-09-01

    site due to its location, which is convenient to both USEPA's RTP campus and the ARCADIS-Durham office. The site also has appropriate NPSs to measure that are of interest to regulators. ... Existing methodology for measuring NPSs is not directly comparable to the proposed PI-ORS method because the new method provides higher quality and ...

  12. Scoring Methods for Building Genotypic Scores: An Application to Didanosine Resistance in a Large Derivation Set

    PubMed Central

    Houssaini, Allal; Assoumou, Lambert; Miller, Veronica; Calvez, Vincent; Marcelin, Anne-Geneviève; Flandre, Philippe

    2013-01-01

    Background Several attempts have been made to determine HIV-1 resistance from genotype resistance testing. We compare scoring methods for building weighted genotyping scores and commonly used systems to determine whether the virus of a HIV-infected patient is resistant. Methods and Principal Findings Three statistical methods (linear discriminant analysis, support vector machine and logistic regression) are used to determine the weight of mutations involved in HIV resistance. We compared these weighted scores with known interpretation systems (ANRS, REGA and Stanford HIV-db) to classify patients as resistant or not. Our methodology is illustrated on the Forum for Collaborative HIV Research didanosine database (N = 1453). The database was divided into four samples according to the country of enrolment (France, USA/Canada, Italy and Spain/UK/Switzerland). The total sample and the four country-based samples allow external validation (one sample is used to estimate a score and the other samples are used to validate it). We used the observed precision to compare the performance of newly derived scores with other interpretation systems. Our results show that newly derived scores performed better than or similar to existing interpretation systems, even with external validation sets. No difference was found between the three methods investigated. Our analysis identified four new mutations associated with didanosine resistance: D123S, Q207K, H208Y and K223Q. Conclusions We explored the potential of three statistical methods to construct weighted scores for didanosine resistance. Our proposed scores performed at least as well as already existing interpretation systems and previously unrecognized didanosine-resistance associated mutations were identified. This approach could be used for building scores of genotypic resistance to other antiretroviral drugs. PMID:23555613
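
    One of the three scoring routes (logistic regression) is easy to sketch in Python; the mutation matrix, labels, and weights below are synthetic stand-ins, not the Forum database or the paper's fitted coefficients:

        # Hedged sketch: derive per-mutation weights by logistic regression
        # and combine them into a weighted genotypic resistance score.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(3)
        X = rng.integers(0, 2, size=(1453, 30))          # 30 candidate mutations
        true_w = np.zeros(30)
        true_w[[0, 5, 9]] = [1.5, 1.0, 2.0]              # toy resistance mutations
        y = (X @ true_w + rng.normal(0, 1, 1453)) > 1.0  # synthetic labels

        model = LogisticRegression(max_iter=1000).fit(X, y)
        weights = model.coef_.ravel()                    # per-mutation weights
        score = X @ weights                              # weighted genotypic score
        print(np.round(weights[:10], 2))

    Thresholding the score then classifies a virus as resistant or susceptible, which is the quantity compared against the ANRS, REGA, and Stanford HIV-db calls.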

  13. Quantification of polyhydroxyalkanoates in mixed and pure cultures biomass by Fourier transform infrared spectroscopy: comparison of different approaches.

    PubMed

    Isak, I; Patel, M; Riddell, M; West, M; Bowers, T; Wijeyekoon, S; Lloyd, J

    2016-08-01

    Fourier transform infrared (FTIR) spectroscopy was used in this study for the rapid quantification of polyhydroxyalkanoates (PHA) in mixed and pure culture bacterial biomass. Three different statistical analysis methods (regression, partial least squares (PLS) and nonlinear) were applied to the FTIR data and the results were plotted against the PHA values measured with the reference gas chromatography technique. All methods predicted PHA content in mixed culture biomass with comparable efficiency, indicated by similar residual values. The PHA in these cultures ranged from low to medium concentration (0-44 wt% of dried biomass content). However, for the analysis of the combined mixed and pure culture biomass, with PHA concentration ranging from low to high (0-93% of dried biomass content), the PLS method was most efficient. This paper reports, for the first time, the use of a single calibration model constructed with a combination of mixed and pure cultures covering a wide PHA range for predicting PHA content in biomass. Currently, no universal method exists for processing FTIR data for polyhydroxyalkanoate quantification. This study compares three different methods of analysing FTIR data for quantification of PHAs in biomass. A new data-processing approach was proposed and the results were compared against existing literature methods. Most publications report quantification of medium-range PHA content in pure cultures. In our study, however, we encompassed both mixed and pure culture biomass containing a broader range of PHA in the calibration curve. The resulting prediction model is useful for rapid quantification of a wider range of PHA content in biomass. © 2016 The Society for Applied Microbiology.
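
    The PLS calibration route can be sketched in a few lines of Python; the synthetic spectra (a single carbonyl-like band at 1728 cm^-1) and the number of latent components are assumptions made for illustration:

        # Hedged sketch: PLS calibration of PHA content against FTIR spectra.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(4)
        pha = rng.uniform(0, 93, 120)                        # PHA wt% (0-93%)
        wavenumbers = np.linspace(600, 4000, 400)
        peak = np.exp(-0.5 * ((wavenumbers - 1728) / 15) ** 2)  # carbonyl band
        spectra = np.outer(pha, peak) + rng.normal(0, 0.5, (120, 400))

        X_tr, X_te, y_tr, y_te = train_test_split(spectra, pha, random_state=0)
        pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
        print("R^2 on held-out spectra:", round(pls.score(X_te, y_te), 3))

    In practice the reference y values would come from gas chromatography, exactly as the abstract describes for the calibration set.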

  14. Analyzing medical costs with time-dependent treatment: The nested g-formula.

    PubMed

    Spieker, Andrew; Roy, Jason; Mitra, Nandita

    2018-04-16

    As medical expenses continue to rise, methods to properly analyze cost outcomes are becoming of increasing relevance when seeking to compare average costs across treatments. Inverse probability weighted regression models have been developed to address the challenge of cost censoring in order to identify intent-to-treat effects (i.e., to compare mean costs between groups on the basis of their initial treatment assignment, irrespective of any subsequent changes to their treatment status). In this paper, we describe a nested g-computation procedure that can be used to compare mean costs between two or more time-varying treatment regimes. We highlight the relative advantages and limitations of this approach when compared with existing regression-based models. We illustrate the utility of this approach as a means to inform public policy by applying it to a simulated data example motivated by costs associated with cancer treatments. Simulations confirm that inference regarding intent-to-treat effects versus the joint causal effects estimated by the nested g-formula can lead to markedly different conclusions regarding differential costs. Therefore, it is essential to prespecify the desired target of inference when choosing between these two frameworks. The nested g-formula should be considered as a useful, complementary tool to existing methods when analyzing cost outcomes. Copyright © 2018 John Wiley & Sons, Ltd.

  15. A Comparative Approach for Ranking Contaminated Sites Based on the Risk Assessment Paradigm Using Fuzzy PROMETHEE

    NASA Astrophysics Data System (ADS)

    Zhang, Kejiang; Kluck, Cheryl; Achari, Gopal

    2009-11-01

    A ranking system for contaminated sites based on comparative risk methodology using fuzzy Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE) was developed in this article. It combines the concepts of fuzzy sets to represent uncertain site information with the PROMETHEE, a subgroup of Multi-Criteria Decision Making (MCDM) methods. Criteria are identified based on a combination of the attributes (toxicity, exposure, and receptors) associated with the potential human health and ecological risks posed by contaminated sites, chemical properties, site geology and hydrogeology and contaminant transport phenomena. Original site data are directly used avoiding the subjective assignment of scores to site attributes. When the input data are numeric and crisp the PROMETHEE method can be used. The Fuzzy PROMETHEE method is preferred when substantial uncertainties and subjectivities exist in site information. The PROMETHEE and fuzzy PROMETHEE methods are both used in this research to compare the sites. The case study shows that this methodology provides reasonable results.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    I. W. Ginsberg

    Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method, and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.
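
    The two fingerprinting routes being compared can be reproduced in miniature with Python; the Mexican-hat wavelet, the scale grid, and the random-walk stand-in signature are illustrative choices, not the study's exact configuration:

        # Hedged sketch: first-derivative-of-Gaussian convolution versus a
        # continuous wavelet transform for multiresolution fingerprints.
        import numpy as np
        import pywt
        from scipy.ndimage import gaussian_filter1d

        rng = np.random.default_rng(5)
        signature = np.cumsum(rng.normal(size=512))   # stand-in spectral signature
        scales = [2, 4, 8, 16]

        # Current method: convolution with first-derivative Gaussian filters.
        fp_gauss = np.stack([gaussian_filter1d(signature, s, order=1) for s in scales])

        # Wavelet-based method: wavelet transform at matching scales.
        fp_wave, _ = pywt.cwt(signature, scales, "mexh")

        print(fp_gauss.shape, fp_wave.shape)          # multiresolution fingerprints

    Timing the two branches over a database of signatures is the natural way to reproduce the factor-of-30 cost comparison reported above.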

  17. A comparative approach for ranking contaminated sites based on the risk assessment paradigm using fuzzy PROMETHEE.

    PubMed

    Zhang, Kejiang; Kluck, Cheryl; Achari, Gopal

    2009-11-01

    A ranking system for contaminated sites based on comparative risk methodology using fuzzy Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE) was developed in this article. It combines the concepts of fuzzy sets to represent uncertain site information with the PROMETHEE, a subgroup of Multi-Criteria Decision Making (MCDM) methods. Criteria are identified based on a combination of the attributes (toxicity, exposure, and receptors) associated with the potential human health and ecological risks posed by contaminated sites, chemical properties, site geology and hydrogeology and contaminant transport phenomena. Original site data are directly used avoiding the subjective assignment of scores to site attributes. When the input data are numeric and crisp the PROMETHEE method can be used. The Fuzzy PROMETHEE method is preferred when substantial uncertainties and subjectivities exist in site information. The PROMETHEE and fuzzy PROMETHEE methods are both used in this research to compare the sites. The case study shows that this methodology provides reasonable results.

  18. Shape-from-focus by tensor voting.

    PubMed

    Hariharan, R; Rajagopalan, A N

    2012-07-01

    In this correspondence, we address the task of recovering shape-from-focus (SFF) as a perceptual organization problem in 3-D. Using tensor voting, depth hypotheses from different focus operators are validated based on their likelihood to be part of a coherent 3-D surface, thereby exploiting scene geometry and focus information to generate reliable depth estimates. The proposed method is fast and yields significantly better results compared with existing SFF methods.

  19. Simultaneous Nonrigid Registration, Segmentation, and Tumor Detection in MRI Guided Cervical Cancer Radiation Therapy

    PubMed Central

    Lu, Chao; Chelikani, Sudhakar; Jaffray, David A.; Milosevic, Michael F.; Staib, Lawrence H.; Duncan, James S.

    2013-01-01

    External beam radiation therapy (EBRT) for the treatment of cancer enables accurate placement of radiation dose on the cancerous region. However, the deformation of soft tissue during the course of treatment, such as in cervical cancer, presents significant challenges for the delineation of the target volume and other structures of interest. Furthermore, the presence and regression of pathologies such as tumors may violate registration constraints and cause registration errors. In this paper, automatic segmentation, nonrigid registration and tumor detection in cervical magnetic resonance (MR) data are addressed simultaneously using a unified Bayesian framework. The proposed novel method can generate a tumor probability map while progressively identifying the boundary of an organ of interest based on the achieved nonrigid transformation. The method is able to handle the challenges of significant tumor regression and its effect on surrounding tissues. The new method was compared to various currently existing algorithms on a set of 36 MR data from six patients, each patient has six T2-weighted MR cervical images. The results show that the proposed approach achieves an accuracy comparable to manual segmentation and it significantly outperforms the existing registration algorithms. In addition, the tumor detection result generated by the proposed method has a high agreement with manual delineation by a qualified clinician. PMID:22328178

  20. Proportional Topology Optimization: A New Non-Sensitivity Method for Solving Stress Constrained and Minimum Compliance Problems and Its Implementation in MATLAB

    PubMed Central

    Biyikli, Emre; To, Albert C.

    2015-01-01

    A new topology optimization method called the Proportional Topology Optimization (PTO) is presented. As a non-sensitivity method, PTO is simple to understand, easy to implement, and is also efficient and accurate at the same time. It is implemented into two MATLAB programs to solve the stress constrained and minimum compliance problems. Descriptions of the algorithm and computer programs are provided in detail. The method is applied to solve three numerical examples for both types of problems. The method shows comparable efficiency and accuracy with an existing optimality criteria method which computes sensitivities. Also, the PTO stress constrained algorithm and minimum compliance algorithm are compared by feeding output from one algorithm to the other in an alternative manner, where the former yields lower maximum stress and volume fraction but higher compliance compared to the latter. Advantages and disadvantages of the proposed method and future works are discussed. The computer programs are self-contained and publicly shared in the website www.ptomethod.org. PMID:26678849

  1. Gaussian mixture model based identification of arterial wall movement for computation of distension waveform.

    PubMed

    Patil, Ravindra B; Krishnamoorthy, P; Sethuraman, Shriram

    2015-01-01

    This work proposes a novel Gaussian Mixture Model (GMM) based approach for accurate tracking of the arterial wall and subsequent computation of the distension waveform using radio frequency (RF) ultrasound signals. The approach was evaluated on ultrasound RF data acquired from an artery-mimicking flow phantom using a prototype ultrasound system. The effectiveness of the proposed algorithm is demonstrated by comparison with existing wall tracking algorithms. The experimental results show that the proposed method provides a 20% reduction in the error margin compared to existing approaches in tracking arterial wall movement. This approach, coupled with an ultrasound system, can be used to estimate the arterial compliance parameters required for screening of cardiovascular disorders.
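
    A minimal Python sketch of the GMM idea follows; the simulated echo depths and the rule that the tightest mixture component is the wall are assumptions for illustration, not the paper's full tracking pipeline:

        # Hedged sketch: fit a two-component GMM to strong-echo depths so
        # the wall component can be located and tracked frame to frame.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(6)
        lumen = rng.normal(10.0, 3.0, 300)    # diffuse speckle depths (mm)
        wall = rng.normal(22.0, 0.4, 200)     # tight cluster at the wall
        depths = np.concatenate([lumen, wall]).reshape(-1, 1)

        gmm = GaussianMixture(n_components=2, random_state=0).fit(depths)
        wall_idx = int(np.argmin(gmm.covariances_.ravel()))  # tightest cluster
        print("estimated wall depth:", round(float(gmm.means_[wall_idx, 0]), 2), "mm")

    Repeating the fit on successive frames and differencing the wall-component mean yields a distension waveform estimate.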

  2. Cost-sensitive AdaBoost algorithm for ordinal regression based on extreme learning machine.

    PubMed

    Riccardi, Annalisa; Fernández-Navarro, Francisco; Carloni, Sante

    2014-10-01

    In this paper, the well-known stagewise additive modeling using a multiclass exponential (SAMME) boosting algorithm is extended to address problems where there exists a natural order in the targets, using a cost-sensitive approach. The proposed ensemble model uses an extreme learning machine (ELM) model as a base classifier (with the Gaussian kernel and the additional regularization parameter). The closed form of the derived weighted least squares problem is provided, and it is employed to estimate analytically the parameters connecting the hidden layer to the output layer at each iteration of the boosting algorithm. Compared to the state-of-the-art boosting algorithms, in particular those using ELM as the base classifier, the suggested technique does not require the generation of a new training dataset at each iteration. The adoption of the weighted least squares formulation of the problem is presented as an unbiased alternative to the existing ELM boosting techniques. Moreover, the addition of a cost model for weighting the patterns according to the order of the targets further enables the classifier to tackle ordinal regression problems. The proposed method has been validated in an experimental study by comparing it with existing ensemble methods and ELM techniques for ordinal regression, showing competitive results.

  3. The Rapid Benefit Indicators (RBI) Approach: A Process for Assessing the Social Benefits of Ecological Restoration

    EPA Science Inventory

    Environmental managers face difficult decisions about allocating resources to the most beneficial projects. Focusing solely on ecological outcomes can lead to missed opportunities to provide social benefits, yet few methods exist to easily compare the social benefits of ecological restoration.

  4. A diagram retrieval method with multi-label learning

    NASA Astrophysics Data System (ADS)

    Fu, Songping; Lu, Xiaoqing; Liu, Lu; Qu, Jingwei; Tang, Zhi

    2015-01-01

    In recent years, the retrieval of plane geometry figures (PGFs) has attracted increasing attention in the fields of mathematics education and computer science. However, the high cost of matching complex PGF features leads to the low efficiency of most retrieval systems. This paper proposes an indirect classification method based on multi-label learning, which improves retrieval efficiency by reducing the scope of the comparison operation from the whole database to small candidate groups. Label correlations among PGFs are taken into account for the multi-label classification task. The primitive feature selection for multi-label learning and the feature description of visual geometric elements are conducted individually to match similar PGFs. The experimental results show the competitive performance of the proposed method compared with existing PGF retrieval methods in terms of both time consumption and retrieval quality.

  5. Quantitative phase imaging method based on an analytical nonparaxial partially coherent phase optical transfer function.

    PubMed

    Bao, Yijun; Gaylord, Thomas K

    2016-11-01

    Multifilter phase imaging with partially coherent light (MFPI-PC) is a promising new quantitative phase imaging method. However, the existing MFPI-PC method is based on the paraxial approximation. In the present work, an analytical nonparaxial partially coherent phase optical transfer function is derived. This enables the MFPI-PC to be extended to the realistic nonparaxial case. Simulations over a wide range of test phase objects as well as experimental measurements on a microlens array verify higher levels of imaging accuracy compared to the paraxial method. Unlike the paraxial version, the nonparaxial MFPI-PC with obliquity factor correction exhibits no systematic error. In addition, due to its analytical expression, the increase in computation time compared to the paraxial version is negligible.

  6. QQ-SNV: single nucleotide variant detection at low frequency by comparing the quality quantiles.

    PubMed

    Van der Borght, Koen; Thys, Kim; Wetzels, Yves; Clement, Lieven; Verbist, Bie; Reumers, Joke; van Vlijmen, Herman; Aerssens, Jeroen

    2015-11-10

    Next generation sequencing enables studying heterogeneous populations of viral infections. When the sequencing is done at high coverage depth ("deep sequencing"), low frequency variants can be detected. Here we present QQ-SNV (http://sourceforge.net/projects/qqsnv), a logistic regression classifier model developed for the Illumina sequencing platforms that uses the quantiles of the quality scores to distinguish true single nucleotide variants from sequencing errors based on the estimated SNV probability. To train the model, we created a dataset of an in silico mixture of five HIV-1 plasmids. Testing of our method in comparison to the existing methods LoFreq, ShoRAH, and V-Phaser 2 was performed on two HIV and four HCV plasmid mixture datasets and one influenza H1N1 clinical dataset. For default application of QQ-SNV, variants were called using a SNV probability cutoff of 0.5 (QQ-SNV(D)). To improve the sensitivity we used a SNV probability cutoff of 0.0001 (QQ-SNV(HS)). To also increase specificity, SNVs called were overruled when their frequency was below the 80th percentile calculated on the distribution of error frequencies (QQ-SNV(HS-P80)). When comparing QQ-SNV with the other methods on the plasmid mixture test sets, QQ-SNV(D) performed similarly to the existing approaches. QQ-SNV(HS) was more sensitive on all test sets but produced more false positives. QQ-SNV(HS-P80) was found to be the most accurate method over all test sets by balancing sensitivity and specificity. When applied to a paired-end HCV sequencing study, with a lowest spiked-in true frequency of 0.5%, QQ-SNV(HS-P80) achieved a sensitivity of 100% (vs. 40-60% for the existing methods) and a specificity of 100% (vs. 98.0-99.7% for the existing methods). In addition, QQ-SNV required the least overall computation time to process the test sets. Finally, when testing on a clinical sample, four putative true variants with frequency below 0.5% were consistently detected by QQ-SNV(HS-P80) from different generations of Illumina sequencers. We developed and successfully evaluated a novel method, called QQ-SNV, for highly efficient single nucleotide variant calling on Illumina deep sequencing virology data.
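
    The core feature construction (quality-score quantiles fed to a logistic classifier) is simple to sketch in Python; the quantile grid and the synthetic quality distributions below are stand-ins, not the trained QQ-SNV model:

        # Hedged sketch: summarize per-position base qualities by their
        # quantiles and classify variant vs. sequencing error.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(7)
        qs = np.linspace(0.1, 0.9, 9)               # quality quantiles as features

        def quantile_features(quality_scores):
            return np.quantile(quality_scores, qs)

        # True variants tend to keep high qualities; errors show degraded ones.
        X = np.array([quantile_features(rng.normal(36, 2, 200)) for _ in range(300)] +
                     [quantile_features(rng.normal(28, 6, 200)) for _ in range(300)])
        y = np.array([1] * 300 + [0] * 300)

        clf = LogisticRegression(max_iter=1000).fit(X, y)
        p_snv = clf.predict_proba(X[:1])[0, 1]      # estimated SNV probability
        print("call variant:", p_snv >= 0.5)        # QQ-SNV(D)-style 0.5 cutoff

    Lowering the cutoff to 0.0001 and then overruling calls below the 80th percentile of error frequencies reproduces, in miniature, the QQ-SNV(HS) and QQ-SNV(HS-P80) variants described above.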

  7. Robust rotational-velocity-Verlet integration methods.

    PubMed

    Rozmanov, Dmitri; Kusalik, Peter G

    2010-05-01

    Two rotational integration algorithms for rigid-body dynamics are proposed in velocity-Verlet formulation. The first method uses quaternion dynamics and was derived from the original rotational leap-frog method by Svanberg [Mol. Phys. 92, 1085 (1997)]; it produces time consistent positions and momenta. The second method is also formulated in terms of quaternions but it is not quaternion specific and can be easily adapted for any other orientational representation. Both the methods are tested extensively and compared to existing rotational integrators. The proposed integrators demonstrated performance at least at the level of previously reported rotational algorithms. The choice of simulation parameters is also discussed.
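
    For readers unfamiliar with quaternion integrators, a single rotational velocity-Verlet-style step is sketched below in Python; the exponential-map drift is a common construction and not necessarily the exact recursion of either proposed integrator, and the gyroscopic (Euler) coupling term is omitted for brevity:

        # Hedged sketch: one quaternion kick-drift step for rigid-body rotation.
        import numpy as np

        def quat_mul(a, b):
            w1, x1, y1, z1 = a
            w2, x2, y2, z2 = b
            return np.array([
                w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2,
            ])

        def rotate_step(q, omega_body, torque_body, inertia, dt):
            """Advance orientation q and body angular velocity by one step."""
            # Half-step velocity update (kick), as in velocity-Verlet.
            omega_half = omega_body + 0.5 * dt * torque_body / inertia
            # Drift: rotate q by the exponential map of omega_half over dt.
            angle = np.linalg.norm(omega_half) * dt
            axis = omega_half / max(np.linalg.norm(omega_half), 1e-12)
            dq = np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])
            q_new = quat_mul(q, dq)
            return q_new / np.linalg.norm(q_new), omega_half  # re-normalize q

    A second half-kick with the torque evaluated at the new orientation completes the step, mirroring the position/momentum symmetry that makes velocity-Verlet formulations time consistent.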

  8. Robust rotational-velocity-Verlet integration methods

    NASA Astrophysics Data System (ADS)

    Rozmanov, Dmitri; Kusalik, Peter G.

    2010-05-01

    Two rotational integration algorithms for rigid-body dynamics are proposed in velocity-Verlet formulation. The first method uses quaternion dynamics and was derived from the original rotational leap-frog method by Svanberg [Mol. Phys. 92, 1085 (1997)]; it produces time consistent positions and momenta. The second method is also formulated in terms of quaternions but it is not quaternion specific and can be easily adapted for any other orientational representation. Both the methods are tested extensively and compared to existing rotational integrators. The proposed integrators demonstrated performance at least at the level of previously reported rotational algorithms. The choice of simulation parameters is also discussed.

  9. A method of power analysis based on piecewise discrete Fourier transform

    NASA Astrophysics Data System (ADS)

    Xin, Miaomiao; Zhang, Yanchi; Xie, Da

    2018-04-01

    The paper analyzes existing feature extraction methods, examining the characteristics of the discrete Fourier transform and of piecewise aggregation approximation. Combining the advantages of the two methods, a new piecewise discrete Fourier transform is proposed and used to analyze the lighting power of a large customer. Time series feature maps for four different cases are compared across the original data, the discrete Fourier transform, piecewise aggregation approximation, and the piecewise discrete Fourier transform. The new method reflects both the overall trend of electricity-consumption change and its internal variations.
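
    A minimal Python sketch of a piecewise discrete Fourier transform follows; the segment count and number of retained coefficients are assumptions, since the paper's parameter choices are not given in the abstract:

        # Hedged sketch: split the load series into equal segments and keep a
        # few leading DFT magnitudes per segment, so both local and global
        # variation survive in a compact feature vector.
        import numpy as np

        def piecewise_dft(series, n_segments=8, n_coeffs=4):
            segments = np.array_split(np.asarray(series, dtype=float), n_segments)
            features = []
            for seg in segments:
                spectrum = np.fft.rfft(seg)            # DFT of this time window
                features.extend(np.abs(spectrum[:n_coeffs]))
            return np.array(features)                  # per-segment signature

        rng = np.random.default_rng(8)
        load = np.sin(np.linspace(0, 20 * np.pi, 960)) + 0.1 * rng.normal(size=960)
        print(piecewise_dft(load).shape)               # (8 segments x 4 coeffs,)

    Pure piecewise aggregation would keep only segment means (losing periodicity), while a whole-series DFT would lose locality; keeping a few coefficients per segment is the compromise the method exploits.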

  10. Analysis of Bonded Joints Between the Facesheet and Flange of Corrugated Composite Panels

    NASA Technical Reports Server (NTRS)

    Yarrington, Phillip W.; Collier, Craig S.; Bednarcyk, Brett A.

    2008-01-01

    This paper outlines a method for the stress analysis of bonded composite corrugated panel facesheet to flange joints. The method relies on the existing HyperSizer Joints software, which analyzes the bonded joint, along with a beam analogy model that provides the necessary boundary loading conditions to the joint analysis. The method is capable of predicting the full multiaxial stress and strain fields within the flange to facesheet joint and thus can determine ply-level margins and evaluate delamination. Results comparing the method to NASTRAN finite element model stress fields are provided illustrating the accuracy of the method.

  11. High-dose-rate prostate brachytherapy inverse planning on dose-volume criteria by simulated annealing.

    PubMed

    Deist, T M; Gorissen, B L

    2016-02-07

    High-dose-rate brachytherapy is a tumor treatment method where a highly radioactive source is brought in close proximity to the tumor. In this paper we develop a simulated annealing algorithm to optimize the dwell times at preselected dwell positions to maximize tumor coverage under dose-volume constraints on the organs at risk. Compared to existing algorithms, our algorithm has advantages in terms of speed and objective value and does not require an expensive general purpose solver. Its success mainly depends on exploiting the efficiency of matrix multiplication and a careful selection of the neighboring states. In this paper we outline its details and make an in-depth comparison with existing methods using real patient data.
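
    The annealing loop itself is compact; the Python sketch below uses a toy quadratic dose objective, a single-dwell-time perturbation move, and geometric cooling as illustrative stand-ins for the paper's clinical dose-volume formulation:

        # Hedged sketch: simulated annealing over HDR dwell times.
        import numpy as np

        rng = np.random.default_rng(9)
        D = rng.random((200, 30))                  # dose per unit dwell time
        target = np.full(200, 10.0)                # prescribed voxel doses

        def objective(t):
            return np.sum((D @ t - target) ** 2)   # penalize under/over-dosage

        t = np.full(30, 1.0)
        best, T = objective(t), 1.0
        for step in range(20_000):
            cand = t.copy()
            i = rng.integers(30)
            cand[i] = max(0.0, cand[i] + rng.normal(0, 0.1))  # perturb one dwell
            delta = objective(cand) - best
            if delta < 0 or rng.random() < np.exp(-delta / T):
                t, best = cand, best + delta       # accept (possibly uphill) move
            T *= 0.9995                            # geometric cooling
        print(round(best, 2))

    Because each candidate differs from the incumbent in one dwell time, the dose update D @ t can be computed incrementally as a single matrix column scaled by the change, which is the matrix-multiplication efficiency the abstract alludes to.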

  12. Application of multi-agent coordination methods to the design of space debris mitigation tours

    NASA Astrophysics Data System (ADS)

    Stuart, Jeffrey; Howell, Kathleen; Wilson, Roby

    2016-04-01

    The growth in the number of defunct and fragmented objects near the Earth poses a growing hazard to launch operations as well as existing on-orbit assets. Numerous studies have demonstrated the positive impact of active debris mitigation campaigns upon the growth of debris populations, but comparatively few investigations incorporate specific mission scenarios. Furthermore, while many active mitigation methods have been proposed, certain classes of debris objects are amenable to mitigation campaigns employing chaser spacecraft with existing chemical and low-thrust propulsive technologies. This investigation incorporates an ant colony optimization routing algorithm and multi-agent coordination via auctions into a debris mitigation tour scheme suitable for preliminary mission design and analysis as well as spacecraft flight operations.

  13. Velocity profile, water-surface slope, and bed-material size for selected streams in Colorado

    USGS Publications Warehouse

    Marchand, J.P.; Jarrett, R.D.; Jones, L.L.

    1984-01-01

    Existing methods for determining the mean velocity in a vertical sampling section do not address the conditions present in high-gradient, shallow-depth streams common to mountainous regions such as Colorado. The report presents velocity-profile data that were collected for 11 streamflow-gaging stations in Colorado using both a standard Price type AA current meter and a prototype Price Model PAA current meter. Computational results are compiled that will enable mean velocities calculated from measurements by the two current meters to be compared with each other and with existing methods for determining mean velocity. Water-surface slope, bed-material size, and flow-characteristic data for the 11 sites studied also are presented. (USGS)

  14. Guided SAR image despeckling with probabilistic non local weights

    NASA Astrophysics Data System (ADS)

    Gokul, Jithin; Nair, Madhu S.; Rajan, Jeny

    2017-12-01

    SAR images are generally corrupted by granular disturbances called speckle, which makes visual analysis and detail extraction a difficult task. Non Local despeckling techniques with probabilistic similarity have been a recent trend in SAR despeckling. To achieve effective speckle suppression without compromising detail preservation, we propose an improvement to the existing Generalized Guided Filter with Bayesian Non-Local Means (GGF-BNLM) method. The proposed method (Guided SAR Image Despeckling with Probabilistic Non Local Weights) replaces parametric constants based on heuristics in the GGF-BNLM method with dynamically derived values based on the image statistics for weight computation. The proposed changes make the GGF-BNLM method adaptive and, as a result, significant improvement is achieved in terms of performance. Experimental analysis on SAR images shows excellent speckle reduction without compromising feature preservation when compared to the GGF-BNLM method. Results are also compared with other state-of-the-art and classic SAR despeckling techniques to demonstrate the effectiveness of the proposed method.

  15. Pairing field methods to improve inference in wildlife surveys while accommodating detection covariance

    USGS Publications Warehouse

    Clare, John; McKinney, Shawn T.; DePue, John E.; Loftin, Cynthia S.

    2017-01-01

    It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture–recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters.

  16. Novel Fourier-based iterative reconstruction for sparse fan projection using alternating direction total variation minimization

    NASA Astrophysics Data System (ADS)

    Zhao, Jin; Han-Ming, Zhang; Bin, Yan; Lei, Li; Lin-Yuan, Wang; Ai-Long, Cai

    2016-03-01

    Sparse-view x-ray computed tomography (CT) imaging is an interesting topic in the CT field and can efficiently decrease radiation dose. Compared with spatial reconstruction, a Fourier-based algorithm has advantages in reconstruction speed and memory usage. A novel Fourier-based iterative reconstruction technique that utilizes the non-uniform fast Fourier transform (NUFFT) is presented in this work along with advanced total variation (TV) regularization for fan sparse-view CT. The proposition of a selective matrix contributes to improved reconstruction quality. The new method employs the NUFFT and its adjoint to iterate back and forth between the Fourier and image spaces. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. Results of the proposed algorithm are compared with those of existing TV-regularized techniques based on the compressed sensing method, as well as the basic algebraic reconstruction technique. Compared with the existing TV-regularized techniques, the proposed Fourier-based technique significantly improves the convergence rate and reduces memory allocation. Project supported by the National High Technology Research and Development Program of China (Grant No. 2012AA011603) and the National Natural Science Foundation of China (Grant No. 61372172).

  17. High-Order Automatic Differentiation of Unmodified Linear Algebra Routines via Nilpotent Matrices

    NASA Astrophysics Data System (ADS)

    Dunham, Benjamin Z.

    This work presents a new automatic differentiation method, Nilpotent Matrix Differentiation (NMD), capable of propagating any order of mixed or univariate derivative through common linear algebra functions--most notably third-party sparse solvers and decomposition routines, in addition to basic matrix arithmetic operations and power series--without changing data-type or modifying code line by line; this allows differentiation across sequences of arbitrarily many such functions with minimal implementation effort. NMD works by enlarging the matrices and vectors passed to the routines, replacing each original scalar with a matrix block augmented by derivative data; these blocks are constructed with special sparsity structures, termed "stencils," each designed to be isomorphic to a particular multidimensional hypercomplex algebra. The algebras are in turn designed such that Taylor expansions of hypercomplex function evaluations are finite in length and thus exactly track derivatives without approximation error. Although this use of the method in the "forward mode" is unique in its own right, it is also possible to apply it to existing implementations of the (first-order) discrete adjoint method to find high-order derivatives with lowered cost complexity; for example, for a problem with N inputs and an adjoint solver whose cost is independent of N--i.e., O(1)--the N x N Hessian can be found in O(N) time, which is comparable to existing second-order adjoint methods that require far more problem-specific implementation effort. Higher derivatives are likewise less expensive--e.g., an N x N x N rank-three tensor can be found in O(N²) time. Alternatively, a Hessian-vector product can be found in O(1) time, which may open up many matrix-based simulations to a range of existing optimization or surrogate modeling approaches. As a final corollary in parallel to the NMD-adjoint hybrid method, the existing complex-step differentiation (CD) technique is also shown to be capable of finding the Hessian-vector product. All variants are implemented on a stochastic diffusion problem and compared in-depth with various cost and accuracy metrics.
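
    The core trick is easy to demonstrate in miniature. Below, a scalar and its first derivative are packed into a 2 x 2 block whose off-diagonal entry is nilpotent, a first-order univariate instance of the stencils described above; ordinary matrix arithmetic then propagates the derivative exactly. NMD generalizes this idea to higher orders, mixed derivatives and third-party solver calls, none of which this toy reproduces.

```python
import numpy as np

def lift(value, deriv):
    """Embed a scalar and its derivative as the 2x2 block [[v, d], [0, v]].

    The off-diagonal entry is nilpotent (it squares to zero), so plain
    matrix arithmetic carries first derivatives exactly; this is a
    minimal univariate instance of the stencil idea.
    """
    return np.array([[value, deriv],
                     [0.0,   value]])

x = lift(3.0, 1.0)        # the variable x = 3 with dx/dx = 1
y = x @ x @ x             # evaluate x**3 using only matrix products
print(y[0, 0], y[0, 1])   # 27.0 and 27.0 = value and d/dx of x**3 at x = 3
```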

  18. Generating Virtual Patients by Multivariate and Discrete Re-Sampling Techniques.

    PubMed

    Teutonico, D; Musuamba, F; Maas, H J; Facius, A; Yang, S; Danhof, M; Della Pasqua, O

    2015-10-01

    Clinical Trial Simulations (CTS) are a valuable tool for decision-making during drug development. However, to obtain realistic simulation scenarios, the patients included in the CTS must be representative of the target population. This is particularly important when covariate effects exist that may affect the outcome of a trial. The objective of our investigation was to evaluate and compare CTS results using re-sampling from a population pool and multivariate distributions to simulate patient covariates. COPD was selected as a paradigm disease for the purposes of our analysis, FEV1 was used as the response measure, and the effects of a hypothetical intervention were evaluated in different populations in order to assess the predictive performance of the two methods. Our results show that the multivariate distribution method produces realistic covariate correlations, comparable to the real population. Moreover, it allows simulation of patient characteristics beyond the limits of inclusion and exclusion criteria in historical protocols. Both methods, discrete resampling and multivariate distributions, generate realistic pools of virtual patients. However, the use of a multivariate distribution enables more flexible simulation scenarios, since it is not necessarily bound to the existing covariate combinations in the available clinical data sets.

  19. Derivation of groundwater flow-paths based on semi-automatic extraction of lineaments from remote sensing data

    NASA Astrophysics Data System (ADS)

    Mallast, U.; Gloaguen, R.; Geyer, S.; Rödiger, T.; Siebert, C.

    2011-08-01

    In this paper we present a semi-automatic method to infer groundwater flow-paths based on the extraction of lineaments from digital elevation models. This method is especially adequate in remote and inaccessible areas where in-situ data are scarce. The combined method of linear filtering and object-based classification provides a lineament map with a high degree of accuracy. Subsequently, lineaments are differentiated into geological and morphological lineaments using auxiliary information and finally evaluated in terms of hydro-geological significance. Using the example of the western catchment of the Dead Sea (Israel/Palestine), the orientation and location of the differentiated lineaments are compared to characteristics of known structural features. We demonstrate that a strong correlation between lineaments and structural features exists. Using Euclidean distances between lineaments and wells provides an assessment criterion to evaluate the hydraulic significance of detected lineaments. Based on this analysis, we suggest that the statistical analysis of lineaments allows a delineation of flow-paths and thus significant information on groundwater movements. To validate the flow-paths we compare them to existing results of groundwater models that are based on well data.

  20. Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses.

    PubMed

    Ye, Jun

    2015-03-01

    In pattern recognition and medical diagnosis, similarity measure is an important mathematical tool. To overcome some disadvantages of existing cosine similarity measures of simplified neutrosophic sets (SNSs) in vector space, this paper proposed improved cosine similarity measures of SNSs based on the cosine function, including single-valued neutrosophic cosine similarity measures and interval neutrosophic cosine similarity measures. Then, weighted cosine similarity measures of SNSs were introduced by taking into account the importance of each element. Further, a medical diagnosis method using the improved cosine similarity measures was proposed to solve medical diagnosis problems with simplified neutrosophic information. We compared the improved cosine similarity measures of SNSs with existing cosine similarity measures of SNSs by numerical examples to demonstrate their effectiveness and rationality for overcoming some shortcomings of the existing measures in some cases. In the medical diagnosis method, a proper diagnosis can be found by the cosine similarity measures between the symptoms and the considered diseases, which are represented by SNSs. The method was then applied to two medical diagnosis problems to show its applications and effectiveness. Both numerical examples demonstrated that the improved cosine similarity measures of SNSs based on the cosine function can overcome the shortcomings of the existing cosine similarity measures between two vectors in some cases. In the two medical diagnosis problems, the diagnoses using various similarity measures of SNSs indicated identical diagnosis results and demonstrated the effectiveness and rationality of the proposed diagnosis method. The improved cosine measures of SNSs based on the cosine function can overcome some drawbacks of existing cosine similarity measures of SNSs in vector space, and the resulting diagnosis method is well suited to handling medical diagnosis problems with simplified neutrosophic information. Copyright © 2014 Elsevier B.V. All rights reserved.
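
    As a concrete illustration, the sketch below computes a cosine-function-based similarity between two single-valued neutrosophic sets from the maximum componentwise difference of their (truth, indeterminacy, falsity) triples; the measure equals 1 exactly when the sets coincide. This is one plausible form and may differ from the paper's exact definitions.

```python
import numpy as np

def cosine_similarity_svns(A, B, weights=None):
    """Cosine-function-based similarity between two single-valued
    neutrosophic sets.

    A, B: arrays of shape (n, 3) holding (truth, indeterminacy, falsity)
    degrees in [0, 1] for n elements. Uses cos(pi/2 * max componentwise
    difference), one plausible form; the paper's exact variants may
    weight the components differently.
    """
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    diff = np.abs(A - B).max(axis=1)        # worst disagreement per element
    sims = np.cos(np.pi * diff / 2.0)       # 1 iff the triples coincide
    if weights is None:
        weights = np.full(len(sims), 1.0 / len(sims))
    return float(np.sum(weights * sims))

# Diagnosis: compare a patient's symptom SNS against each disease profile
# and pick the disease with the largest similarity.
```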

  1. Analyzing partially missing confounder information in comparative effectiveness and safety research of therapeutics.

    PubMed

    Toh, Sengwee; García Rodríguez, Luis A; Hernán, Miguel A

    2012-05-01

    Electronic healthcare databases are commonly used in comparative effectiveness and safety research of therapeutics. Many databases now include additional confounder information in a subset of the study population through data linkage or data collection. We described and compared existing methods for analyzing such datasets. Using data from The Health Improvement Network and the relation between non-steroidal anti-inflammatory drugs and upper gastrointestinal bleeding as an example, we employed several methods to handle partially missing confounder information. The crude odds ratio (OR) of upper gastrointestinal bleeding was 1.50 (95% confidence interval: 0.98, 2.28) among selective cyclo-oxygenase-2 inhibitor initiators (n = 43 569) compared with traditional non-steroidal anti-inflammatory drug initiators (n = 411 616). The OR dropped to 0.81 (0.52, 1.27) upon adjustment for confounders recorded for all patients. When further considering three additional variables missing in 22% of the study population (smoking, alcohol consumption, body mass index), the OR was between 0.80 and 0.83 for the missing-category approach, the missing-indicator approach, single imputation by the most common category, multiple imputation by chained equations, and propensity score calibration. The OR was 0.65 (0.39, 1.09) and 0.67 (0.38, 1.16) for the unweighted and the inverse probability weighted complete-case analysis, respectively. Existing methods for handling partially missing confounder data require different assumptions and may produce different results. The unweighted complete-case analysis, the missing-category/indicator approach, and single imputation require often unrealistic assumptions and should be avoided. In this study, differences across methods were not substantial, likely due to relatively low proportion of missingness and weak confounding effect by the three additional variables upon adjustment for other variables. Copyright © 2012 John Wiley & Sons, Ltd.
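
    For reference, the mechanically simplest of the approaches above, the missing-indicator approach, is sketched below with pandas; as the abstract notes, it rests on often unrealistic assumptions and is shown here only to make the bookkeeping concrete. Column names and values are hypothetical.

```python
import numpy as np
import pandas as pd

# Missing-indicator approach for a partially observed confounder (BMI):
# add a binary is-missing column and fill the original with a constant,
# then adjust for both columns in the outcome model.
df = pd.DataFrame({"bmi": [22.5, np.nan, 31.0, np.nan],
                   "exposed": [1, 0, 1, 0]})
df["bmi_missing"] = df["bmi"].isna().astype(int)    # indicator column
df["bmi"] = df["bmi"].fillna(df["bmi"].median())    # placeholder value
# The missing-category approach instead bins the confounder and adds
# an explicit "missing" level to the categorical variable.
print(df)
```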

  2. Power System Transient Diagnostics Based on Novel Traveling Wave Detection

    NASA Astrophysics Data System (ADS)

    Hamidi, Reza Jalilzadeh

    Modern electrical power systems demand novel diagnostic approaches to enhance system resiliency by improving state-of-the-art algorithms. The proliferation of high-voltage optical transducers and high time-resolution measurements provides opportunities to develop novel diagnostic methods for very fast transients in power systems. At the same time, emerging complex configurations, such as multi-terminal hybrid transmission systems, limit the applications of traditional diagnostic methods, especially in fault location and health monitoring. The impedance-based fault-location methods are inefficient for cross-bonded cables, which are widely used for connection of offshore wind farms to the main grid. Thus, this dissertation first presents a novel traveling-wave-based fault-location method for hybrid multi-terminal transmission systems. The proposed method utilizes time-synchronized high-sampling voltage measurements. The traveling wave arrival times (ATs) are detected by observation of the squares of wavelet transformation coefficients. Using the ATs, an over-determined set of linear equations is developed for noise reduction, and consequently, the faulty segment is determined based on the characteristics of the provided equation set. Then, the fault location is estimated. The accuracy and capabilities of the proposed fault-location method are evaluated and also compared to the existing traveling-wave-based method for a wide range of fault parameters. In order to improve power system stability, auto-reclosing (AR), single-phase auto-reclosing (SPAR), and adaptive single-phase auto-reclosing (ASPAR) methods have been developed with the final objective of distinguishing between transient and permanent faults, so that transient faults can be cleared without de-energization of the sound phases. However, the features of the electrical arcs (transient faults) are severely influenced by a number of random parameters, including the convection of the air and plasma, wind speed, air pressure, and humidity. Therefore, the dead-time (the de-energization duration of the faulty phase) is unpredictable. Accordingly, conservatively long dead-times are usually chosen by protection engineers. However, if the exact arc extinction time can be determined, power system stability and quality will be enhanced. Therefore, a new method for detecting arc extinction times, leading to a new ASPAR method utilizing power line carrier (PLC) signals, is presented. The efficiency of the proposed ASPAR method is verified through simulations and compared with the existing ASPAR methods. High-sampling measurements are prone to be skewed by environmental noise and analog-to-digital (A/D) converter quantization errors. Noise-contaminated measurements are therefore the major source of uncertainties and errors in the outcomes of traveling-wave-based diagnostic applications. The existing AT-detection methods do not provide enough sensitivity and selectivity at the same time. Therefore, a new AT-detection method based on the short-time matrix pencil method (STMPM) is developed to accurately detect ATs of traveling waves with low signal-to-noise ratios (SNRs). As STMPM is based on matrix algebra, it is challenging to implement this new technique in microprocessor-based fault locators. Hence, a fully recursive and computationally efficient method based on the adaptive discrete Kalman filter (ADKF) is introduced for AT-detection, which is suitable for microprocessors and able to accomplish accurate AT-detection for online applications such as ultra-high-speed protection. Both proposed AT-detection methods are evaluated through extensive simulation studies, and their outcomes are compared to those of existing methods.

  3. Comparison of maternal morbidity and medical costs during pregnancy and delivery between patients with gestational diabetes and patients with pre-existing diabetes

    PubMed Central

    Son, K H; Lim, N-K; Lee, J-W; Cho, M-C; Park, H-Y

    2015-01-01

    Aims To evaluate the effects of gestational diabetes and pre-existing diabetes on maternal morbidity and medical costs, using data from the Korea National Health Insurance Claims Database of the Health Insurance Review and Assessment Service. Methods Delivery cases in 2010, 2011 and 2012 (459 842, 442 225 and 380 431 deliveries) were extracted from the Health Insurance Review and Assessment Service database. The complications and medical costs were compared among the following three pregnancy groups: normal, gestational diabetes and pre-existing diabetes. Results Although the rates of pre-existing diabetes did not fluctuate (2.5, 2.4 and 2.7%) throughout the study, the rate of gestational diabetes steadily increased (4.6, 6.2 and 8.0%). Furthermore, the rates of pre-existing diabetes and gestational diabetes increased in conjunction with maternal age, pre-existing hypertension and cases of multiple pregnancy. The risks of pregnancy-induced hypertension, urinary tract infections, premature delivery, liver disease and chronic renal disease were greater in the gestational diabetes and pre-existing diabetes groups than in the normal group. The risks of venous thromboembolism, antepartum haemorrhage, shoulder dystocia and placenta disorder were greater in the pre-existing diabetes group, but not the gestational diabetes group, compared with the normal group. The medical costs associated with delivery, the costs during pregnancy and the number of in-hospital days for the subjects in the pre-existing diabetes group were the highest among the three groups. Conclusions The study showed that the rates of pre-existing diabetes and gestational diabetes increased with maternal age at pregnancy and were associated with increases in medical costs and pregnancy-related complications. PMID:25472691

  4. Hand-eye calibration for rigid laparoscopes using an invariant point.

    PubMed

    Thompson, Stephen; Stoyanov, Danail; Schneider, Crispin; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J

    2016-06-01

    Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet it can be difficult due to the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution but one current challenge is in accurate "hand-eye" calibration, which determines the position and orientation of the laparoscope camera relative to the tracking markers. In this paper, we propose a simple and clinically feasible calibration method based on a single invariant point. The method requires no additional hardware, can be constructed by theatre staff during surgical setup, requires minimal image processing and can be visualised in real time. Real-time visualisation allows the surgical team to assess the calibration accuracy before use in surgery. In addition, in the laboratory, we have developed a laparoscope with an electromagnetic tracking sensor attached to the camera end and an optical tracking marker attached to the distal end. This enables a comparison of tracking performance. We have evaluated our method in the laboratory and compared it to two widely used methods, "Tsai's method" and "direct" calibration. The new method is of comparable accuracy to existing methods, and we show RMS projected error due to calibration of 1.95 mm for optical tracking and 0.85 mm for EM tracking, versus 4.13 and 1.00 mm respectively, using existing methods. The new method has also been shown to be workable under sterile conditions in the operating room. We have proposed a new method of hand-eye calibration, based on a single invariant point. Initial experience has shown that the method provides visual feedback, satisfactory accuracy and can be performed during surgery. We also show that an EM sensor placed near the camera would provide significantly improved image overlay accuracy.

  5. A new stationary gridline artifact suppression method based on the 2D discrete wavelet transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Hui

    2015-04-15

    Purpose: In digital x-ray radiography, an antiscatter grid is inserted between the patient and the image receptor to reduce scattered radiation. If the antiscatter grid is used in a stationary way, gridline artifacts will appear in the final image. In most of the gridline removal image processing methods, the useful information with spatial frequencies close to that of the gridline is usually lost or degraded. In this study, a new stationary gridline suppression method is designed to preserve more of the useful information. Methods: The method is as follows. The input image is first recursively decomposed into several smaller subimages using a multiscale 2D discrete wavelet transform. The decomposition process stops when the gridline signal is found to be greater than a threshold in one or several of these subimages using a gridline detection module. An automatic Gaussian band-stop filter is then applied to the detected subimages to remove the gridline signal. Finally, the restored image is achieved using the corresponding 2D inverse discrete wavelet transform. Results: The processed images show that the proposed method can remove the gridline signal efficiently while maintaining the image details. The spectra of a 1D Fourier transform of the processed images demonstrate that, compared with some existing gridline removal methods, the proposed method has better information preservation after the removal of the gridline artifacts. Additionally, the performance speed is relatively high. Conclusions: The experimental results demonstrate the efficiency of the proposed method. Compared with some existing gridline removal methods, the proposed method can preserve more information within an acceptable execution time.
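
    A much-simplified sketch of this pipeline using PyWavelets is given below: decompose, damp the horizontal-detail subbands whose averaged profile shows a strong periodic peak, and reconstruct. The crude peak test and flat attenuation stand in for the paper's gridline detection module and automatic Gaussian band-stop filter.

```python
import numpy as np
import pywt

def suppress_gridlines(img, wavelet="db4", level=3, atten=0.1):
    """Crude sketch of wavelet-domain gridline suppression.

    Horizontal gridlines show up as a periodic peak in the spectrum of
    the horizontal-detail subbands; such subbands are simply attenuated
    here, where the paper applies an automatic Gaussian band-stop filter.
    """
    coeffs = pywt.wavedec2(np.asarray(img, dtype=float), wavelet, level=level)
    out = [coeffs[0]]                                  # approximation band
    for cH, cV, cD in coeffs[1:]:
        profile = cH.mean(axis=1)                      # profile down the rows
        spec = np.abs(np.fft.rfft(profile))
        peaked = spec[1:].max() > 5.0 * (np.median(spec[1:]) + 1e-12)
        out.append(((atten * cH) if peaked else cH, cV, cD))
    return pywt.waverec2(out, wavelet)
```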

  6. Computation and measurement of cell decision making errors using single cell data

    PubMed Central

    Habibi, Iman; Cheong, Raymond; Levchenko, Andre; Emamian, Effat S.; Abdi, Ali

    2017-01-01

    In this study a new computational method is developed to quantify decision making errors in cells, caused by noise and signaling failures. Analysis of tumor necrosis factor (TNF) signaling pathway which regulates the transcription factor Nuclear Factor κB (NF-κB) using this method identifies two types of incorrect cell decisions called false alarm and miss. These two events represent, respectively, declaring a signal which is not present and missing a signal that does exist. Using single cell experimental data and the developed method, we compute false alarm and miss error probabilities in wild-type cells and provide a formulation which shows how these metrics depend on the signal transduction noise level. We also show that in the presence of abnormalities in a cell, decision making processes can be significantly affected, compared to a wild-type cell, and the method is able to model and measure such effects. In the TNF—NF-κB pathway, the method computes and reveals changes in false alarm and miss probabilities in A20-deficient cells, caused by cell’s inability to inhibit TNF-induced NF-κB response. In biological terms, a higher false alarm metric in this abnormal TNF signaling system indicates perceiving more cytokine signals which in fact do not exist at the system input, whereas a higher miss metric indicates that it is highly likely to miss signals that actually exist. Overall, this study demonstrates the ability of the developed method for modeling cell decision making errors under normal and abnormal conditions, and in the presence of transduction noise uncertainty. Compared to the previously reported pathway capacity metric, our results suggest that the introduced decision error metrics characterize signaling failures more accurately. This is mainly because while capacity is a useful metric to study information transmission in signaling pathways, it does not capture the overlap between TNF-induced noisy response curves. PMID:28379950

  7. Computation and measurement of cell decision making errors using single cell data.

    PubMed

    Habibi, Iman; Cheong, Raymond; Lipniacki, Tomasz; Levchenko, Andre; Emamian, Effat S; Abdi, Ali

    2017-04-01

    In this study a new computational method is developed to quantify decision making errors in cells, caused by noise and signaling failures. Analysis of tumor necrosis factor (TNF) signaling pathway which regulates the transcription factor Nuclear Factor κB (NF-κB) using this method identifies two types of incorrect cell decisions called false alarm and miss. These two events represent, respectively, declaring a signal which is not present and missing a signal that does exist. Using single cell experimental data and the developed method, we compute false alarm and miss error probabilities in wild-type cells and provide a formulation which shows how these metrics depend on the signal transduction noise level. We also show that in the presence of abnormalities in a cell, decision making processes can be significantly affected, compared to a wild-type cell, and the method is able to model and measure such effects. In the TNF-NF-κB pathway, the method computes and reveals changes in false alarm and miss probabilities in A20-deficient cells, caused by cell's inability to inhibit TNF-induced NF-κB response. In biological terms, a higher false alarm metric in this abnormal TNF signaling system indicates perceiving more cytokine signals which in fact do not exist at the system input, whereas a higher miss metric indicates that it is highly likely to miss signals that actually exist. Overall, this study demonstrates the ability of the developed method for modeling cell decision making errors under normal and abnormal conditions, and in the presence of transduction noise uncertainty. Compared to the previously reported pathway capacity metric, our results suggest that the introduced decision error metrics characterize signaling failures more accurately. This is mainly because while capacity is a useful metric to study information transmission in signaling pathways, it does not capture the overlap between TNF-induced noisy response curves.
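
    In signal-detection terms the two metrics are conditional error rates, which the sketch below estimates from single-cell response samples with a simple threshold rule. The Gaussian toy data and the threshold are purely illustrative; the study derives these probabilities from measured TNF-induced responses and noise models.

```python
import numpy as np

def decision_error_rates(resp_no_signal, resp_signal, threshold):
    """Empirical false-alarm and miss probabilities for a threshold rule.

    resp_no_signal: cell responses measured with no input present.
    resp_signal:    cell responses measured with the input present.
    """
    false_alarm = np.mean(np.asarray(resp_no_signal) >= threshold)
    miss = np.mean(np.asarray(resp_signal) < threshold)
    return false_alarm, miss

# Synthetic noisy responses, standing in for measured NF-kB activity.
rng = np.random.default_rng(1)
fa, miss = decision_error_rates(rng.normal(0.2, 0.1, 1000),
                                rng.normal(0.8, 0.2, 1000),
                                threshold=0.5)
print(f"false alarm = {fa:.3f}, miss = {miss:.3f}")
```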

  8. A modified sparse reconstruction method for three-dimensional synthetic aperture radar image

    NASA Astrophysics Data System (ADS)

    Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin

    2018-03-01

    There is increasing interest in three-dimensional Synthetic Aperture Radar (3-D SAR) imaging from observed sparse scattering data. However, the existing 3-D sparse imaging method requires long computing times and large storage capacity. In this paper, we propose a modified method for sparse 3-D SAR imaging. The method processes the collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. Firstly, the 3-D sparse reconstruction problem is transformed into a series of 2-D slice reconstruction problems by range compression. Then the slices are reconstructed by the modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses the hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm and uses the Newton direction instead of the steepest-descent direction, which can speed up the convergence rate of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. It is shown that our method, compared with the existing 3-D sparse imaging method, performs better in reconstruction quality and reconstruction time.
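
    The sketch below implements a simplified SL0-style recovery with the tanh smoothing of the l0 norm described above. For brevity it takes scaled gradient steps followed by projection onto the measurement constraint, whereas the paper replaces the steepest-descent direction with a Newton direction; step sizes and schedules are illustrative.

```python
import numpy as np

def sl0_tanh(A, y, sigma_min=1e-3, decay=0.7, inner=5, mu=2.0):
    """Simplified SL0-style sparse recovery with tanh smoothing.

    The l0 norm of s is approximated by sum(tanh(s^2 / (2 sigma^2)));
    sigma is annealed toward zero. Gradient steps are used here where
    the paper takes a Newton direction.
    """
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ y                            # minimum-norm initial guess
    sigma = 2.0 * np.max(np.abs(s))
    while sigma > sigma_min:
        for _ in range(inner):
            u = s**2 / (2.0 * sigma**2)
            grad = s / (sigma**2 * np.cosh(u)**2)  # d/ds tanh(s^2/2sigma^2)
            s = s - mu * sigma**2 * grad           # move toward sparser s
            s = s - A_pinv @ (A @ s - y)           # project back onto As = y
        sigma *= decay                             # shrink smoothing width
    return s
```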

  9. Conformal mapping for multiple terminals

    PubMed Central

    Wang, Weimin; Ma, Wenying; Wang, Qiang; Ren, Hao

    2016-01-01

    Conformal mapping is an important mathematical tool that can be used to solve various physical and engineering problems in many fields, including electrostatics, fluid mechanics, classical mechanics, and transformation optics. It is an accurate and convenient way to solve problems involving two terminals. However, when faced with problems involving three or more terminals, which are more common in practical applications, existing conformal mapping methods apply assumptions or approximations. A general exact method does not exist for a structure with an arbitrary number of terminals. This study presents a conformal mapping method for multiple terminals. Through an accurate analysis of boundary conditions, additional terminals or boundaries are folded into the inner part of a mapped region. The method is applied to several typical situations, and the calculation process is described for two examples of an electrostatic actuator with three electrodes and of a light beam splitter with three ports. Compared with previously reported results, the solutions for the two examples based on our method are more precise and general. The proposed method is helpful in promoting the application of conformal mapping in analysis of practical problems. PMID:27830746

  10. Python package for model STructure ANalysis (pySTAN)

    NASA Astrophysics Data System (ADS)

    Van Hoey, Stijn; van der Kwast, Johannes; Nopens, Ingmar; Seuntjens, Piet

    2013-04-01

    The selection and identification of a suitable hydrological model structure involves more than fitting the parameters of a model structure to reproduce a measured hydrograph. The procedure is highly dependent on various criteria, i.e. the modelling objective, the characteristics and the scale of the system under investigation, as well as the available data. Rigorous analysis of the candidate model structures is needed to support and objectify the selection of the most appropriate structure for a specific case (or eventually to justify the use of a proposed ensemble of structures). This holds both when choosing between a limited set of different structures and in the framework of flexible model structures with interchangeable components. Many different methods to evaluate and analyse model structures exist. This leads to a sprawl of available methods, all characterized by different assumptions, changing conditions of application and various code implementations. Methods typically focus on optimization, sensitivity analysis or uncertainty analysis, with backgrounds from optimization, machine learning or statistics, amongst others. These methods also need an evaluation metric (objective function) to compare the model outcome with observed data. However, for current methods described in the literature, implementations are not always transparent and reproducible (if available at all). No standard procedures exist to share code, and the popularity (and number of applications) of a method is sometimes more dependent on its availability than on its merits. Moreover, new implementations of existing methods are difficult to verify, and the different theoretical backgrounds make it difficult for environmental scientists to decide about the usefulness of a specific method. A common and open framework with a large set of methods can support users in deciding about the most appropriate method, and it enables users to simultaneously apply and compare different methods on a fair basis. We developed and present pySTAN (python framework for STructure ANalysis), a python package containing a set of functions for model structure evaluation to support the analysis of (hydrological) model structures. A selected set of algorithms for optimization, uncertainty and sensitivity analysis is currently available, together with a set of evaluation (objective) functions and input distributions to sample from. The methods are implemented in a model-independent fashion, and the Python language provides the wrapper functions to administer external model codes. Different objective functions can be considered simultaneously, with both statistical metrics and more hydrology-specific metrics. By using reStructuredText (the sphinx documentation generator) and Python documentation strings (docstrings), the generation of manual pages is semi-automated, and a specific environment is available to enhance both the readability and transparency of the code. It thereby enables a larger group of users to apply and compare these methods and to extend the functionalities.

  11. Comparing deep learning and concept extraction based methods for patient phenotyping from clinical narratives.

    PubMed

    Gehrmann, Sebastian; Dernoncourt, Franck; Li, Yeran; Carlson, Eric T; Wu, Joy T; Welt, Jonathan; Foote, John; Moseley, Edward T; Grant, David W; Tyler, Patrick D; Celi, Leo A

    2018-01-01

    In secondary analysis of electronic health records, a crucial task consists in correctly identifying the patient cohort under investigation. In many cases, the most valuable and relevant information for an accurate classification of medical conditions exists only in clinical narratives. Therefore, it is necessary to use natural language processing (NLP) techniques to extract and evaluate these narratives. The most commonly used approach to this problem relies on extracting a number of clinician-defined medical concepts from text and using machine learning techniques to identify whether a particular patient has a certain condition. However, recent advances in deep learning and NLP enable models to learn a rich representation of (medical) language. Convolutional neural networks (CNN) for text classification can augment the existing techniques by leveraging the representation of language to learn which phrases in a text are relevant for a given medical condition. In this work, we compare concept extraction based methods with CNNs and other commonly used models in NLP in ten phenotyping tasks using 1,610 discharge summaries from the MIMIC-III database. We show that CNNs outperform concept extraction based methods in almost all of the tasks, with an improvement of up to 26 percentage points in F1-score and up to 7 percentage points in area under the ROC curve (AUC). We additionally assess the interpretability of both approaches by presenting and evaluating methods that calculate and extract the most salient phrases for a prediction. The results indicate that CNNs are a valid alternative to existing approaches in patient phenotyping and cohort identification, and should be further investigated. Moreover, the deep learning approach presented in this paper can be used to assist clinicians during chart review or support the extraction of billing codes from text by identifying and highlighting relevant phrases for various medical conditions.

  12. An Isometric Mapping Based Co-Location Decision Tree Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Wei, J.; Zhou, X.; Zhang, R.; Huang, W.; Sha, H.; Chen, J.

    2018-05-01

    Decision tree (DT) induction has been widely used in different pattern classification tasks. However, most traditional DTs have the disadvantage that they consider only non-spatial attributes (i.e., spectral information) when classifying pixels, which can result in objects being misclassified. Therefore, some researchers have proposed a co-location decision tree (Cl-DT) method, which combines co-location mining and decision trees to address these problems of the traditional decision tree. Cl-DT overcomes the shortcoming of existing DT algorithms that create a node for each value of a given attribute, and achieves higher accuracy than the existing decision tree approach. However, for non-linearly distributed data instances, the Euclidean distance between instances does not reflect the true positional relationship between them. In order to overcome these shortcomings, this paper proposes an isometric mapping method based on Cl-DT (called Isomap-based Cl-DT), which combines isometric mapping and Cl-DT. Because isometric mapping methods use geodesic distances instead of Euclidean distances between non-linearly distributed instances, the true distance between instances can be reflected. The experimental results and several comparative analyses show that: (1) the extraction method for exposed carbonate rocks is of high accuracy; and (2) the proposed method has many advantages, because the total number of nodes and the number of leaf nodes are greatly reduced compared to Cl-DT. Therefore, the Isomap-based Cl-DT algorithm can construct a more accurate and faster decision tree.

  13. Off-diagonal expansion quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Albash, Tameem; Wagenbreth, Gene; Hen, Itay

    2017-12-01

    We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.

  14. Signal-to-noise ratio estimation using adaptive tuning on the piecewise cubic Hermite interpolation model for images.

    PubMed

    Sim, K S; Yeap, Z X; Tso, C P

    2016-11-01

    An improvement to the existing technique of quantifying the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images using piecewise cubic Hermite interpolation (PCHIP) is proposed. The new technique applies adaptive tuning to the PCHIP, and is thus named ATPCHIP. To test its accuracy, 70 images are corrupted with noise and their autocorrelation functions are then plotted. The ATPCHIP technique is applied to estimate the uncorrupted, noise-free zero-offset point from a corrupted image. Three existing methods, nearest neighborhood, first-order interpolation and the original PCHIP, are compared against the proposed ATPCHIP method with respect to their calculated SNR values. Results show that ATPCHIP is an accurate and reliable method to estimate SNR values from SEM images. SCANNING 38:502-514, 2016. © 2015 Wiley Periodicals, Inc.
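
    The underlying idea is that noise adds a spike to the image autocorrelation at zero offset only, so the noise-free peak can be recovered by interpolating through the nonzero lags. The sketch below does this with SciPy's PCHIP interpolator over a fixed set of lags; ATPCHIP's adaptive tuning is not reproduced, and the lag range is an illustrative choice.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def estimate_snr_db(image, max_lag=5):
    """Single-image SNR estimate (in dB) from the row autocorrelation.

    The zero-lag autocorrelation carries signal plus noise power; the
    noise-free value at lag 0 is extrapolated by PCHIP through lags
    1..max_lag. Fixed knots stand in for ATPCHIP's adaptive tuning.
    """
    img = np.asarray(image, dtype=float)
    centered = img - img.mean()
    lags = np.arange(max_lag + 1)
    ac = np.array([np.mean(centered * np.roll(centered, k, axis=1))
                   for k in lags])
    fit = PchipInterpolator(lags[1:], ac[1:], extrapolate=True)
    signal_power = float(fit(0.0))         # noise-free peak estimate
    noise_power = ac[0] - signal_power     # zero-lag spike = noise power
    return 10.0 * np.log10(max(signal_power, 1e-12) /
                           max(noise_power, 1e-12))
```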

  15. A novel finite element analysis of three-dimensional circular crack

    NASA Astrophysics Data System (ADS)

    Ping, X. C.; Wang, C. G.; Cheng, L. P.

    2018-06-01

    A novel singular element containing a part of the circular crack front is established to solve the singular stress fields of circular cracks by using the numerical series eigensolutions of the singular stress fields. The element is derived from the Hellinger-Reissner variational principle and can be directly incorporated into existing 3D brick elements. The singular stress fields are determined as the system unknowns appearing as displacement nodal values. Numerical studies are conducted to demonstrate the simplicity of the proposed technique in handling fracture problems of circular cracks. The use of the novel singular element can avoid mesh refinement near the crack front domain without loss of calculation accuracy or convergence speed. Compared with conventional finite element methods and existing analytical methods, the present method is more suitable for dealing with complicated structures with a large number of elements.

  16. Off-diagonal expansion quantum Monte Carlo.

    PubMed

    Albash, Tameem; Wagenbreth, Gene; Hen, Itay

    2017-12-01

    We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.

  17. Batch effects in single-cell RNA-sequencing data are corrected by matching mutual nearest neighbors.

    PubMed

    Haghverdi, Laleh; Lun, Aaron T L; Morgan, Michael D; Marioni, John C

    2018-06-01

    Large-scale single-cell RNA sequencing (scRNA-seq) data sets that are produced in different laboratories and at different times contain batch effects that may compromise the integration and interpretation of the data. Existing scRNA-seq analysis methods incorrectly assume that the composition of cell populations is either known or identical across batches. We present a strategy for batch correction based on the detection of mutual nearest neighbors (MNNs) in the high-dimensional expression space. Our approach does not rely on predefined or equal population compositions across batches; instead, it requires only that a subset of the population be shared between batches. We demonstrate the superiority of our approach compared with existing methods by using both simulated and real scRNA-seq data sets. Using multiple droplet-based scRNA-seq data sets, we demonstrate that our MNN batch-effect-correction method can be scaled to large numbers of cells.
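
    The pairing step is straightforward to sketch with scikit-learn: find the k nearest neighbours of every cell in the other batch and keep the pairs that are mutual. Correction vectors are then derived from these pairs; the cosine normalization, kernel smoothing and the correction itself are omitted here, and k is an illustrative choice.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def mutual_nearest_neighbors(X, Y, k=20):
    """Mutual nearest-neighbour pairs between two batches.

    X, Y: cells x genes expression matrices (assumed already normalized).
    A pair (i, j) is kept when cell i is among the k nearest cells of j
    and vice versa; such pairs anchor the batch-correction vectors.
    """
    knn_in_y = NearestNeighbors(n_neighbors=k).fit(Y)
    knn_in_x = NearestNeighbors(n_neighbors=k).fit(X)
    x_to_y = knn_in_y.kneighbors(X, return_distance=False)  # knn of X in Y
    y_to_x = knn_in_x.kneighbors(Y, return_distance=False)  # knn of Y in X
    return [(i, j) for i in range(len(X)) for j in x_to_y[i]
            if i in y_to_x[j]]
```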

  18. Selective Serotonin Reuptake Inhibitor-Induced Sexual Dysfunction in Adolescents: A Review.

    ERIC Educational Resources Information Center

    Scharko, Alexander M.

    2004-01-01

    Objective: To review the existing literature on selective serotonin reuptake inhibitor (SSRI)-induced sexual dysfunction in adolescents. Method: A literature review of SSRI-induced adverse effects in adolescents, focusing on sexual dysfunction, was conducted. Nonsexual SSRI-induced adverse effects were compared in adult and pediatric populations.…

  19. Estimating the Cost of Standardized Student Testing in the United States.

    ERIC Educational Resources Information Center

    Phelps, Richard P.

    2000-01-01

    Describes and contrasts different methods of estimating costs of standardized testing. Using a cost-accounting approach, compares gross and marginal costs and considers testing objects (test materials and services, personnel and student time, and administrative/building overhead). Social marginal costs of replacing existing tests with a national…

  20. Decision support systems for ecosystem management: An evaluation of existing systems

    Treesearch

    H. Todd Mowrer; Klaus Barber; Joe Campbell; Nick Crookston; Cathy Dahms; John Day; Jim Laacke; Jim Merzenich; Steve Mighton; Mike Rauscher; Rick Sojda; Joyce Thompson; Peter Trenchi; Mark Twery

    1997-01-01

    This report evaluated 24 computer-aided decision support systems (DSS) that can support management decision-making in forest ecosystems. It compares the scope of each system, spatial capabilities, computational methods, development status, input and output requirements, user support availability, and system performance. Questionnaire responses from the DSS developers (...

  1. Financing Lifelong Learning for All: An International Perspective. Working Paper.

    ERIC Educational Resources Information Center

    Burke, Gerald

    Recent international discussions provide information on various countries' responses to lifelong learning, including the following: (1) existing unmet needs and emerging needs for education and training; (2) funds required compared with what was provided; and (3) methods for acquiring additional funds, among them efficiency measures leading to…

  2. Beta Testing in Social Work

    ERIC Educational Resources Information Center

    Traube, Dorian E.; Begun, Stephanie; Petering, Robin; Flynn, Marilyn L.

    2017-01-01

    The field of social work does not currently have a widely adopted method for expediting innovations into micro- or macropractice. Although it is common in fields such as engineering and business to have formal processes for accelerating scientific advances into consumer markets, few comparable mechanisms exist in the social sciences or social…

  3. Quantification of soil surface roughness evolution under simulated rainfall

    USDA-ARS?s Scientific Manuscript database

    Soil surface roughness is commonly identified as one of the dominant factors governing runoff and interrill erosion. The objective of this study was to compare several existing soil surface roughness indices and to test the Revised Triangular Prism surface area Method (RTPM) as a new approach to cal...

  4. Interdisciplines and Interdisciplinarity: Political Psychology and Psychohistory Compared

    ERIC Educational Resources Information Center

    Fuchsman, Ken

    2012-01-01

    Interdisciplines are specialties that connect ideas, methods, and findings from existing disciplines. Political psychology and psychohistory are interdisciplines which should have much in common, but even where they clearly intersect, their approaches usually diverge. Part of the reason for their dissimilarity lies in what each takes and rejects…

  5. Use of nutrient self selection as a diet refining tool in Tenebrio molitor (Coleoptera: Tenebrionidae)

    USDA-ARS?s Scientific Manuscript database

    A new method to refine existing dietary supplements for improving production of the yellow mealworm, Tenebrio molitor L. (Coleoptera: Tenebrionidae), was tested. Self selected ratios of 6 dietary ingredients by T. molitor larvae were used to produce a dietary supplement. This supplement was compared...

  6. Control optimization of a lifting body entry problem by an improved and a modified method of perturbation function. Ph.D. Thesis - Houston Univ.

    NASA Technical Reports Server (NTRS)

    Garcia, F., Jr.

    1974-01-01

    A study of the solution of a complex entry optimization problem was conducted. The problem was transformed into a two-point boundary value problem using classical calculus of variations methods. Two perturbation methods were devised. These methods attempted to reduce the dependence of the solution of this type of problem on the required initial co-state estimates. Numerical results are also presented for the optimal solution resulting from a number of different initial co-state estimates. The perturbation methods were compared. It is found that they are an improvement over existing methods.

  7. Strain gage measurement errors in the transient heating of structural components

    NASA Technical Reports Server (NTRS)

    Richards, W. Lance

    1993-01-01

    Significant strain-gage errors may exist in measurements acquired in transient thermal environments if conventional correction methods are applied. Conventional correction theory was modified and a new experimental method was developed to correct indicated strain data for errors created in radiant heating environments ranging from 0.6 C/sec (1 F/sec) to over 56 C/sec (100 F/sec). In some cases the new and conventional methods differed by as much as 30 percent. Experimental and analytical results were compared to demonstrate the new technique. For heating conditions greater than 6 C/sec (10 F/sec), the indicated strain data corrected with the developed technique compared much better to analysis than the same data corrected with the conventional technique.

  8. SLIC superpixels compared to state-of-the-art superpixel methods.

    PubMed

    Achanta, Radhakrishna; Shaji, Appu; Smith, Kevin; Lucchi, Aurelien; Fua, Pascal; Süsstrunk, Sabine

    2012-11-01

    Computer vision applications have come to rely increasingly on superpixels in recent years, but it is not always clear what constitutes a good superpixel algorithm. In an effort to understand the benefits and drawbacks of existing methods, we empirically compare five state-of-the-art superpixel algorithms for their ability to adhere to image boundaries, speed, memory efficiency, and their impact on segmentation performance. We then introduce a new superpixel algorithm, simple linear iterative clustering (SLIC), which adapts a k-means clustering approach to efficiently generate superpixels. Despite its simplicity, SLIC adheres to boundaries as well as or better than previous methods. At the same time, it is faster and more memory efficient, improves segmentation performance, and is straightforward to extend to supervoxel generation.
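
    SLIC is widely implemented, so a minimal usage example suffices; the sketch below uses the scikit-image implementation and paints each superpixel with its mean colour as a quick visual check. Parameter values are illustrative.

```python
from skimage import color, data, segmentation

# SLIC: k-means clustering in combined colour and (x, y) space, with the
# compactness parameter trading colour similarity against spatial proximity.
img = data.astronaut()
labels = segmentation.slic(img, n_segments=250, compactness=10.0,
                           start_label=1)
# Quick visual check: paint each superpixel with its average colour.
averaged = color.label2rgb(labels, img, kind='avg')
print(labels.max(), "superpixels")
```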

  9. Extracting BI-RADS Features from Portuguese Clinical Texts.

    PubMed

    Nassif, Houssam; Cunha, Filipe; Moreira, Inês C; Cruz-Correia, Ricardo; Sousa, Eliana; Page, David; Burnside, Elizabeth; Dutra, Inês

    2012-01-01

    In this work we build the first BI-RADS parser for Portuguese free texts, modeled after existing approaches to extract BI-RADS features from English medical records. Our concept finder uses a semantic grammar based on the BI-RADS lexicon and on iteratively transferred expert knowledge. We compare the performance of our algorithm to manual annotation by a specialist in mammography. Our results show that our parser's performance is comparable to the manual method.

  10. Comparing Health Status, Health Trajectories and Use of Health and Social Services between Children with and without Developmental Disabilities: A Population-Based Longitudinal Study in Manitoba

    ERIC Educational Resources Information Center

    Shooshtari, Shahin; Brownell, Marni; Mills, Rosemary S. L.; Dik, Natalia; Yu, Dickie C. T.; Chateau, Dan; Burchill, Charles A.; Wetzel, Monika

    2017-01-01

    Background: Little information exists on health of children with developmental disabilities (DDs) in the Canadian province of Manitoba. Method: The present authors linked 12 years of administrative data and compared health status, changes in health and access to health and social services between children with (n = 1877) and without (n = 5661) DDs…

  11. An Efficient and Reliable Statistical Method for Estimating Functional Connectivity in Large Scale Brain Networks Using Partial Correlation

    PubMed Central

    Wang, Yikai; Kang, Jian; Kemmer, Phebe B.; Guo, Ying

    2016-01-01

    Currently, network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promise in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has been limited so far due to the technical challenges in its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation based on the precision matrix estimated via Constrained L1-minimization Approach (CLIME), which is a recently developed statistical method that is more efficient and demonstrates better performance than the existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool to allow the users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than the existing methods, which provides an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method demonstrates comparable or better performance with respect to the existing methods in network estimation. We applied the proposed partial correlation method to investigate resting state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed considerable between-module marginal connections identified by full correlation analysis, suggesting these connections were likely caused by global effects or common connection to other nodes. Based on partial correlation, we find that the most significant direct connections are between homologous brain locations in the left and right hemisphere. When comparing partial correlation derived under different sparse tuning parameters, an important finding is that the sparse regularization has more shrinkage effects on negative functional connections than on positive connections, which supports previous findings that many of the negative brain connections are due to non-neurophysiological effects. An R package “DensParcorr” can be downloaded from CRAN for implementing the proposed statistical methods. PMID:27242395

  12. An Efficient and Reliable Statistical Method for Estimating Functional Connectivity in Large Scale Brain Networks Using Partial Correlation.

    PubMed

    Wang, Yikai; Kang, Jian; Kemmer, Phebe B; Guo, Ying

    2016-01-01

    Currently, network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promise in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has been limited so far due to the technical challenges in its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation based on the precision matrix estimated via Constrained L1-minimization Approach (CLIME), which is a recently developed statistical method that is more efficient and demonstrates better performance than the existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool to allow the users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than the existing methods, which provides an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method demonstrates comparable or better performance with respect to the existing methods in network estimation. We applied the proposed partial correlation method to investigate resting state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed considerable between-module marginal connections identified by full correlation analysis, suggesting these connections were likely caused by global effects or common connection to other nodes. Based on partial correlation, we find that the most significant direct connections are between homologous brain locations in the left and right hemisphere. When comparing partial correlation derived under different sparse tuning parameters, an important finding is that the sparse regularization has more shrinkage effects on negative functional connections than on positive connections, which supports previous findings that many of the negative brain connections are due to non-neurophysiological effects. An R package "DensParcorr" can be downloaded from CRAN for implementing the proposed statistical methods.
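
    The core quantity is easy to reproduce: given any sparse precision matrix estimate Θ, the partial correlation between nodes i and j is -Θij/√(Θii Θjj). A minimal sketch follows, using scikit-learn's GraphicalLasso as a stand-in estimator (the paper uses CLIME, which is not available in scikit-learn; the data and alpha are illustrative):

    ```python
    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 10))  # stand-in for node-level fMRI time series

    # Sparse precision estimation; alpha plays the role of the sparsity tuning
    # parameter that the Dens-based method is designed to select.
    model = GraphicalLasso(alpha=0.05).fit(X)
    theta = model.precision_

    d = np.sqrt(np.diag(theta))
    partial_corr = -theta / np.outer(d, d)  # rho_ij = -theta_ij / sqrt(theta_ii * theta_jj)
    np.fill_diagonal(partial_corr, 1.0)
    print(partial_corr.shape)
    ```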

  13. Comparing and combining biomarkers as principle surrogates for time-to-event clinical endpoints.

    PubMed

    Gabriel, Erin E; Sachs, Michael C; Gilbert, Peter B

    2015-02-10

    Principal surrogate endpoints are useful as targets for phase I and II trials. In many recent trials, multiple post-randomization biomarkers are measured. However, few statistical methods exist for comparison of or combination of biomarkers as principal surrogates, and none of these methods to our knowledge utilize time-to-event clinical endpoint information. We propose a Weibull model extension of the semi-parametric estimated maximum likelihood method that allows for the inclusion of multiple biomarkers in the same risk model as multivariate candidate principal surrogates. We propose several methods for comparing candidate principal surrogates and evaluating multivariate principal surrogates. These include the time-dependent and surrogate-dependent true and false positive fraction, the time-dependent and the integrated standardized total gain, and the cumulative distribution function of the risk difference. We illustrate the operating characteristics of our proposed methods in simulations and outline how these statistics can be used to evaluate and compare candidate principal surrogates. We use these methods to investigate candidate surrogates in the Diabetes Control and Complications Trial. Copyright © 2014 John Wiley & Sons, Ltd.

  14. Scientific evaluation of the safety factor for the acceptable daily intake (ADI). Case study: butylated hydroxyanisole (BHA).

    PubMed

    Würtzen, G

    1993-01-01

    The principles of 'data-derived safety factors' are applied to toxicological and biochemical information on butylated hydroxyanisole (BHA). The calculated safety factor for an ADI is, by this method, comparable to the existing internationally recognized safety evaluations. Relevance for humans of forestomach tumours in rodents is discussed. The method provides a basis for organizing data in a way that permits an explicit assessment of its relevance.

  15. A time domain based method for the accurate measurement of Q-factor and resonance frequency of microwave resonators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gyüre, B.; Márkus, B. G.; Bernáth, B.

    2015-09-15

    We present a novel method to determine the resonant frequency and quality factor of microwave resonators which is faster, more stable, and conceptually simpler than existing techniques. The microwave resonator is pumped with microwave radiation at a frequency away from its resonance. It then emits exponentially decaying radiation at its eigenfrequency when the excitation is rapidly switched off. The emitted microwave signal is down-converted with a microwave mixer, digitized, and its Fourier transformation (FT) directly yields the resonance curve in a single shot. Being an FT-based method, this technique possesses the Fellgett (multiplex) and Connes (accuracy) advantages, and it conceptually mimics pulsed nuclear magnetic resonance. We also establish a novel benchmark to compare the accuracy of different approaches to microwave resonator measurement. This shows that the present method has accuracy similar to the existing ones, which are based on sweeping or modulating the frequency of the microwave radiation.
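
    The single-shot FT idea can be illustrated numerically: an exponentially decaying ring-down has a Lorentzian spectrum whose linewidth sets Q. A sketch with assumed parameters (not the authors' hardware values):

    ```python
    import numpy as np

    # Hypothetical down-converted ring-down: s(t) = exp(-t/tau) * cos(2*pi*f0*t)
    fs, f0, tau = 1e6, 50e3, 2e-4        # sample rate [Hz], eigenfrequency, decay time
    t = np.arange(0, 2e-3, 1 / fs)
    s = np.exp(-t / tau) * np.cos(2 * np.pi * f0 * t)

    spec = np.abs(np.fft.rfft(s))        # single-shot FT gives the resonance curve
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    f_res = freqs[np.argmax(spec)]       # resonant frequency from the peak

    # Lorentzian linewidth of an exponential decay: FWHM = 1/(pi*tau),
    # so Q = f_res / FWHM = pi * f_res * tau.
    Q = np.pi * f_res * tau
    print(f_res, Q)
    ```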

  16. Global antioxidant response of meat.

    PubMed

    Carrillo, Celia; Barrio, Ángela; Del Mar Cavia, María; Alonso-Torre, Sara

    2017-06-01

    The global antioxidant response (GAR) method uses an enzymatic digestion to release antioxidants from foods. Owing to the importance of digestion for protein breakdown and subsequent release of bioactive compounds, the aim of the present study was to compare the GAR method for meat with the existing methodologies: the extraction-based method and QUENCHER. Seven fresh meats were analyzed using ABTS and FRAP assays. Our results indicated that the GAR of meat was higher than the total antioxidant capacity (TAC) assessed with the traditional extraction-based method. When evaluated with GAR, the thermal treatment led to an increase in the TAC of the soluble fraction, contrasting with a decreased TAC after cooking measured using the extraction-based method. The effect of thermal treatment on the TAC assessed by the QUENCHER method seemed to be dependent on the assay applied, since results from ABTS differed from FRAP. Our results allow us to hypothesize that the activation of latent bioactive peptides along the gastrointestinal tract should be taken into consideration when evaluating the TAC of meat. Therefore, we conclude that the GAR method may be more appropriate for assessing the TAC of meat than the existing, most commonly used methods. © 2016 Society of Chemical Industry.

  17. Predicting protein complexes using a supervised learning method combined with local structural information.

    PubMed

    Dong, Yadong; Sun, Yongqi; Qin, Chao

    2018-01-01

    The existing protein complex detection methods can be broadly divided into two categories: unsupervised and supervised learning methods. Most of the unsupervised learning methods assume that protein complexes are in dense regions of protein-protein interaction (PPI) networks even though many true complexes are not dense subgraphs. Supervised learning methods utilize the informative properties of known complexes; they often extract features from existing complexes and then use the features to train a classification model. The trained model is used to guide the search process for new complexes. However, insufficient extracted features, noise in the PPI data and the incompleteness of complex data make the classification model imprecise. Consequently, the classification model is not sufficient for guiding the detection of complexes. Therefore, we propose a new robust score function that combines the classification model with local structural information. Based on the score function, we provide a search method that works both forwards and backwards. The results from experiments on six benchmark PPI datasets and three protein complex datasets show that our approach can achieve better performance compared with the state-of-the-art supervised, semi-supervised and unsupervised methods for protein complex detection, occasionally significantly outperforming such methods.

  18. An Eye Model for Computational Dosimetry Using A Multi-Scale Voxel Phantom

    NASA Astrophysics Data System (ADS)

    Caracappa, Peter F.; Rhodes, Ashley; Fiedler, Derek

    2014-06-01

    The lens of the eye is a radiosensitive tissue with cataract formation being the major concern. Recently reduced recommended dose limits to the lens of the eye have made understanding the dose to this tissue of increased importance. Due to memory limitations, the voxel resolution of computational phantoms used for radiation dose calculations is too large to accurately represent the dimensions of the eye. A revised eye model is constructed using physiological data for the dimensions of radiosensitive tissues, and is then transformed into a high-resolution voxel model. This eye model is combined with an existing set of whole body models to form a multi-scale voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternate methods of including a high-resolution eye model within an existing whole body model are developed. The accuracy and performance of each method is compared against existing computational phantoms.

  19. A conjugate gradient method with descent properties under strong Wolfe line search

    NASA Astrophysics Data System (ADS)

    Zull, N.; ‘Aini, N.; Shoid, S.; Ghani, N. H. A.; Mohamed, N. S.; Rivaie, M.; Mamat, M.

    2017-09-01

    The conjugate gradient (CG) method is one of the optimization methods most often used in practical applications. Numerous continuing studies of the CG method have led to vast improvements in its convergence properties and efficiency. In this paper, a new CG method possessing the sufficient descent and global convergence properties is proposed. The efficiency of the new CG algorithm relative to existing CG methods is evaluated by testing them all on a set of test functions using MATLAB. The tests are measured in terms of iteration numbers and CPU time under the strong Wolfe line search. Overall, the new method performs efficiently and is comparable to other well-known methods.
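
    As an illustration of the experimental setup only (not the paper's new CG variant, whose update formula is not given in the abstract), here is a classical Fletcher-Reeves CG loop driven by SciPy's strong Wolfe line search:

    ```python
    import numpy as np
    from scipy.optimize import line_search, rosen, rosen_der

    def cg_strong_wolfe(f, grad, x0, iters=200, tol=1e-6):
        """Fletcher-Reeves CG with a strong Wolfe line search (illustrative sketch)."""
        x, g = x0, grad(x0)
        d = -g
        for _ in range(iters):
            if np.linalg.norm(g) < tol:
                break
            alpha = line_search(f, grad, x, d, c1=1e-4, c2=0.1)[0]  # strong Wolfe
            if alpha is None:            # line search failed: restart on steepest descent
                d, alpha = -g, 1e-3
            x_new = x + alpha * d
            g_new = grad(x_new)
            beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
            d = -g_new + beta * d
            x, g = x_new, g_new
        return x

    print(cg_strong_wolfe(rosen, rosen_der, np.array([-1.2, 1.0])))  # near (1, 1)
    ```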

  20. Finding consistent patterns: A nonparametric approach for identifying differential expression in RNA-Seq data

    PubMed Central

    Li, Jun; Tibshirani, Robert

    2015-01-01

    We discuss the identification of features that are associated with an outcome in RNA-Sequencing (RNA-Seq) and other sequencing-based comparative genomic experiments. RNA-Seq data takes the form of counts, so models based on the normal distribution are generally unsuitable. The problem is especially challenging because different sequencing experiments may generate quite different total numbers of reads, or ‘sequencing depths’. Existing methods for this problem are based on Poisson or negative binomial models: they are useful but can be heavily influenced by ‘outliers’ in the data. We introduce a simple, nonparametric method with resampling to account for the different sequencing depths. The new method is more robust than parametric methods. It can be applied to data with quantitative, survival, two-class or multiple-class outcomes. We compare our proposed method to Poisson and negative binomial-based methods in simulated and real data sets, and find that our method discovers more consistent patterns than competing methods. PMID:22127579
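
    A minimal sketch of the two ingredients, under stated assumptions: Poisson resampling of each sample down to a common sequencing depth, followed by a rank-based (Wilcoxon) test per gene. The actual method averages over many resampling draws and supports other outcome types; all data here are simulated:

    ```python
    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(1)
    counts = rng.poisson(5.0, size=(1000, 12))   # genes x samples (toy data)
    groups = np.array([0] * 6 + [1] * 6)         # two-class outcome

    # One resampling pass: scale every sample to the minimum sequencing depth,
    # then redraw counts from a Poisson so depths are comparable.
    depths = counts.sum(axis=0)
    scaled = counts * (depths.min() / depths)
    resampled = rng.poisson(scaled)

    # Rank-based two-class test per gene, robust to count outliers.
    pvals = np.array([
        mannwhitneyu(row[groups == 0], row[groups == 1]).pvalue
        for row in resampled
    ])
    print((pvals < 0.05).sum())
    ```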

  1. Analytical evaluation of current starch methods used in the international sugar industry: Part I.

    PubMed

    Cole, Marsha; Eggleston, Gillian; Triplett, Alexa

    2017-08-01

    Several analytical starch methods exist in the international sugar industry to mitigate starch-related processing challenges and assess the quality of traded end-products. These methods use iodometric chemistry, mostly potato starch standards, and utilize similar solubilization strategies, but had not been comprehensively compared. In this study, industrial starch methods were compared to the USDA Starch Research method using simulated raw sugars. Type of starch standard, solubilization approach, iodometric reagents, and wavelength detection affected total starch determination in simulated raw sugars. Simulated sugars containing potato starch were more accurately detected by the industrial methods, whereas those containing corn starch, a better model for sugarcane starch, were only accurately measured by the USDA Starch Research method. Use of a potato starch standard curve over-estimated starch concentrations. Among the variables studied, starch standard, solubilization approach, and wavelength detection affected the sensitivity, accuracy/precision, and limited the detection/quantification of the current industry starch methods the most. Published by Elsevier Ltd.

  2. A Rapid Method for Measuring Strontium-90 Activity in Crops in China

    NASA Astrophysics Data System (ADS)

    Pan, Lingjing; Yu, Guobing; Wen, Deyun; Chen, Zhi; Sheng, Liusi; Liu, Chung-King; Xu, X. George

    2017-09-01

    A rapid method for measuring Sr-90 activity in crop ashes is presented. Liquid scintillation counting, combined with ion-exchange columns of 4,4'(5')-di-t-butylcyclohexano-18-crown-6, is used to determine the activity of Sr-90 in crops. The chemical yields of the procedure are quantified using gravimetric analysis. The conventional method, which uses ion-exchange resin with HDEHP, cannot completely remove bismuth when comparatively large amounts of lead and bismuth exist in the samples; this is overcome by the rapid method. The chemical yield of the rapid method is about 60% and its MDA for Sr-90 is found to be 2.32 Bq/kg. The whole procedure, together with spectrum analysis to determine the activity, takes only about one day, a large improvement over the conventional method. A modified conventional method is also described to verify the results of the rapid one. The two methods can meet the different needs of routine monitoring and emergency response.

  3. Characterization of fiber diameter using image analysis

    NASA Astrophysics Data System (ADS)

    Baheti, S.; Tunak, M.

    2017-10-01

    Due to their high surface area and porosity, the applications of nanofibers have increased in recent years. In the production process, determination of the average fiber diameter and fiber orientation is crucial for quality assessment. The objective of the present study was to compare the relative performance of different methods discussed in the literature for estimating fiber diameter. In this work, the existing automated fiber diameter analysis approaches described in the literature were implemented and validated on simulated images of known fiber diameter. Finally, all methods were compared for reliable and accurate estimation of fiber diameter in electrospun nanofiber membranes, based on the obtained means and standard deviations.
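
    One common diameter estimator in such packages combines a distance transform with skeletonization: the local fiber diameter is twice the distance from the skeleton to the background. A sketch on a simulated fiber of known width (a validation-style input; the published packages themselves are not reproduced here):

    ```python
    import numpy as np
    from scipy.ndimage import binary_dilation, distance_transform_edt
    from skimage.draw import line
    from skimage.morphology import skeletonize

    # Simulated binary image of one straight fiber of known width.
    img = np.zeros((200, 200), dtype=bool)
    rr, cc = line(20, 20, 180, 180)
    img[rr, cc] = True
    img = binary_dilation(img, iterations=4)   # thicken the 1-px line into a fiber

    # Local diameter along the skeleton = 2 x Euclidean distance to background.
    dist = distance_transform_edt(img)
    skel = skeletonize(img)
    diameters = 2.0 * dist[skel]               # per-pixel diameter estimates
    print(diameters.mean(), diameters.std())
    ```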

  4. Analysis of volatile organic compounds. [trace amounts of organic volatiles in gas samples

    NASA Technical Reports Server (NTRS)

    Zlatkis, A. (Inventor)

    1977-01-01

    An apparatus and method are described for reproducibly analyzing trace amounts of a large number of organic volatiles existing in a gas sample. Direct injection of the trapped volatiles into a cryogenic precolumn provides a sharply defined plug. Applications of the method include: (1) analyzing the headspace gas of body fluids and comparing a profile of the organic volatiles with standard profiles for the detection and monitoring of disease; (2) analyzing the headspace gas of foods and beverages and comparing the profile with standard profiles to monitor and control flavor and aroma; and (3) analyses for determining the organic pollutants in air or water samples.

  5. Prediction and analysis of protein solubility using a novel scoring card method with dipeptide composition

    PubMed Central

    2012-01-01

    Background Existing methods for predicting protein solubility on overexpression in Escherichia coli advance performance by using ensemble classifiers such as two-stage support vector machine (SVM) based classifiers and a number of feature types such as physicochemical properties, amino acid and dipeptide composition, accompanied by feature selection. It is desirable to develop a simple and easily interpretable method for predicting protein solubility, compared to existing complex SVM-based methods. Results This study proposes a novel scoring card method (SCM) by using dipeptide composition only to estimate solubility scores of sequences for predicting protein solubility. SCM calculates the propensities of 400 individual dipeptides to be soluble using statistical discrimination between soluble and insoluble proteins of a training data set. Consequently, the propensity scores of all dipeptides are further optimized using an intelligent genetic algorithm. The solubility score of a sequence is determined by the weighted sum of all propensity scores and dipeptide composition. To evaluate SCM by performance comparisons, four data sets with different sizes and variation degrees of experimental conditions were used. The results show that the simple method SCM with interpretable propensities of dipeptides has promising performance, compared with existing SVM-based ensemble methods with a number of feature types. Furthermore, the propensities of dipeptides and solubility scores of sequences can provide insights into protein solubility. For example, the analysis of dipeptide scores shows high propensity of α-helix structure and thermophilic proteins to be soluble. Conclusions The propensities of individual dipeptides to be soluble are varied for proteins under altered experimental conditions. For accurately predicting protein solubility using SCM, it is better to customize the score card of dipeptide propensities by using a training data set under the same specified experimental conditions. The proposed method SCM with solubility scores and dipeptide propensities can be easily applied to protein function prediction problems in which dipeptide composition features play an important role. Availability The used datasets, source codes of SCM, and supplementary files are available at http://iclab.life.nctu.edu.tw/SCM/. PMID:23282103
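
    The scoring step itself is simple to sketch: compute the 400-dimensional dipeptide composition of a sequence and take its weighted sum against a propensity card. The card below is random, purely a placeholder for the trained, genetic-algorithm-optimized card described above:

    ```python
    import numpy as np
    from itertools import product

    AA = "ACDEFGHIKLMNPQRSTVWY"
    DIPEPTIDES = ["".join(p) for p in product(AA, repeat=2)]  # all 400 dipeptides
    INDEX = {dp: i for i, dp in enumerate(DIPEPTIDES)}

    def dipeptide_composition(seq):
        """Normalized counts of the 400 dipeptides in a protein sequence."""
        v = np.zeros(len(DIPEPTIDES))
        for i in range(len(seq) - 1):
            v[INDEX[seq[i:i + 2]]] += 1
        return v / max(len(seq) - 1, 1)

    # Placeholder propensity card; the real SCM card is estimated from
    # soluble/insoluble training sets and refined by a genetic algorithm.
    propensity = np.random.default_rng(0).uniform(0, 1, len(DIPEPTIDES))

    def solubility_score(seq):
        return float(dipeptide_composition(seq) @ propensity)

    print(solubility_score("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))
    ```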

  6. Comparison of 3D quantitative structure-activity relationship methods: Analysis of the in vitro antimalarial activity of 154 artemisinin analogues by hypothetical active-site lattice and comparative molecular field analysis

    NASA Astrophysics Data System (ADS)

    Woolfrey, John R.; Avery, Mitchell A.; Doweyko, Arthur M.

    1998-03-01

    Two three-dimensional quantitative structure-activity relationship (3D-QSAR) methods, comparative molecular field analysis (CoMFA) and hypothetical active site lattice (HASL), were compared with respect to the analysis of a training set of 154 artemisinin analogues. Five models were created, including a complete HASL and two trimmed versions, as well as two CoMFA models (leave-one-out standard CoMFA and the guided-region selection protocol). Similar r2 and q2 values were obtained by each method, although some striking differences existed between CoMFA contour maps and the HASL output. Each of the four predictive models exhibited a similar ability to predict the activity of a test set of 23 artemisinin analogues, although some differences were noted as to which compounds were described well by either model.

  7. Comparative study on deposition of fluorine-doped tin dioxide thin films by conventional and ultrasonic spray pyrolysis methods for dye-sensitized solar modules

    NASA Astrophysics Data System (ADS)

    Icli, Kerem Cagatay; Kocaoglu, Bahadir Can; Ozenbas, Macit

    2018-01-01

    Fluorine-doped tin dioxide (FTO) thin films were produced via conventional spray pyrolysis and ultrasonic spray pyrolysis (USP) methods using alcohol-based solutions. The prepared films were compared in terms of crystal structure, morphology, surface roughness, visible light transmittance, and electronic properties. Upon investigation of the grain structures and morphologies, the films prepared using ultrasonic spray method provided relatively larger grains and due to this condition, carrier mobilities of these films exhibited slightly higher values. Dye-sensitized solar cells and 10×10 cm modules were prepared using commercially available and USP-deposited FTO/glass substrates, and solar performances were compared. It is observed that there exists no remarkable efficiency difference for both cells and modules, where module efficiency of the USP-deposited FTO glass substrates is 3.06% compared to commercial substrate giving 2.85% under identical conditions. We demonstrated that USP deposition is a low cost and versatile method of depositing commercial quality FTO thin films on large substrates employed in large area dye-sensitized solar modules or other thin film technologies.

  8. Evaluation of Existing Image Matching Methods for Deriving Glacier Surface Displacements Globally from Optical Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Heid, T.; Kääb, A.

    2011-12-01

    Automatic matching of images from two different times is a method that is often used to derive glacier surface velocity. Nearly global repeat coverage of the Earth's surface by optical satellite sensors now opens the possibility for global-scale mapping and monitoring of glacier flow with a number of applications in, for example, glacier physics, glacier-related climate change and impact assessment, and glacier hazard management. The purpose of this study is to compare and evaluate different existing image matching methods for glacier flow determination over large scales. The study compares six different matching methods: normalized cross-correlation (NCC), the phase correlation algorithm used in the COSI-Corr software, and four other Fourier methods with different normalizations. We compare the methods over five regions of the world with different representative glacier characteristics: Karakoram, the European Alps, Alaska, Pine Island (Antarctica) and southwest Greenland. Landsat images are chosen for matching because they extend back to 1972, they cover large areas, and at the same time their spatial resolution is as good as 15 m for images after 1999 (ETM+ pan). Cross-correlation on orientation images (CCF-O) outperforms the three similar Fourier methods, both in areas with high and low visual contrast. NCC experiences problems in areas with low visual contrast, areas with thin clouds or changing snow conditions between the images. CCF-O has problems on narrow outlet glaciers where small window sizes (about 16 pixels by 16 pixels or smaller) are needed, and it also obtains fewer correct matches than COSI-Corr in areas with low visual contrast. COSI-Corr has problems on narrow outlet glaciers and it obtains fewer correct matches compared to CCF-O when thin clouds cover the surface, or if one of the images contains snow dunes. In total, we consider CCF-O and COSI-Corr to be the two most robust matching methods for global-scale mapping and monitoring of glacier velocities. By combining CCF-O with locally adaptive template sizes and automatically filtering the matching results through comparison of the displacement matrix with its low-pass-filtered version, the matching process can be automated to a large degree. This allows the derivation of glacier velocities with minimal (but not without!) user interaction and hence also opens up the possibility of global-scale mapping and monitoring of glacier flow.
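
    As a minimal illustration of the displacement-matching principle common to these methods (plain NCC, not CCF-O or COSI-Corr), scikit-image's match_template recovers a known shift between two synthetic scenes:

    ```python
    import numpy as np
    from skimage.feature import match_template

    rng = np.random.default_rng(0)
    img_t1 = rng.random((200, 200))                 # stand-in for the first acquisition
    template = img_t1[80:112, 80:112]               # 32x32 patch centred at (96, 96)
    img_t2 = np.roll(img_t1, (5, -3), axis=(0, 1))  # second acquisition: scene shifted

    # Normalized cross-correlation surface; its peak gives the displacement.
    ncc = match_template(img_t2, template, pad_input=True)
    peak = np.unravel_index(np.argmax(ncc), ncc.shape)
    print(peak[0] - 96, peak[1] - 96)               # recovered offsets: 5, -3
    ```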

  9. Localizing ECoG electrodes on the cortical anatomy without post-implantation imaging

    PubMed Central

    Gupta, Disha; Hill, N. Jeremy; Adamo, Matthew A.; Ritaccio, Anthony; Schalk, Gerwin

    2014-01-01

    Introduction Electrocorticographic (ECoG) grids are placed subdurally on the cortex in people undergoing cortical resection to delineate eloquent cortex. ECoG signals have high spatial and temporal resolution and thus can be valuable for neuroscientific research. The value of these data is highest when they can be related to the cortical anatomy. Existing methods that establish this relationship rely either on post-implantation imaging using computed tomography (CT), magnetic resonance imaging (MRI) or X-Rays, or on intra-operative photographs. For research purposes, it is desirable to localize ECoG electrodes on the brain anatomy even when post-operative imaging is not available or when intra-operative photographs do not readily identify anatomical landmarks. Methods We developed a method to co-register ECoG electrodes to the underlying cortical anatomy using only a pre-operative MRI, a clinical neuronavigation device (such as BrainLab VectorVision), and fiducial markers. To validate our technique, we compared our results to data collected from six subjects who also had post-grid implantation imaging available. We compared the electrode coordinates obtained by our fiducial-based method to those obtained using existing methods, which are based on co-registering pre- and post-grid implantation images. Results Our fiducial-based method agreed with the MRI–CT method to within an average of 8.24 mm (mean, median = 7.10 mm) across 6 subjects in 3 dimensions. It showed an average discrepancy of 2.7 mm when compared to the results of the intra-operative photograph method in a 2D coordinate system. As this method does not require post-operative imaging such as CTs, our technique should prove useful for research in intra-operative single-stage surgery scenarios. To demonstrate the use of our method, we applied our method during real-time mapping of eloquent cortex during a single-stage surgery. The results demonstrated that our method can be applied intra-operatively in the absence of post-operative imaging to acquire ECoG signals that can be valuable for neuroscientific investigations. PMID:25379417
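
    The fiducial-based co-registration step reduces to a least-squares rigid transform between paired landmark coordinates. A sketch using the standard Kabsch algorithm, with hypothetical fiducial coordinates (the actual pipeline uses the neuronavigation device's paired points):

    ```python
    import numpy as np

    def kabsch(src, dst):
        """Least-squares rigid transform (R, t) with dst ~ src @ R.T + t."""
        sc, dc = src.mean(0), dst.mean(0)
        H = (src - sc).T @ (dst - dc)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = dc - R @ sc
        return R, t

    # Hypothetical fiducials: MRI space vs. neuronavigation space (mm).
    rng = np.random.default_rng(0)
    mri = rng.random((4, 3)) * 100.0
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    true_R = Q * np.sign(np.linalg.det(Q))          # a proper rotation
    nav = mri @ true_R.T + np.array([5.0, -2.0, 7.0]) + 0.5 * rng.standard_normal((4, 3))

    R, t = kabsch(mri, nav)
    electrode_mri = np.array([[50.0, 60.0, 40.0]])
    print(electrode_mri @ R.T + t)  # electrode mapped into navigation space
    ```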

  10. Clothing Protection from Ultraviolet Radiation: A New Method for Assessment.

    PubMed

    Gage, Ryan; Leung, William; Stanley, James; Reeder, Anthony; Barr, Michelle; Chambers, Tim; Smith, Moira; Signal, Louise

    2017-11-01

    Clothing modifies ultraviolet radiation (UVR) exposure from the sun and has an impact on skin cancer risk and the endogenous synthesis of vitamin D. There is no standardized method available for assessing body surface area (BSA) covered by clothing, which limits generalizability between study findings. We calculated the body cover provided by 38 clothing items using diagrams of BSA, adjusting the values to account for differences in BSA by age. Diagrams displaying each clothing item were developed and incorporated into a coverage assessment procedure (CAP). Five assessors used the CAP and Lund & Browder chart, an existing method for estimating BSA, to calculate the clothing coverage of an image sample of 100 schoolchildren. Values of clothing coverage, inter-rater reliability and assessment time were compared between CAP and Lund & Browder methods. Both methods had excellent inter-rater reliability (>0.90) and returned comparable results, although the CAP method was significantly faster in determining a person's clothing coverage. On balance, the CAP method appears to be a feasible method for calculating clothing coverage. Its use could improve comparability between sun-safety studies and aid in quantifying the health effects of UVR exposure. © 2017 The American Society of Photobiology.

  11. Estimating dietary costs of low-income women in California: a comparison of 2 approaches.

    PubMed

    Aaron, Grant J; Keim, Nancy L; Drewnowski, Adam; Townsend, Marilyn S

    2013-04-01

    Currently, no simplified approach to estimating food costs exists for a large, nationally representative sample. The objective was to compare 2 approaches for estimating individual daily diet costs in a population of low-income women in California. Cost estimates based on time-intensive method 1 (three 24-h recalls and associated food prices on receipts) were compared with estimates made by using less intensive method 2 [a food-frequency questionnaire (FFQ) and store prices]. Low-income participants (n = 121) of USDA nutrition programs were recruited. Mean daily diet costs, both unadjusted and adjusted for energy, were compared by using Pearson correlation coefficients and the Bland-Altman 95% limits of agreement between methods. Energy and nutrient intakes derived by the 2 methods were comparable; where differences occurred, the FFQ (method 2) provided higher nutrient values than did the 24-h recall (method 1). The crude daily diet cost was $6.32 by the 24-h recall method and $5.93 by the FFQ method (P = 0.221). The energy-adjusted diet cost was $6.65 by the 24-h recall method and $5.98 by the FFQ method (P < 0.001). Although the agreement between methods was weaker than expected, both approaches may be useful. Additional research is needed to further refine a large national survey approach (method 2) for estimating daily dietary costs, a method requiring minimal time from participants and moderate time from researchers.
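
    The agreement statistics used above are standard and easy to reproduce; a sketch with toy cost values (not the study's data):

    ```python
    import numpy as np

    # Bland-Altman 95% limits of agreement between the two cost estimates
    # (illustrative numbers standing in for 24-h recall vs. FFQ daily diet costs).
    recall_cost = np.array([6.1, 7.0, 5.4, 6.8, 6.3, 5.9])
    ffq_cost    = np.array([5.8, 6.2, 5.5, 6.1, 5.7, 5.6])

    diff = recall_cost - ffq_cost
    bias = diff.mean()
    loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
    r = np.corrcoef(recall_cost, ffq_cost)[0, 1]   # Pearson correlation
    print(bias, loa, r)
    ```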

  12. Optshrink LR + S: accelerated fMRI reconstruction using non-convex optimal singular value shrinkage.

    PubMed

    Aggarwal, Priya; Shrivastava, Parth; Kabra, Tanay; Gupta, Anubha

    2017-03-01

    This paper presents a new accelerated fMRI reconstruction method, namely, the OptShrink LR + S method, which reconstructs undersampled fMRI data using a linear combination of low-rank and sparse components. The low-rank component has been estimated using a non-convex optimal singular value shrinkage algorithm, while the sparse component has been estimated using convex l1 minimization. The performance of the proposed method is compared with the existing state-of-the-art algorithms on a real fMRI dataset. The proposed OptShrink LR + S method yields good qualitative and quantitative results.
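
    The L + S split can be sketched with classical proxies: singular-value thresholding for the low-rank part and elementwise soft thresholding for the sparse part. Note that this substitutes convex singular value thresholding for the paper's non-convex OptShrink shrinkage; matrix sizes and thresholds are illustrative:

    ```python
    import numpy as np

    def lr_plus_s(M, lam=None, tau=None, iters=50):
        """Toy low-rank + sparse split of a (space x time) matrix.

        Alternates singular-value thresholding for L with elementwise soft
        thresholding for S (RPCA-style proxies for the shrinkage steps)."""
        m, n = M.shape
        lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
        tau = tau if tau is not None else 0.1 * np.linalg.norm(M, 2)
        L = np.zeros_like(M)
        S = np.zeros_like(M)
        for _ in range(iters):
            U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
            L = (U * np.maximum(sig - tau, 0.0)) @ Vt          # shrink singular values
            R = M - L
            S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # soft-threshold residual
        return L, S

    M = np.random.default_rng(0).standard_normal((64, 120))
    L, S = lr_plus_s(M)
    print(np.linalg.matrix_rank(L, tol=1e-6))
    ```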

  13. Improving estimates of genetic maps: a meta-analysis-based approach.

    PubMed

    Stewart, William C L

    2007-07-01

    Inaccurate genetic (or linkage) maps can reduce the power to detect linkage, increase type I error, and distort haplotype and relationship inference. To improve the accuracy of existing maps, I propose a meta-analysis-based method that combines independent map estimates into a single estimate of the linkage map. The method uses the variance of each independent map estimate to combine them efficiently, whether the map estimates use the same set of markers or not. As compared with a joint analysis of the pooled genotype data, the proposed method is attractive for three reasons: (1) it has comparable efficiency to the maximum likelihood map estimate when the pooled data are homogeneous; (2) relative to existing map estimation methods, it can have increased efficiency when the pooled data are heterogeneous; and (3) it avoids the practical difficulties of pooling human subjects data. On the basis of simulated data modeled after two real data sets, the proposed method can reduce the sampling variation of linkage maps commonly used in whole-genome linkage scans. Furthermore, when the independent map estimates are also maximum likelihood estimates, the proposed method performs as well as or better than when they are estimated by the program CRIMAP. Since variance estimates of maps may not always be available, I demonstrate the feasibility of three different variance estimators. Overall, the method should prove useful to investigators who need map positions for markers not contained in publicly available maps, and to those who wish to minimize the negative effects of inaccurate maps. Copyright 2007 Wiley-Liss, Inc.
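
    At its core the combination is inverse-variance (precision) weighting, here sketched for a single marker position with made-up numbers (the method combines entire maps and also handles non-overlapping marker sets):

    ```python
    import numpy as np

    # Precision-weighted combination of K independent estimates of the same
    # marker position (in cM); positions and variances are hypothetical.
    positions = np.array([12.1, 11.6, 12.5])   # independent map estimates
    variances = np.array([0.40, 0.25, 0.90])   # their sampling variances

    w = 1.0 / variances
    combined = np.sum(w * positions) / np.sum(w)
    combined_var = 1.0 / np.sum(w)
    print(combined, combined_var)  # pooled position and its (smaller) variance
    ```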

  14. [Sex survey research in Germany and Europe : Liebesleben (LoveLives): A pilot study into the sexual experiences, attitudes and relationships of adults in Germany].

    PubMed

    Matthiesen, Silja; Dekker, Arne; von Rueden, Ursula; Winkelmann, Christine; Wendt, Janine; Briken, Peer

    2017-09-01

    At the Hamburg Institute for Sex Research in Germany, a nationwide study is currently being carried out into the sexual experiences, attitudes and relationships of adults (18-75 years). The main focus of this pilot study is to test the comprehensibility and length of a data collecting instrument as well as the comparison of two data collecting methods with regard to reliability and representativeness of the results as well as of the refusal rate. To this end face-to-face interviews (n = 500) and questionnaires sent by post (n = 500) are to be compared with each other as methods. The data to be collected relates to sexuality, particularly the prevention of HIV and other sexually transmitted infections (STIs). The WHO definition of sexual health forms the basis for the study and thus connects up with the existing sex survey research in Europe and western industrial nations. Comparable surveys have been conducted over the past ten years in more than 30 European countries using a variety of methods. The focus of the study is placed upon the increase that has been observed for several years now in certain STIs. The article provides an overview of existing sex survey research in Europe. It becomes clear that the studies conducted so far are very heterogeneous with regard to chosen method, sampling techniques and the choice of content focus, so that no suitable data for cross-national comparability are currently available.

  15. COMPARISON OF TWO METHODS FOR THE ISOLATION OF SALMONELLAE FROM IMPORTED FOODS.

    PubMed

    TAYLOR, W I; HOBBS, B C; SMITH, M E

    1964-01-01

    Two methods for the detection of salmonellae in foods were compared in 179 imported meat and egg samples. The number of positive samples and replications, and the number of strains and kinds of serotypes were statistically comparable by both the direct enrichment method of the Food Hygiene Laboratory in England, and the pre-enrichment method devised for processed foods in the United States. Boneless frozen beef, veal, and horsemeat imported from five countries for consumption in England were found to have salmonellae present in 48 of 116 (41%) samples. Dried egg products imported from three countries were observed to have salmonellae in 10 of 63 (16%) samples. The high incidence of salmonellae isolated from imported foods illustrated the existence of an international health hazard resulting from the continuous introduction of exogenous strains of pathogenic microorganisms on a large scale.

  16. Analysis of Test Case Computations and Experiments for the First Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Schuster, David M.; Heeg, Jennifer; Wieseman, Carol D.; Chwalowski, Pawel

    2013-01-01

    This paper compares computational and experimental data from the Aeroelastic Prediction Workshop (AePW) held in April 2012. This workshop was designed as a series of technical interchange meetings to assess the state of the art of computational methods for predicting unsteady flowfields and static and dynamic aeroelastic response. The goals are to provide an impartial forum to evaluate the effectiveness of existing computer codes and modeling techniques to simulate aeroelastic problems and to identify computational and experimental areas needing additional research and development. Three subject configurations were chosen from existing wind-tunnel data sets where there is pertinent experimental data available for comparison. Participant researchers analyzed one or more of the subject configurations, and results from all of these computations were compared at the workshop.

  17. When Do Financial Incentives Reduce Intrinsic Motivation? Comparing Behaviors Studied in Psychological and Economic Literatures

    PubMed Central

    2013-01-01

    Objective: To review existing evidence on the potential of incentives to undermine or “crowd out” intrinsic motivation, in order to establish whether and when it predicts financial incentives to crowd out motivation for health-related behaviors. Method: We conducted a conceptual analysis to compare definitions and operationalizations of the effect, and reviewed existing evidence to identify potential moderators of the effect. Results: In the psychological literature, we find strong evidence for an undermining effect of tangible rewards on intrinsic motivation for simple tasks when motivation manifest in behavior is initially high. In the economic literature, evidence for undermining effects exists for a broader variety of behaviors, in settings that involve a conflict of interest between parties. By contrast, for health related behaviors, baseline levels of incentivized behaviors are usually low, and only a subset involve an interpersonal conflict of interest. Correspondingly, we find no evidence for crowding out of incentivized health behaviors. Conclusion: The existing evidence does not warrant a priori predictions that an undermining effect would be found for health-related behaviors. Health-related behaviors and incentives schemes differ greatly in moderating characteristics, which should be the focus of future research. PMID:24001245

  18. A novel method of utilizing permeable reactive kiddle (PRK) for the remediation of acid mine drainage.

    PubMed

    Lee, Woo-Chun; Lee, Sang-Woo; Yun, Seong-Taek; Lee, Pyeong-Koo; Hwang, Yu Sik; Kim, Soon-Oh

    2016-01-15

    Numerous technologies have been developed and applied to remediate AMD, but each has specific drawbacks. To overcome the limitations of existing methods and improve their effectiveness, we propose a novel method utilizing a permeable reactive kiddle (PRK). This manuscript explores the performance of the PRK method. In line with the concept of green technology, the PRK method recycles industrial waste, such as steel slag and waste cast iron. Our results demonstrate that the PRK method can be applied to remediate AMD under optimal operational conditions. In particular, the method is simple to install and inexpensive compared with established technologies. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Network Modeling and Energy-Efficiency Optimization for Advanced Machine-to-Machine Sensor Networks

    PubMed Central

    Jung, Sungmo; Kim, Jong Hyun; Kim, Seoksoo

    2012-01-01

    Wireless machine-to-machine sensor networks with multiple radio interfaces are expected to have several advantages, including high spatial scalability, low event detection latency, and low energy consumption. Here, we propose a network model design method involving network approximation and an optimized multi-tiered clustering algorithm that maximizes node lifespan by minimizing energy consumption in a non-uniformly distributed network. Simulation results show that the cluster scales and network parameters determined with the proposed method facilitate a more efficient performance compared to existing methods. PMID:23202190

  20. [Recurrence plot analysis of HRV for brain ischemia and asphyxia].

    PubMed

    Chen, Xiaoming; Qiu, Yihong; Zhu, Yisheng

    2008-02-01

    Heart rate variability (HRV) is the small beat-to-beat variation in the cardiac cycle, which reflects the balance between the sympathetic and vagus nerves. Since the nonlinear character of HRV has been confirmed, the recurrence plot method, a nonlinear dynamic analysis method based on complexity, can be used to analyze HRV. The results showed that the recurrence plot structures and some quantitative indices (L-Mean, L-Entr) during an asphyxia insult vary significantly compared with those under normal conditions, which offers a new method to monitor brain injury from asphyxia.
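
    A recurrence plot is straightforward to compute from an RR-interval series: time-delay embed the series and threshold pairwise distances. A sketch with simulated data (embedding parameters and threshold are illustrative; indices such as L-Mean and L-Entr are then derived from the diagonal line lengths of R):

    ```python
    import numpy as np

    def recurrence_plot(x, dim=3, delay=1, eps=0.1):
        """Binary recurrence matrix of a scalar series after time-delay embedding."""
        n = len(x) - (dim - 1) * delay
        emb = np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])
        dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
        return (dists < eps).astype(int)

    # Toy RR-interval series (seconds); real use takes beat-to-beat intervals from ECG.
    rng = np.random.default_rng(0)
    rr = 0.8 + 0.05 * np.sin(np.arange(300) * 0.3) + 0.01 * rng.standard_normal(300)
    R = recurrence_plot(rr, dim=3, delay=2, eps=0.05)
    print(R.shape, R.mean())  # recurrence rate of the plot
    ```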

  1. Forecasting Jakarta composite index (IHSG) based on chen fuzzy time series and firefly clustering algorithm

    NASA Astrophysics Data System (ADS)

    Ningrum, R. W.; Surarso, B.; Farikhin; Safarudin, Y. M.

    2018-03-01

    This paper proposes the combination of Firefly Algorithm (FA) and Chen Fuzzy Time Series Forecasting. Most of the existing fuzzy forecasting methods based on fuzzy time series use the static length of intervals. Therefore, we apply an artificial intelligence, i.e., Firefly Algorithm (FA) to set non-stationary length of intervals for each cluster on Chen Method. The method is evaluated by applying on the Jakarta Composite Index (IHSG) and compare with classical Chen Fuzzy Time Series Forecasting. Its performance verified through simulation using Matlab.

  2. Data-Driven Benchmarking of Building Energy Efficiency Utilizing Statistical Frontier Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kavousian, A; Rajagopal, R

    2014-01-01

    Frontier methods quantify the energy efficiency of buildings by forming an efficient frontier (best-practice technology) and by comparing all buildings against that frontier. Because energy consumption fluctuates over time, the efficiency scores are stochastic random variables. Existing applications of frontier methods in energy efficiency either treat efficiency scores as deterministic values or estimate their uncertainty by resampling from one set of measurements. Availability of smart meter data (repeated measurements of energy consumption of buildings) enables using actual data to estimate the uncertainty in efficiency scores. Additionally, existing applications assume a linear form for an efficient frontier; i.e., they assume that the best-practice technology scales up and down proportionally with building characteristics. However, previous research shows that buildings are nonlinear systems. This paper proposes a statistical method called stochastic energy efficiency frontier (SEEF) to estimate a bias-corrected efficiency score and its confidence intervals from measured data. The paper proposes an algorithm to specify the functional form of the frontier, identify the probability distribution of the efficiency score of each building using measured data, and rank buildings based on their energy efficiency. To illustrate the power of SEEF, this paper presents the results from applying SEEF on a smart meter data set of 307 residential buildings in the United States. SEEF efficiency scores are used to rank individual buildings based on energy efficiency, to compare subpopulations of buildings, and to identify irregular behavior of buildings across different time-of-use periods. SEEF is an improvement to the energy-intensity method (comparing kWh/sq.ft.): whereas SEEF identifies efficient buildings across the entire spectrum of building sizes, the energy-intensity method showed bias toward smaller buildings. The results of this research are expected to assist researchers and practitioners compare and rank (i.e., benchmark) buildings more robustly and over a wider range of building types and sizes. Eventually, doing so is expected to result in improved resource allocation in energy-efficiency programs.
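
    A drastically simplified frontier sketch, not the SEEF estimator itself: fit a low quantile of log consumption versus log size with quantile regression and score each building against it (all data and the 5% quantile are assumptions):

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    sqft = rng.uniform(800, 4000, 300)                              # building sizes
    kwh = 2.0 * sqft ** 0.8 * np.exp(rng.normal(0.3, 0.25, 300))    # toy consumption

    # Log-log quantile regression: the 5th percentile plays the role of the
    # best-practice (nonlinear in the original units) frontier.
    X = sm.add_constant(np.log(sqft))
    frontier = sm.QuantReg(np.log(kwh), X).fit(q=0.05)
    kwh_frontier = np.exp(frontier.predict(X))

    efficiency = kwh_frontier / kwh   # 1.0 = on the frontier; smaller = less efficient
    print(efficiency.min(), efficiency.max())
    ```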

  3. Structural design using equilibrium programming formulations

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.

    1995-01-01

    Solutions to increasingly larger structural optimization problems are desired. However, computational resources are strained to meet this need. New methods will be required to solve increasingly larger problems. The present approaches to solving large-scale problems involve approximations for the constraints of structural optimization problems and/or decomposition of the problem into multiple subproblems that can be solved in parallel. An area of game theory, equilibrium programming (also known as noncooperative game theory), can be used to unify these existing approaches from a theoretical point of view (considering the existence and optimality of solutions), and be used as a framework for the development of new methods for solving large-scale optimization problems. Equilibrium programming theory is described, and existing design techniques such as fully stressed design and constraint approximations are shown to fit within its framework. Two new structural design formulations are also derived. The first new formulation is another approximation technique which is a general updating scheme for the sensitivity derivatives of design constraints. The second new formulation uses a substructure-based decomposition of the structure for analysis and sensitivity calculations. Significant computational benefits of the new formulations compared with a conventional method are demonstrated.

  4. Reconstructing gravitational wave source parameters via direct comparisons to numerical relativity I: Method

    NASA Astrophysics Data System (ADS)

    Lange, Jacob; O'Shaughnessy, Richard; Healy, James; Lousto, Carlos; Shoemaker, Deirdre; Lovelace, Geoffrey; Scheel, Mark; Ossokine, Serguei

    2016-03-01

    In this talk, we describe a procedure to reconstruct the parameters of sufficiently massive coalescing compact binaries via direct comparison with numerical relativity simulations. For sufficiently massive sources, existing numerical relativity simulations are long enough to cover the observationally accessible part of the signal. Due to the signal's brevity, the posterior parameter distribution it implies is broad, simple, and easily reconstructed from information gained by comparing to only the sparse sample of existing numerical relativity simulations. We describe how followup simulations can corroborate and improve our understanding of a detected source. Since our method can include all physics provided by full numerical relativity simulations of coalescing binaries, it provides a valuable complement to alternative techniques which employ approximations to reconstruct source parameters. Supported by NSF Grant PHY-1505629.

  5. Research of ceramic matrix for a safe immobilization of radioactive sludge waste

    NASA Astrophysics Data System (ADS)

    Dorofeeva, Ludmila; Orekhov, Dmitry

    2018-03-01

    The existing method of radioactive waste hardening by fixation in a ceramic matrix was studied and improved. For samples coated with sodium silicate and tested after storage in air, the radionuclide leaching rate was determined. The properties of the clay ceramic and the optimum sintering conditions were defined. Experimental data were obtained on the influence of the sintering temperature regime and of the quantities of water, sludge and additives in the samples on their mechanical durability and water resistance. The comparative analysis of this research is aimed at improving the existing method of hardening radioactive waste by inclusion in a ceramic matrix, and it reveals the advantages of the obtained results over existing analogs.

  6. 2 Major incident triage and the implementation of a new triage tool, the MPTT-24.

    PubMed

    Vassallo, James; Smith, Jason

    2017-12-01

    Over the last decade, a number of European cities, including London, have witnessed high profile terrorist attacks resulting in major incidents with large numbers of casualties. Triage, the process of categorising casualties on the basis of their clinical acuity, is a key principle in the effective management of major incidents. The Modified Physiological Triage Tool (MPTT) is a recently developed primary triage tool which in comparison to existing triage tools, including the 2013 UK NARU Sieve, demonstrates the greatest sensitivity at predicting need for life-saving intervention (LSI) within both military and civilian populations. To improve the applicability and usability of the MPTT we increased the upper respiratory rate threshold to 24 breaths per minute (MPTT-24), to make it divisible by four, and included an assessment of external catastrophic haemorrhage. The aim of this study was to conduct a feasibility analysis of the proposed MPTT-24 (Figure 1: MPTT-24). METHODS: A retrospective review of the Joint Theatre Trauma Registry (JTTR) and Trauma Audit Research Network (TARN) databases was performed for all adult (>18 years) patients presenting between 2006-2013 (JTTR) and 2014 (TARN). Patients were defined as priority one (P1) if they had received one or more life-saving interventions. Using first recorded hospital physiology, patients were categorised as P1 or not-P1 by existing triage tools and both MPTT and MPTT-24. Performance characteristics were evaluated using sensitivity, specificity, under and over-triage with a McNemar test to determine statistical significance. Basic study characteristics are shown in Table 1. Both the MPTT and MPTT-24 outperformed all existing triage methods with a statistically significant (p<0.001) absolute reduction of between 25.5% and 29.5% in under-triage when compared to existing UK civilian methods (NARU Sieve). In both populations the MPTT-24 demonstrated an absolute reduction in sensitivity with an increase in specificity when compared to the MPTT. A statistically significant difference was observed between the MPTT and MPTT-24 in the way they categorised TARN and JTTR cases as P1 (p<0.001). (Table 1: study characteristics; Table 2: performance analysis.) CONCLUSION: Existing UK methods of primary major incident triage, including the NARU Sieve, are not fit for purpose, with unacceptably high rates of under-triage. When compared to the MPTT, the MPTT-24 allows for a more rapid triage assessment and continues to outperform existing triage tools at predicting need for life-saving intervention. Its use should be considered in civilian and military major incidents. © 2017, Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  7. Comparing an Atomic Model or Structure to a Corresponding Cryo-electron Microscopy Image at the Central Axis of a Helix.

    PubMed

    Zeil, Stephanie; Kovacs, Julio; Wriggers, Willy; He, Jing

    2017-01-01

    Three-dimensional density maps of biological specimens from cryo-electron microscopy (cryo-EM) can be interpreted in the form of atomic models that are modeled into the density, or they can be compared to known atomic structures. When the central axis of a helix is detectable in a cryo-EM density map, it is possible to quantify the agreement between this central axis and a central axis calculated from the atomic model or structure. We propose a novel arc-length association method to compare the two axes reliably. This method was applied to 79 helices in simulated density maps and six case studies using cryo-EM maps at 6.4-7.7 Å resolution. The arc-length association method is then compared to three existing measures that evaluate the separation of two helical axes: a two-way distance between point sets, the length difference between two axes, and the individual amino acid detection accuracy. The results show that our proposed method sensitively distinguishes lateral and longitudinal discrepancies between the two axes, which makes the method particularly suitable for the systematic investigation of cryo-EM map-model pairs.

  8. Comparing an Atomic Model or Structure to a Corresponding Cryo-electron Microscopy Image at the Central Axis of a Helix

    PubMed Central

    Zeil, Stephanie; Kovacs, Julio; Wriggers, Willy

    2017-01-01

    Three-dimensional density maps of biological specimens from cryo-electron microscopy (cryo-EM) can be interpreted in the form of atomic models that are modeled into the density, or they can be compared to known atomic structures. When the central axis of a helix is detectable in a cryo-EM density map, it is possible to quantify the agreement between this central axis and a central axis calculated from the atomic model or structure. We propose a novel arc-length association method to compare the two axes reliably. This method was applied to 79 helices in simulated density maps and six case studies using cryo-EM maps at 6.4–7.7 Å resolution. The arc-length association method is then compared to three existing measures that evaluate the separation of two helical axes: a two-way distance between point sets, the length difference between two axes, and the individual amino acid detection accuracy. The results show that our proposed method sensitively distinguishes lateral and longitudinal discrepancies between the two axes, which makes the method particularly suitable for the systematic investigation of cryo-EM map–model pairs. PMID:27936925
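
    One of the baseline measures mentioned above, the two-way distance between point sets, can be sketched with SciPy's directed Hausdorff distance on two toy axes (the arc-length association method itself is not reproduced here):

    ```python
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    # Two central axes as 3-D point sets: a map axis along z and a model axis
    # offset laterally by 1.5 Angstrom (toy data).
    z = np.linspace(0.0, 20.0, 100)
    axis_map = np.column_stack((np.zeros_like(z), np.zeros_like(z), z))
    axis_model = axis_map + np.array([1.5, 0.0, 0.0])

    # Symmetric (two-way) Hausdorff distance between the two point sets.
    d = max(directed_hausdorff(axis_map, axis_model)[0],
            directed_hausdorff(axis_model, axis_map)[0])
    print(d)  # ~1.5
    ```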

  9. Ensemble framework based real-time respiratory motion prediction for adaptive radiotherapy applications.

    PubMed

    Tatinati, Sivanagaraja; Nazarpour, Kianoush; Tech Ang, Wei; Veluvolu, Kalyana C

    2016-08-01

    Successful treatment of tumors with motion-adaptive radiotherapy requires accurate prediction of respiratory motion, ideally with a prediction horizon larger than the latency of the radiotherapy system. Accurate prediction of respiratory motion is, however, a non-trivial task due to the presence of irregularities and intra-trace variabilities, such as baseline drift and temporal changes in the fundamental frequency pattern. In this paper, to enhance the accuracy of respiratory motion prediction, we propose a stacked regression ensemble framework that integrates heterogeneous respiratory motion prediction algorithms. We further address two crucial issues for developing a successful ensemble framework: (1) selection of appropriate prediction methods to ensemble (level-0 methods) among the best existing prediction methods; and (2) finding a suitable generalization approach that can successfully exploit the relative advantages of the chosen level-0 methods. The efficacy of the developed ensemble framework is assessed with real respiratory motion traces acquired from 31 patients undergoing treatment. Results show that the developed ensemble framework improves the prediction performance significantly compared to the best existing methods. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
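
    The stacking architecture described above is generic: level-0 predictors are trained on the motion trace, and a level-1 generalizer learns to combine their out-of-fold predictions. A minimal sketch with scikit-learn, assuming numpy arrays; the level-0 models shown (SVR, k-NN) are placeholders, not the algorithms selected in the paper:

    ```python
    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.linear_model import Ridge
    from sklearn.svm import SVR
    from sklearn.neighbors import KNeighborsRegressor

    def stacked_predict(X_train, y_train, X_test, level0=None, n_splits=5):
        """Stacked regression: train level-0 models, then fit a level-1
        generalizer on their out-of-fold predictions (Wolpert-style stacking)."""
        if level0 is None:
            level0 = [SVR(), KNeighborsRegressor()]   # placeholder level-0 learners
        oof = np.zeros((len(X_train), len(level0)))   # out-of-fold predictions
        for j, model in enumerate(level0):
            for tr, va in KFold(n_splits).split(X_train):
                model.fit(X_train[tr], y_train[tr])
                oof[va, j] = model.predict(X_train[va])
        level1 = Ridge().fit(oof, y_train)            # level-1 generalizer
        for model in level0:                          # refit on all training data
            model.fit(X_train, y_train)
        meta = np.column_stack([m.predict(X_test) for m in level0])
        return level1.predict(meta)
    ```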

  10. Automatic coronary artery segmentation based on multi-domains remapping and quantile regression in angiographies.

    PubMed

    Li, Zhixun; Zhang, Yingtao; Gong, Huiling; Li, Weimin; Tang, Xianglong

    2016-12-01

    Coronary artery disease has become one of the most dangerous diseases threatening human life, and coronary artery segmentation is the basis of computer-aided diagnosis and analysis. Existing segmentation methods struggle with the complex vascular texture that results from the projective nature of conventional coronary angiography. Given the large amount of data and complex vascular shapes, manual annotation has become increasingly unrealistic; a fully automatic segmentation method is necessary in clinical practice. In this work, we study a method based on reliable boundaries via multi-domains remapping and robust discrepancy correction via distance balance and quantile regression for automatic coronary artery segmentation of angiography images. The proposed method not only segments overlapping vascular structures robustly, but also achieves good performance in low-contrast regions. The effectiveness of our approach is demonstrated on a variety of coronary blood vessels compared with the existing methods. The overall segmentation performances si, fnvf, fvpf and tpvf were 95.135%, 3.733%, 6.113% and 96.268%, respectively. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. A new approach to estimate time-to-cure from cancer registries data.

    PubMed

    Boussari, Olayidé; Romain, Gaëlle; Remontet, Laurent; Bossard, Nadine; Mounier, Morgane; Bouvier, Anne-Marie; Binquet, Christine; Colonna, Marc; Jooste, Valérie

    2018-04-01

    Cure models have been adapted to the net survival context to provide important indicators from population-based cancer data, such as the cure fraction and the time-to-cure. However, existing methods for computing time-to-cure suffer from some limitations. Cure models in the net survival framework were briefly overviewed and a new definition of time-to-cure was introduced as the time TTC at which P(t), the estimated covariate-specific probability of being cured at a given time t after diagnosis, reaches 0.95. We applied flexible parametric cure models to data of four cancer sites provided by the French network of cancer registries (FRANCIM). Then estimates of the time-to-cure by TTC and by two existing methods were derived and compared. Cure fractions and probabilities P(t) were also computed. Depending on the age group, TTC ranged from 8 to 10 years for colorectal and pancreatic cancer and was nearly 12 years for breast cancer. In thyroid cancer patients under 55 years at diagnosis, TTC was strikingly 0: the probability of being cured was >0.95 just after diagnosis. This is an interesting result regarding the health insurance premiums of these patients. The estimated values of time-to-cure from the three approaches were close for colorectal cancer only. We propose a new approach, based on the estimated covariate-specific probability of being cured, to estimate time-to-cure. Compared to the two existing methods, the new approach seems more intuitive and natural and less sensitive to the survival time distribution. Copyright © 2018 Elsevier Ltd. All rights reserved.
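
    Computationally, the proposed definition of time-to-cure reduces to a threshold crossing of the estimated cure probability curve. A minimal sketch, assuming P(t) has already been estimated on a time grid by a flexible parametric cure model; the example curve below is invented for illustration:

    ```python
    import numpy as np

    def time_to_cure(t_grid, p_cure, threshold=0.95):
        """Return the earliest time at which the covariate-specific probability
        of being cured, P(t), reaches the threshold (0.95 in the paper)."""
        p_cure = np.asarray(p_cure)
        idx = np.argmax(p_cure >= threshold)
        if p_cure[idx] < threshold:        # threshold never reached on the grid
            return np.inf
        return t_grid[idx]

    # Example: an invented P(t) curve that crosses 0.95 at about 9 years
    t = np.linspace(0, 15, 151)
    p = 1 - np.exp(-t / 3.0)
    print(time_to_cure(t, p))  # ~9.0
    ```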

  12. Determination of laser cutting process conditions using the preference selection index method

    NASA Astrophysics Data System (ADS)

    Madić, Miloš; Antucheviciene, Jurgita; Radovanović, Miroslav; Petković, Dušan

    2017-03-01

    Determination of adequate parameter settings for simultaneous improvement of multiple quality and productivity characteristics is of great practical importance in laser cutting. This paper discusses the application of the preference selection index (PSI) method for discrete optimization of the CO2 laser cutting of stainless steel. The main motivation for applying the PSI method is that it is an almost unexplored multi-criteria decision making (MCDM) method and, moreover, does not require assessment of the relative significance of the considered criteria. After reviewing and comparing the existing approaches for determining laser cutting parameter settings, the application of the PSI method is explained in detail. The experiment was realized using Taguchi's L27 orthogonal array. Roughness of the cut surface, heat affected zone (HAZ), kerf width and material removal rate (MRR) were considered as optimization criteria. The proposed methodology is found to be very useful in a real manufacturing environment since it involves simple calculations which are easy to understand and implement. However, while applying the PSI method it was observed that it may not be useful in situations where a large number of alternatives have attribute values (performances) very close to those which are preferred.
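
    The simplicity claimed above is easy to see in code: normalize the decision matrix, measure the preference variation of each criterion, convert it into an implicit weight, and score each alternative. A minimal sketch following the standard PSI formulation; the example data are invented:

    ```python
    import numpy as np

    def psi_rank(X, benefit):
        """Preference selection index (PSI): rank alternatives (rows of X)
        over criteria (columns) without eliciting criteria weights."""
        X = np.asarray(X, dtype=float)
        # Normalize: larger-is-better for benefit criteria, inverted for cost criteria
        R = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)
        pv = ((R - R.mean(axis=0)) ** 2).sum(axis=0)  # preference variation per criterion
        phi = 1.0 - pv                                # deviation in preference value
        w = phi / phi.sum()                           # implicit criteria weights
        I = R @ w                                     # preference selection index
        return I, np.argsort(-I)                      # higher index = better alternative

    # Example: 3 cutting settings, criteria = [roughness (cost), MRR (benefit)]
    I, order = psi_rank([[2.1, 30.0], [1.8, 22.0], [2.5, 35.0]],
                        benefit=np.array([False, True]))
    ```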

  13. A Novel Method for Remote Depth Estimation of Buried Radioactive Contamination.

    PubMed

    Ukaegbu, Ikechukwu Kevin; Gamage, Kelum A A

    2018-02-08

    Existing remote depth estimation methods for buried radioactive wastes are either limited to depths of less than 2 cm or are based on empirical models that require foreknowledge of the maximum penetrable depth of the contamination. This severely limits their usefulness in some real-life subsurface contamination scenarios. Therefore, this work presents a novel remote depth estimation method based on an approximate three-dimensional linear attenuation model that exploits the benefits of multiple measurements obtained, using a radiation detector, from the surface of the material in which the contamination is buried. Simulation results showed that the proposed method is able to detect the depth of caesium-137 and cobalt-60 contamination buried up to 40 cm in both sand and concrete. Furthermore, results from experiments show that the method is able to detect the depth of caesium-137 contamination buried up to 12 cm in sand. The lower maximum depth recorded in the experiment is due to limitations of the detector and the low activity of the caesium-137 source used. Nevertheless, both results demonstrate the superior capability of the proposed method compared to existing methods.
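
    The idea of exploiting multiple surface measurements can be illustrated with a deliberately simplified forward model: a buried point source whose surface count rate falls off with attenuation and the inverse square law, with depth recovered by least squares. This is only a sketch under those assumptions, not the paper's three-dimensional linear attenuation model; the coefficient mu and all numbers are invented:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def count_rate(x, depth, strength, mu=0.2):
        """Idealized count rate from a point source buried at `depth`, seen by a
        detector at lateral surface offset x (attenuation plus inverse square law).
        mu is an assumed effective linear attenuation coefficient (1/cm)."""
        r = np.sqrt(depth**2 + x**2)
        return strength * np.exp(-mu * r) / r**2

    # Fit depth and source strength from counts at several surface positions
    x = np.linspace(-20, 20, 9)                      # detector offsets (cm)
    rng = np.random.default_rng(0)
    y = count_rate(x, depth=12.0, strength=1e5) * rng.normal(1.0, 0.02, x.size)
    popt, _ = curve_fit(count_rate, x, y, p0=[5.0, 1e4])
    print(popt[0])   # recovered burial depth, close to 12 cm
    ```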

  14. A comparison of viscoelastic damping models

    NASA Technical Reports Server (NTRS)

    Slater, Joseph C.; Belvin, W. Keith; Inman, Daniel J.

    1993-01-01

    Modern finite element methods (FEMs) enable the precise modeling of mass and stiffness properties in what were in the past overwhelmingly large and complex structures. These models allow the accurate determination of natural frequencies and mode shapes. However, adequate methods for modeling highly damped and strongly frequency-dependent structures did not exist until recently. The most commonly used method, Modal Strain Energy, does not correctly predict complex mode shapes since it is based on the assumption that the mode shapes of a structure are real. Recently, many techniques have been developed which allow the modeling of frequency-dependent damping properties of materials in a finite element compatible form. Two of these methods, the Golla-Hughes-McTavish method and the Lesieutre-Mingori method, model the frequency-dependent effects by adding coordinates to the existing system, thus maintaining the linearity of the model. The third model, proposed by Bagley and Torvik, is based on the fractional calculus method and requires fewer empirical parameters to model the frequency dependence, at the expense of the linearity of the governing equations. This work examines the Modal Strain Energy, Golla-Hughes-McTavish, and Bagley and Torvik models and compares them to determine the plausibility of using them for modeling viscoelastic damping in large structures.

  15. A Novel Method for Remote Depth Estimation of Buried Radioactive Contamination

    PubMed Central

    2018-01-01

    Existing remote depth estimation methods for buried radioactive wastes are either limited to depths of less than 2 cm or are based on empirical models that require foreknowledge of the maximum penetrable depth of the contamination. This severely limits their usefulness in some real-life subsurface contamination scenarios. Therefore, this work presents a novel remote depth estimation method based on an approximate three-dimensional linear attenuation model that exploits the benefits of multiple measurements obtained, using a radiation detector, from the surface of the material in which the contamination is buried. Simulation results showed that the proposed method is able to detect the depth of caesium-137 and cobalt-60 contamination buried up to 40 cm in both sand and concrete. Furthermore, results from experiments show that the method is able to detect the depth of caesium-137 contamination buried up to 12 cm in sand. The lower maximum depth recorded in the experiment is due to limitations of the detector and the low activity of the caesium-137 source used. Nevertheless, both results demonstrate the superior capability of the proposed method compared to existing methods. PMID:29419759

  16. Reconstructed imaging of acoustic cloak using time-lapse reversal method

    NASA Astrophysics Data System (ADS)

    Zhou, Chen; Cheng, Ying; Xu, Jian-yi; Li, Bo; Liu, Xiao-jun

    2014-08-01

    We proposed and investigated a solution to the inverse acoustic cloak problem, an anti-stealth technology to make cloaks visible, using the time-lapse reversal (TLR) method. The TLR method reconstructs the image of an unknown acoustic cloak by utilizing scattered acoustic waves. Compared to previous anti-stealth methods, the TLR method can determine not only the existence of a cloak but also its exact geometric information like definite shape, size, and position. Here, we present the process for TLR reconstruction based on time reversal invariance. This technology may have potential applications in detecting various types of cloaks with different geometric parameters.

  17. Methods for artifact detection and removal from scalp EEG: A review.

    PubMed

    Islam, Md Kafiul; Rastegarnia, Amir; Yang, Zhi

    2016-11-01

    Electroencephalography (EEG) is the most popular brain activity recording technique, used in a wide range of applications. One of the commonly faced problems in EEG recordings is the presence of artifacts that come from sources other than the brain and contaminate the acquired signals significantly. Therefore, much research over the past 15 years has focused on identifying ways of handling such artifacts in the preprocessing stage. However, this is still an active area of research, as no single existing artifact detection/removal method is complete or universal. This article presents an extensive review of the existing state-of-the-art artifact detection and removal methods from scalp EEG for all potential EEG-based applications and analyses the pros and cons of each method. First, a general overview of the different artifact types found in scalp EEG and their effects on particular applications is presented. In addition, the methods are compared based on their ability to remove certain types of artifacts and their suitability in relevant applications (only a functional comparison is provided, not a performance evaluation of the methods). Finally, the future direction and expected challenges of current research are discussed. This review is therefore expected to be helpful for interested researchers who will develop and/or apply artifact handling algorithms and techniques in the future, as well as for those willing to improve existing algorithms or propose new solutions in this particular area of research. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  18. An efficient genome-wide association test for multivariate phenotypes based on the Fisher combination function.

    PubMed

    Yang, James J; Li, Jia; Williams, L Keoki; Buu, Anne

    2016-01-05

    In genome-wide association studies (GWAS) for complex diseases, the association between a SNP and each phenotype is usually weak. Combining multiple related phenotypic traits can increase the power of gene search and is thus a practically important area that requires methodological work. This study provides a comprehensive review of existing methods for conducting GWAS on complex diseases with multiple phenotypes, including the multivariate analysis of variance (MANOVA), principal component analysis (PCA), generalized estimating equations (GEE), the trait-based association test involving the extended Simes procedure (TATES), and the classical Fisher combination test. We propose a new method that relaxes the unrealistic independence assumption of the classical Fisher combination test and is computationally efficient. To demonstrate applications of the proposed method, we also present the results of statistical analysis on the Study of Addiction: Genetics and Environment (SAGE) data. Our simulation study shows that the proposed method has higher power than existing methods while controlling the type I error rate. The GEE and the classical Fisher combination test, on the other hand, do not control the type I error rate and thus are not recommended. In general, the power of the competing methods decreases as the correlation between phenotypes increases. All the methods tend to have lower power when the multivariate phenotypes come from long-tailed distributions. The real data analysis also demonstrates that the proposed method allows us to compare the marginal results with the multivariate results and to specify which SNPs are specific to a particular phenotype or contribute to the common construct. The proposed method outperforms existing methods in most settings and also has great applications in GWAS on complex diseases with multiple phenotypes such as substance abuse disorders.
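
    For reference, the classical Fisher combination statistic discussed above is straightforward to compute; the proposed method differs by relaxing the independence assumption across correlated phenotypes, which this sketch does not do:

    ```python
    import numpy as np
    from scipy import stats

    def fisher_combination(pvals):
        """Classical Fisher combination: X = -2 * sum(log p_i) ~ chi2(2k)
        under independence of the k per-phenotype association tests."""
        pvals = np.asarray(pvals)
        X = -2.0 * np.log(pvals).sum()
        return stats.chi2.sf(X, df=2 * pvals.size)

    print(fisher_combination([0.04, 0.20, 0.01]))
    # scipy also ships this test: stats.combine_pvalues([...], method='fisher')
    ```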

  19. Minimum Description Length Block Finder, a Method to Identify Haplotype Blocks and to Compare the Strength of Block Boundaries

    PubMed Central

    Mannila, H.; Koivisto, M.; Perola, M.; Varilo, T.; Hennah, W.; Ekelund, J.; Lukk, M.; Peltonen, L.; Ukkonen, E.

    2003-01-01

    We describe a new probabilistic method for finding haplotype blocks that is based on the use of the minimum description length (MDL) principle. We give a rigorous definition of the quality of a segmentation of a genomic region into blocks and describe a dynamic programming algorithm for finding the optimal segmentation with respect to this measure. We also describe a method for finding the probability of a block boundary for each pair of adjacent markers: this gives a tool for evaluating the significance of each block boundary. We have applied the method to the published data of Daly and colleagues. The results expose some problems that exist in the current methods for the evaluation of the significance of predicted block boundaries. Our method, MDL block finder, can be used to compare block borders in different sample sets, and we demonstrate this by applying the MDL-based method to define the block structure in chromosomes from population isolates. PMID:12761696
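
    The dynamic programming step described above is independent of the MDL details: given any additive per-block cost, the optimal segmentation is found with O(n^2) block-cost evaluations. A minimal sketch, with an invented toy cost standing in for the MDL description length of a haplotype block:

    ```python
    import numpy as np

    def optimal_segmentation(n, block_cost):
        """Dynamic program for the minimum-cost segmentation of markers 0..n-1
        into contiguous blocks; block_cost(i, j) is the description length of a
        block spanning markers i..j inclusive. Returns the block start indices."""
        best = np.full(n + 1, np.inf)
        best[0] = 0.0
        back = np.zeros(n + 1, dtype=int)
        for j in range(1, n + 1):
            for i in range(j):
                c = best[i] + block_cost(i, j - 1)
                if c < best[j]:
                    best[j], back[j] = c, i
        cuts, j = [], n
        while j > 0:                  # trace back the optimal block starts
            cuts.append(back[j])
            j = back[j]
        return sorted(cuts)

    # Toy cost: fixed per-block overhead plus a length-dependent term
    print(optimal_segmentation(10, lambda i, j: 1.0 + 0.1 * (j - i + 1)))
    ```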

  20. Minimum description length block finder, a method to identify haplotype blocks and to compare the strength of block boundaries.

    PubMed

    Mannila, H; Koivisto, M; Perola, M; Varilo, T; Hennah, W; Ekelund, J; Lukk, M; Peltonen, L; Ukkonen, E

    2003-07-01

    We describe a new probabilistic method for finding haplotype blocks that is based on the use of the minimum description length (MDL) principle. We give a rigorous definition of the quality of a segmentation of a genomic region into blocks and describe a dynamic programming algorithm for finding the optimal segmentation with respect to this measure. We also describe a method for finding the probability of a block boundary for each pair of adjacent markers: this gives a tool for evaluating the significance of each block boundary. We have applied the method to the published data of Daly and colleagues. The results expose some problems that exist in the current methods for the evaluation of the significance of predicted block boundaries. Our method, MDL block finder, can be used to compare block borders in different sample sets, and we demonstrate this by applying the MDL-based method to define the block structure in chromosomes from population isolates.

  1. No Impact of the Analytical Method Used for Determining Cystatin C on Estimating Glomerular Filtration Rate in Children.

    PubMed

    Alberer, Martin; Hoefele, Julia; Benz, Marcus R; Bökenkamp, Arend; Weber, Lutz T

    2017-01-01

    Measurement of inulin clearance is considered to be the gold standard for determining kidney function in children, but this method is time consuming and expensive. The glomerular filtration rate (GFR) is on the other hand easier to calculate by using various creatinine- and/or cystatin C (Cys C)-based formulas. However, for the determination of serum creatinine (Scr) and Cys C, different and non-interchangeable analytical methods exist. Given the fact that different analytical methods for the determination of creatinine and Cys C were used in order to validate existing GFR formulas, clinicians should be aware of the type used in their local laboratory. In this study, we compared GFR results calculated on the basis of different GFR formulas and either used Scr and Cys C values as determined by the analytical method originally employed for validation or values obtained by an alternative analytical method to evaluate any possible effects on the performance. Cys C values determined by means of an immunoturbidimetric assay were used for calculating the GFR using equations in which this analytical method had originally been used for validation. Additionally, these same values were then used in other GFR formulas that had originally been validated using a nephelometric immunoassay for determining Cys C. The effect of using either the compatible or the possibly incompatible analytical method for determining Cys C in the calculation of GFR was assessed in comparison with the GFR measured by creatinine clearance (CrCl). Unexpectedly, using GFR equations that employed Cys C values derived from a possibly incompatible analytical method did not result in a significant difference concerning the classification of patients as having normal or reduced GFR compared to the classification obtained on the basis of CrCl. Sensitivity and specificity were adequate. On the other hand, formulas using Cys C values derived from a compatible analytical method partly showed insufficient performance when compared to CrCl. Although clinicians should be aware of applying a GFR formula that is compatible with the locally used analytical method for determining Cys C and creatinine, other factors might be more crucial for the calculation of correct GFR values.

  2. Localizing ECoG electrodes on the cortical anatomy without post-implantation imaging.

    PubMed

    Gupta, Disha; Hill, N Jeremy; Adamo, Matthew A; Ritaccio, Anthony; Schalk, Gerwin

    2014-01-01

    Electrocorticographic (ECoG) grids are placed subdurally on the cortex in people undergoing cortical resection to delineate eloquent cortex. ECoG signals have high spatial and temporal resolution and thus can be valuable for neuroscientific research. The value of these data is highest when they can be related to the cortical anatomy. Existing methods that establish this relationship rely either on post-implantation imaging using computed tomography (CT), magnetic resonance imaging (MRI) or X-rays, or on intra-operative photographs. For research purposes, it is desirable to localize ECoG electrodes on the brain anatomy even when post-operative imaging is not available or when intra-operative photographs do not readily identify anatomical landmarks. We developed a method to co-register ECoG electrodes to the underlying cortical anatomy using only a pre-operative MRI, a clinical neuronavigation device (such as BrainLab VectorVision), and fiducial markers. To validate our technique, we compared our results to data collected from six subjects who also had post-grid-implantation imaging available. We compared the electrode coordinates obtained by our fiducial-based method to those obtained using existing methods, which are based on co-registering pre- and post-grid-implantation images. Our fiducial-based method agreed with the MRI-CT method to within 8.24 mm on average (median 7.10 mm) across six subjects in three dimensions, and showed an average discrepancy of 2.7 mm when compared to the results of the intra-operative photograph method in a 2D coordinate system. As this method does not require post-operative imaging such as CT, our technique should prove useful for research in intra-operative single-stage surgery scenarios. To demonstrate its use, we applied our method during real-time mapping of eloquent cortex in a single-stage surgery. The results demonstrated that our method can be applied intra-operatively, in the absence of post-operative imaging, to acquire ECoG signals that can be valuable for neuroscientific investigations.
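
    Fiducial-based co-registration of this kind typically reduces to a least-squares rigid transform between paired fiducial coordinates (the Kabsch/Procrustes solution). A minimal sketch of that core step, assuming numpy arrays of matched 3D points; the rest of the pipeline (neuronavigation export, projection to the cortical surface) is not shown, and the variable names in the usage comment are hypothetical:

    ```python
    import numpy as np

    def rigid_register(src, dst):
        """Least-squares rigid transform (Kabsch/Procrustes) mapping fiducial
        coordinates `src` onto `dst`; returns rotation R and translation t."""
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        H = (src - cs).T @ (dst - cd)                 # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cd - R @ cs
        return R, t

    # Hypothetical usage: map neuronavigation-space fiducials into MRI space,
    # then apply the same transform to the electrode coordinates:
    # R, t = rigid_register(fiducials_nav, fiducials_mri)
    # electrodes_mri = electrodes_nav @ R.T + t
    ```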

  3. Extracting BI-RADS Features from Portuguese Clinical Texts

    PubMed Central

    Nassif, Houssam; Cunha, Filipe; Moreira, Inês C.; Cruz-Correia, Ricardo; Sousa, Eliana; Page, David; Burnside, Elizabeth; Dutra, Inês

    2013-01-01

    In this work we build the first BI-RADS parser for Portuguese free texts, modeled after existing approaches to extract BI-RADS features from English medical records. Our concept finder uses a semantic grammar based on the BIRADS lexicon and on iterative transferred expert knowledge. We compare the performance of our algorithm to manual annotation by a specialist in mammography. Our results show that our parser’s performance is comparable to the manual method. PMID:23797461

  4. CRITICA: coding region identification tool invoking comparative analysis

    NASA Technical Reports Server (NTRS)

    Badger, J. H.; Olsen, G. J.; Woese, C. R. (Principal Investigator)

    1999-01-01

    Gene recognition is essential to understanding existing and future DNA sequence data. CRITICA (Coding Region Identification Tool Invoking Comparative Analysis) is a suite of programs for identifying likely protein-coding sequences in DNA by combining comparative analysis of DNA sequences with more common noncomparative methods. In the comparative component of the analysis, regions of DNA are aligned with related sequences from the DNA databases; if the translation of the aligned sequences has greater amino acid identity than expected for the observed percentage nucleotide identity, this is interpreted as evidence for coding. CRITICA also incorporates noncomparative information derived from the relative frequencies of hexanucleotides in coding frames versus other contexts (i.e., dicodon bias). The dicodon usage information is derived by iterative analysis of the data, such that CRITICA is not dependent on the existence or accuracy of coding sequence annotations in the databases. This independence makes the method particularly well suited for the analysis of novel genomes. CRITICA was tested by analyzing the available Salmonella typhimurium DNA sequences. Its predictions were compared with the DNA sequence annotations and with the predictions of GenMark. CRITICA proved to be more accurate than GenMark, and moreover, many of its predictions that would seem to be errors instead reflect problems in the sequence databases. The source code of CRITICA is freely available by anonymous FTP (rdp.life.uiuc.edu, in /pub/critica) and on the World Wide Web (http://rdpwww.life.uiuc.edu).

  5. Image processing via level set curvature flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malladi, R.; Sethian, J.A.

    We present a controlled image smoothing and enhancement method based on a curvature flow interpretation of the geometric heat equation. Compared to existing techniques, the model has several distinct advantages. (i) It contains just one enhancement parameter. (ii) The scheme naturally inherits a stopping criterion from the image; continued application of the scheme produces no further change. (iii) The method is one of the fastest possible schemes based on a curvature-controlled approach.
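
    The geometric heat equation referred to above evolves the image so that its iso-intensity contours move with speed proportional to their curvature, which removes high-curvature noise while preserving smooth edges. A minimal explicit finite-difference sketch under those assumptions (the step size and iteration count are invented, and the paper's enhancement and stopping refinements are omitted):

    ```python
    import numpy as np

    def curvature_flow(img, n_iter=50, dt=0.1, eps=1e-8):
        """Level-set curvature flow smoothing: I_t = kappa * |grad I|, where
        kappa = div(grad I / |grad I|) is the curvature of the iso-intensity
        contours of the image I."""
        I = img.astype(float).copy()
        for _ in range(n_iter):
            Iy, Ix = np.gradient(I)
            mag = np.sqrt(Ix**2 + Iy**2) + eps
            ny, nx = Iy / mag, Ix / mag              # unit normal to level sets
            kappa = np.gradient(ny, axis=0) + np.gradient(nx, axis=1)
            I += dt * kappa * mag                    # evolve under curvature flow
        return I
    ```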

  6. Algorithms for the explicit computation of Penrose diagrams

    NASA Astrophysics Data System (ADS)

    Schindler, J. C.; Aguirre, A.

    2018-05-01

    An algorithm is given for explicitly computing Penrose diagrams for spherically symmetric spacetimes of the form ds² = -f(r)dt² + dr²/f(r) + r²dΩ². The resulting diagram coordinates are shown to extend the metric continuously and nondegenerately across an arbitrary number of horizons. The method is extended to include piecewise approximations to dynamically evolving spacetimes using a standard hypersurface junction procedure. Examples generated by an implementation of the algorithm are shown for standard and new cases. In the appendix, this algorithm is compared to existing methods.

  7. Direct amination of γ-halo-β-ketoesters with anilines

    PubMed Central

    Zhang, Yinan; Silverman, Richard B.

    2012-01-01

    The direct amination of α-haloacetoacetates with anilines is described. Compared to existing methods, this simple protocol provides an attractive strategy to prepare diverse γ-anilino-β-ketoesters in one step. Good to excellent yields of the amination products were obtained under robust conditions, providing versatile and useful scaffolds. PMID:22390154

  8. Motivating Latino Caregivers of Children with Asthma to Quit Smoking: A Randomized Trial

    ERIC Educational Resources Information Center

    Borrelli, Belinda; McQuaid, Elizabeth L.; Novak, Scott P.; Hammond, S. Katharine; Becker, Bruce

    2010-01-01

    Objective: Secondhand smoke exposure is associated with asthma onset and exacerbation. Latino children have higher rates of asthma morbidity than other groups. The current study compared the effectiveness of a newly developed smoking cessation treatment with existing clinical guidelines for smoking cessation. Method: Latino caregivers who smoked…

  9. An Integrated Model for Effective Knowledge Management in Chinese Organizations

    ERIC Educational Resources Information Center

    An, Xiaomi; Deng, Hepu; Wang, Yiwen; Chao, Lemen

    2013-01-01

    Purpose: The purpose of this paper is to provide organizations in the Chinese cultural context with a conceptual model for an integrated adoption of existing knowledge management (KM) methods and to improve the effectiveness of their KM activities. Design/methodology/approaches: A comparative analysis is conducted between China and the western…

  10. Oral Assessment in Mathematics: Implementation and Outcomes

    ERIC Educational Resources Information Center

    Iannone, P.; Simpson, A.

    2012-01-01

    In this article, we report the planning and implementation of an oral assessment component in a first-year pure mathematics module of a degree course in mathematics. Our aim was to examine potential barriers to using oral assessments, explore the advantages and disadvantages compared to existing common assessment methods and document the outcomes…

  11. Optimization and comparative analysis of plant organellar DNA enrichment methods suitable for next generation sequencing

    USDA-ARS?s Scientific Manuscript database

    Plant organellar genomes contain large repetitive elements that may undergo pairing or recombination to form complex structures and/or sub-genomic fragments. Organellar genomes also exist in admixtures within a given cell or tissue type (heteroplasmy) and abundance of sub-types may change through de...

  12. Effects of Alternate Test Formats in Online Courses

    ERIC Educational Resources Information Center

    Francis, Alan

    2010-01-01

    The purpose of this study was to compare differences in methods of testing for two undergraduate online courses to determine the effect of alternate test formats in relation to participant grades. Specific purposes of this study were to determine whether a difference existed in student test scores between the control and treatment groups and…

  13. The Effects of Kolb's Experiential Learning Model on Successful Intelligence in Secondary Agriculture Students

    ERIC Educational Resources Information Center

    Baker, Marshall A.; Robinson, J. Shane

    2016-01-01

    Experiential learning is an important pedagogical approach used in secondary agricultural education. Though anecdotal evidence supports the use of experiential learning, a paucity of empirical research exists supporting the effects of this approach when compared to a more conventional teaching method, such as direct instruction. Therefore, the…

  14. Providing Adaptivity in Moodle LMS Courses

    ERIC Educational Resources Information Center

    Despotovic-Zrakic, Marijana; Markovic, Aleksandar; Bogdanovic, Zorica; Barac, Dusan; Krco, Srdjan

    2012-01-01

    In this paper, we describe an approach to providing adaptivity in e-education courses. The primary goal of the paper is to enhance an existing e-education system, namely Moodle LMS, by developing a method for creating adaptive courses, and to compare its effectiveness with non-adaptive education approach. First, we defined the basic requirements…

  15. Research and engineering assessment of biological solubilization of phosphate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rogers, R.D.; McIlwain, M.E.; Losinski, S.J.

    This research and engineering assessment examined a microbial phosphate solubilization process as a method of recovering phosphate from phosphorus-containing ore, compared to the existing wet acid and electric arc methods. A total of 860 microbial isolates, collected from a range of natural environments, were tested for their ability to solubilize phosphate from rock phosphate. A bacterium (Pseudomonas cepacia) was selected for extensive characterization and evaluation of the mechanism of phosphate solubilization and of the process engineering parameters necessary to recover phosphate from rock phosphate. These studies found that the concentration of hydrogen ion and the production of organic acids arising from oxidation of the carbon source facilitated microbial solubilization of both pure insoluble phosphate compounds and phosphate rock. Genetic studies found that phosphate solubilization was linked to an enzyme system (glucose dehydrogenase). Process-related studies found that a critical solids density of 1% by weight (ore to liquid) was necessary for optimal solubilization. An engineering analysis evaluated the cost and energy requirements for a 2 million ton per year plant, whose size was selected to be comparable to existing wet acid plants.

  16. A novel minimum cost maximum power algorithm for future smart home energy management.

    PubMed

    Singaravelan, A; Kowsalya, M

    2017-11-01

    With the latest developments in smart grid technology, energy management systems can be efficiently implemented at consumer premises. In this paper, an energy management system with wireless communication and a smart meter is designed for scheduling electric home appliances efficiently, with the aim of reducing cost and peak demand. For an efficient scheduling scheme, the appliances are classified into two types: uninterruptible and interruptible. The problem formulation was constructed based on practical constraints that allow the proposed algorithm to cope with real-time situations. The formulated problem was identified as a Mixed Integer Linear Programming (MILP) problem and was solved by a step-wise approach. This paper proposes a novel Minimum Cost Maximum Power (MCMP) algorithm to solve the formulated problem. The proposed algorithm was simulated with the input data available in the existing method. For validation, results from the proposed MCMP algorithm were compared with the existing method. The comparison proves that the proposed algorithm efficiently reduces consumer electricity cost and peak demand to an optimal level with 100% task completion, without sacrificing consumer comfort.
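
    To make the MILP framing concrete, the sketch below schedules a single uninterruptible appliance against an assumed tariff and a peak cap, using the PuLP modelling library. This is an illustrative formulation only, not the paper's MCMP algorithm; all prices, power ratings, and durations are invented:

    ```python
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

    # Toy day-ahead schedule: one 2 kW uninterruptible appliance that must run
    # for 3 consecutive hours; the tariff and a 4 kW peak cap drive the schedule.
    T = 24
    price = [0.10] * 7 + [0.30] * 12 + [0.10] * 5      # assumed $/kWh tariff
    prob = LpProblem("appliance_schedule", LpMinimize)
    start = [LpVariable(f"start_{t}", cat=LpBinary) for t in range(T - 2)]
    prob += lpSum(start) == 1                          # appliance starts exactly once
    on = [lpSum(start[s] for s in range(max(0, t - 2), min(t + 1, T - 2)))
          for t in range(T)]                           # on at t if started in t-2..t
    prob += lpSum(2.0 * price[t] * on[t] for t in range(T))   # energy cost objective
    for t in range(T):
        prob += 2.0 * on[t] <= 4.0                     # peak cap (binds with more loads)
    prob.solve()
    print([t for t in range(T - 2) if start[t].value() == 1])  # cheapest start hour
    ```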

  17. Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions

    NASA Astrophysics Data System (ADS)

    Jung, J. Y.; Niemann, J. D.; Greimann, B. P.

    2016-12-01

    Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.
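
    Treating the input-error means and standard deviations as extra parameters changes nothing in the sampler itself: they are simply appended to the parameter vector explored by the Markov chain. A minimal random-walk Metropolis sketch under that assumption; log_lik and the theta layout are hypothetical placeholders for the actual SRH-1D coupling:

    ```python
    import numpy as np

    def metropolis_with_input_error(log_lik, theta0, n_iter=5000, step=0.1, rng=None):
        """Random-walk Metropolis over an augmented parameter vector theta that
        includes both model parameters and the mean/std of Gaussian input errors
        (e.g., a bias on the upstream flowrate), so that input uncertainty
        propagates into the posterior."""
        rng = rng or np.random.default_rng()
        theta = np.asarray(theta0, float)
        ll = log_lik(theta)
        chain = [theta.copy()]
        for _ in range(n_iter):
            prop = theta + step * rng.standard_normal(theta.size)
            ll_prop = log_lik(prop)
            if np.log(rng.random()) < ll_prop - ll:    # accept/reject step
                theta, ll = prop, ll_prop
            chain.append(theta.copy())
        return np.array(chain)

    # Here log_lik(theta) would run the transport model with inputs shifted by
    # the sampled error parameters and score simulated bed elevations against
    # observations; theta = [model params..., input_bias_mean, input_bias_sd].
    ```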

  18. Estimation of the behavior factor of existing RC-MRF buildings

    NASA Astrophysics Data System (ADS)

    Vona, Marco; Mastroberti, Monica

    2018-01-01

    In recent years, several research groups have studied a new generation of analysis methods for the seismic response assessment of existing buildings. Nevertheless, many important developments are still needed in order to define more reliable and effective assessment procedures. Moreover, for existing buildings it should be highlighted that, due to the low knowledge level, linear elastic analysis is often the only analysis method allowed. The codes themselves (such as NTC2008 and EC8) consider linear dynamic analysis with a behavior factor as the reference method for the evaluation of seismic demand. This type of analysis is based on a linear-elastic structural model subjected to a design spectrum, obtained by reducing the elastic spectrum through a behavior factor. The behavior factor (reduction factor, or q factor in some codes) is used to reduce the elastic spectrum ordinates, or the forces obtained from a linear analysis, in order to take into account the nonlinear structural capacity. Behavior factors should be defined based on the several parameters that influence the seismic nonlinear capacity, such as the mechanical characteristics of materials, the structural system, irregularity, and design procedures. In practical applications, there is still an evident lack of detailed rules and accurate behavior factor values adequate for existing buildings. In this work, investigations of the seismic capacity of the main existing RC-MRF building types have been carried out. In order to make a correct evaluation of the seismic force demand, behavior factor values coherent with a force-based seismic safety assessment procedure have been proposed and compared with the values reported in the Italian seismic code, NTC08.
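
    In the standard code notation the abstract refers to (e.g., EC8), the behavior factor enters as a simple spectral reduction:

    ```latex
    % Design spectrum obtained by reducing the elastic spectrum S_e(T)
    % by the behavior factor q:
    S_d(T) = \frac{S_e(T)}{q}
    ```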

  19. Differences exist across insurance schemes in China post-consolidation

    PubMed Central

    Yi, Danhui; Wang, Xiaojun; Jiang, Yan; Wang, Yu; Liu, Xinchun

    2017-01-01

    Background In China, the basic insurance system consists of three schemes: the UEBMI (Urban Employee Basic Medical Insurance), URBMI (Urban Resident Basic Medical Insurance), and NCMS (New Cooperative Medical Scheme), across which significant differences have been observed. Since 2009, the central government has been experimenting with consolidating these schemes in selected areas. This study examines whether differences still exist across schemes after the consolidation. Methods A survey was conducted in the city of Suzhou, collecting data on subjects 45 years old and above with at least one inpatient or outpatient treatment during a period of twelve months. Analysis on 583 subjects was performed comparing subjects’ characteristics across insurance schemes. A resampling-based method was applied to compute the predicted gross medical cost, OOP (out-of-pocket) cost, and insurance reimbursement rate. Results Subjects under different insurance schemes differ in multiple aspects. For inpatient treatments, subjects under the URBMI have the highest observed and predicted gross and OOP costs, while those under the UEBMI have the lowest. For outpatient treatments, subjects under the UEBMI and URBMI have comparable costs, while those under the NCMS have much lower costs. Subjects under the NCMS also have a much lower reimbursement rate. Conclusions Differences still exist across schemes in medical costs and insurance reimbursement rate post-consolidation. Further investigations are needed to identify the causes, and interventions are needed to eliminate such differences. PMID:29125837

  20. Bias correction for selecting the minimal-error classifier from many machine learning models.

    PubMed

    Ding, Ying; Tang, Shaowu; Liao, Serena G; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C

    2014-11-15

    Supervised machine learning is commonly applied in genomic research to construct a classifier from the training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists in general. It has been a common practice to apply many machine learning methods and report the method that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that could not be validated later in independent cohorts. In this article, we illustrated the probabilistic framework of the problem and explored the statistical and asymptotic properties. We proposed a new bias correction method based on learning curve fitting by inverse power law (IPL) and compared it with three existing methods: nested cross-validation, weighted mean correction and the Tibshirani-Tibshirani procedure. All methods were compared in simulation datasets, five moderate size real datasets and two large breast cancer datasets. The result showed that IPL outperforms the other methods in bias correction with smaller variance, and it has an additional advantage to extrapolate error estimates for larger sample sizes, a practical feature to recommend whether more samples should be recruited to improve the classifier and accuracy. Availability: an R package 'MLbias' and all source files are publicly available at tsenglab.biostat.pitt.edu/software.htm. Contact: ctseng@pitt.edu. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
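
    The inverse power law idea can be sketched compactly: fit the learning curve error(n) = a*n^(-b) + c to cross-validation errors observed at several subsample sizes, then extrapolate. The numbers below are invented for illustration, and the paper's full procedure, which corrects the model-selection bias itself, is not shown:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def inverse_power_law(n, a, b, c):
        """Learning curve: expected error rate as a function of sample size n."""
        return a * n ** (-b) + c

    # Fit the curve to CV error rates observed at several subsample sizes,
    # then extrapolate the error estimate to a larger sample size.
    n_obs = np.array([20, 30, 40, 50, 60])
    err = np.array([0.42, 0.36, 0.33, 0.31, 0.30])     # illustrative CV errors
    popt, _ = curve_fit(inverse_power_law, n_obs, err, p0=[1.0, 0.5, 0.2],
                        bounds=([0, 0, 0], [np.inf, 2, 1]))
    print(inverse_power_law(100, *popt))               # extrapolated error at n=100
    ```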

  1. Some new exact solitary wave solutions of the van der Waals model arising in nature

    NASA Astrophysics Data System (ADS)

    Bibi, Sadaf; Ahmed, Naveed; Khan, Umar; Mohyud-Din, Syed Tauseef

    2018-06-01

    This work applies two well-known methods, the exponential rational function method (ERFM) and the generalized Kudryashov method (GKM), to seek new exact solutions of the van der Waals normal form for fluidized granular matter, which is linked to natural phenomena and industrial applications. New soliton solutions such as kink, periodic and solitary wave solutions are established, together with 2D and 3D graphical patterns to clarify their physical features. Our comparison reveals that these methods outperform several existing ones. The worked-out solutions show that the suggested methods are simple and reliable compared with many other approaches for tackling nonlinear equations arising in the applied sciences.

  2. A pilot exploratory investigation on pregnant women's views regarding STan fetal monitoring technology.

    PubMed

    Bryson, Kate; Wilkinson, Chris; Kuah, Sabrina; Matthews, Geoff; Turnbull, Deborah

    2017-12-29

    Women's views are critical for informing the planning and delivery of maternity care services. ST segment analysis (STan) is a promising method for more accurately detecting when unborn babies are at risk of brain damage or death during labour; it is being trialled for the first time in Australia. This is the first study to examine women's views about STan monitoring in this context. Semi-structured interviews were conducted with pregnant women recruited across a range of clinical locations at the study hospital. The interviews included hypothetical scenarios to assess women's prospective views about STan monitoring (as an adjunct to cardiotocography (CTG)) compared to the existing fetal monitoring method of CTG alone. This article describes findings from an inductive and descriptive thematic analysis. Most women preferred the existing fetal monitoring method over STan monitoring; women's decision-making was multifaceted. Analysis yielded four themes relating to women's views towards fetal monitoring in labour: (a) risk and labour, (b) mobility in labour, (c) autonomy and choice in labour, and (d) trust in maternity care providers. Findings suggest that women's views towards CTG and STan monitoring are multifaceted and appear to be influenced by individual labour preferences and the information being received and understood. This underlines the importance of clear communication between maternity care providers and women about technology use in intrapartum care. This research is now being used to inform the implementation of the first properly powered Australian randomised trial comparing STan and CTG monitoring.

  3. Can Mathematical Models Predict the Outcomes of Prostate Cancer Patients Undergoing Intermittent Androgen Deprivation Therapy?

    NASA Astrophysics Data System (ADS)

    Everett, R. A.; Packer, A. M.; Kuang, Y.

    Androgen deprivation therapy is a common treatment for advanced or metastatic prostate cancer. Like the normal prostate, most tumors depend on androgens for proliferation and survival but often develop treatment resistance. Hormonal treatment causes many undesirable side effects which significantly decrease the quality of life for patients. Intermittently applying androgen deprivation in cycles reduces the total duration with these negative effects and may reduce selective pressure for resistance. We extend an existing model which used measurements of patient testosterone levels to accurately fit measured serum prostate specific antigen (PSA) levels. We test the model's predictive accuracy, using only a subset of the data to find parameter values. The results are compared with those of an existing piecewise linear model which does not use testosterone as an input. Since actual treatment protocol is to re-apply therapy when PSA levels recover beyond some threshold value, we develop a second method for predicting the PSA levels. Based on a small set of data from seven patients, our results showed that the piecewise linear model produced slightly more accurate results while the two predictive methods are comparable. This suggests that a simpler model may be more beneficial for a predictive use compared to a more biologically insightful model, although further research is needed in this field prior to implementing mathematical models as a predictive method in a clinical setting. Nevertheless, both models are an important step in this direction.

  4. Can Mathematical Models Predict the Outcomes of Prostate Cancer Patients Undergoing Intermittent Androgen Deprivation Therapy?

    NASA Astrophysics Data System (ADS)

    Everett, R. A.; Packer, A. M.; Kuang, Y.

    2014-04-01

    Androgen deprivation therapy is a common treatment for advanced or metastatic prostate cancer. Like the normal prostate, most tumors depend on androgens for proliferation and survival but often develop treatment resistance. Hormonal treatment causes many undesirable side effects which significantly decrease the quality of life for patients. Intermittently applying androgen deprivation in cycles reduces the total duration with these negative effects and may reduce selective pressure for resistance. We extend an existing model which used measurements of patient testosterone levels to accurately fit measured serum prostate specific antigen (PSA) levels. We test the model's predictive accuracy, using only a subset of the data to find parameter values. The results are compared with those of an existing piecewise linear model which does not use testosterone as an input. Since actual treatment protocol is to re-apply therapy when PSA levels recover beyond some threshold value, we develop a second method for predicting the PSA levels. Based on a small set of data from seven patients, our results showed that the piecewise linear model produced slightly more accurate results while the two predictive methods are comparable. This suggests that a simpler model may be more beneficial for a predictive use compared to a more biologically insightful model, although further research is needed in this field prior to implementing mathematical models as a predictive method in a clinical setting. Nevertheless, both models are an important step in this direction.

  5. Detection of oral HPV infection - Comparison of two different specimen collection methods and two HPV detection methods.

    PubMed

    de Souza, Marjorie M A; Hartel, Gunter; Whiteman, David C; Antonsson, Annika

    2018-04-01

    Very little is known about the natural history of oral HPV infection. Several different methods exist to collect oral specimens and detect HPV, but their respective performance characteristics are unknown. We compared two different methods for oral specimen collection (oral saline rinse and commercial saliva kit) from 96 individuals and then analyzed the samples for HPV by two different PCR detection methods (single GP5+/6+ PCR and nested MY09/11 and GP5+/6+ PCR). For the oral rinse samples, the oral HPV prevalence was 10.4% (GP+ PCR; 10% repeatability) vs 11.5% (nested PCR method; 100% repeatability). For the commercial saliva kit samples, the prevalences were 3.1% vs 16.7% with the GP+ PCR vs the nested PCR method (repeatability 100% for both detection methods). Overall the agreement was fair or poor between samples and methods (kappa 0.06-0.36). Standardizing methods of oral sample collection and HPV detection would ensure comparability between future oral HPV studies. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Drift-Free Position Estimation of Periodic or Quasi-Periodic Motion Using Inertial Sensors

    PubMed Central

    Latt, Win Tun; Veluvolu, Kalyana Chakravarthy; Ang, Wei Tech

    2011-01-01

    Position sensing with inertial sensors such as accelerometers and gyroscopes usually requires other aiding sensors or prior knowledge of motion characteristics to remove the position drift resulting from integration of acceleration or velocity, so as to obtain accurate position estimates. A method based on analytical integration has previously been developed to obtain accurate position estimates of periodic or quasi-periodic motion from inertial sensors using prior knowledge of the motion but without aiding sensors. In this paper, a new method is proposed which employs a linear filtering stage coupled with an adaptive filtering stage to remove drift and attenuation. The only prior knowledge the proposed method requires is the approximate frequency band of the motion. Existing adaptive filtering methods based on Fourier series, such as the weighted-frequency Fourier linear combiner (WFLC) and the band-limited multiple Fourier linear combiner (BMFLC), are modified to combine with the proposed method. To validate and compare the performance of the proposed method with the method based on analytical integration, a simulation study is performed using periodic signals as well as real physiological tremor data, and real-time experiments are conducted using an ADXL-203 accelerometer. Results demonstrate that the proposed method outperforms the existing analytical integration method. PMID:22163935
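
    The Fourier linear combiner family referred to above adapts the amplitudes of sinusoidal references with an LMS rule, so a periodic motion can be tracked without integration and hence without drift. A minimal fixed-frequency sketch (WFLC additionally adapts the frequency itself, and BMFLC uses a band of fixed frequencies rather than the harmonics assumed here; all parameter values are illustrative):

    ```python
    import numpy as np

    def flc_track(y, freq, fs, n_harm=2, mu=0.01):
        """Fourier linear combiner: adapt the amplitudes of sin/cos references
        at assumed harmonic frequencies to track a periodic signal y via an
        LMS update, yielding a drift-free estimate."""
        t = np.arange(len(y)) / fs
        # Reference matrix: [sin(2*pi*k*f*t), cos(2*pi*k*f*t)] for k = 1..n_harm
        X = np.hstack([np.column_stack([np.sin(2 * np.pi * k * freq * t),
                                        np.cos(2 * np.pi * k * freq * t)])
                       for k in range(1, n_harm + 1)])
        w = np.zeros(X.shape[1])
        est = np.zeros(len(y))
        for k in range(len(y)):
            est[k] = X[k] @ w
            w += 2 * mu * X[k] * (y[k] - est[k])     # LMS weight update
        return est
    ```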

  7. A low-complexity attitude control method for large-angle agile maneuvers of a spacecraft with control moment gyros

    NASA Astrophysics Data System (ADS)

    Kawajiri, Shota; Matunaga, Saburo

    2017-10-01

    This study examines a low-complexity control method using control moment gyros (CMGs) that satisfies mechanical constraints during agile maneuvers. The method is designed based on the fact that a simple rotation around Euler's principal axis corresponds to a well-approximated solution of a time-optimal rest-to-rest maneuver. With respect to an agile large-angle maneuver using CMGs, it is suggested that there exists a coasting period in which all gimbal angles are constant and the constant body angular velocity is almost along the Euler's principal axis. In the proposed method, the gimbals are driven such that this coasting period is generated. This converts the problem into obtaining only a coasting time and gimbal angles whose combination maximizes the body angular velocity along the rotational axis of the maneuver. The effectiveness of the proposed method is demonstrated using numerical simulations. The results indicate that the proposed method shortens the settling time by 20-70% when compared to a traditional feedback method. Additionally, a comparison with an existing path planning method shows that the proposed method achieves low computational complexity (approximately 150 times faster) while still achieving a reasonably short settling time.

  8. Segmentation of malignant lesions in 3D breast ultrasound using a depth-dependent model.

    PubMed

    Tan, Tao; Gubern-Mérida, Albert; Borelli, Cristina; Manniesing, Rashindra; van Zelst, Jan; Wang, Lei; Zhang, Wei; Platel, Bram; Mann, Ritse M; Karssemeijer, Nico

    2016-07-01

    Automated 3D breast ultrasound (ABUS) has been proposed as a complementary screening modality to mammography for early detection of breast cancers. To facilitate the interpretation of ABUS images, automated diagnosis and detection techniques are being developed, in which malignant lesion segmentation plays an important role. However, automated segmentation of cancer in ABUS is challenging since lesion edges might not be well defined. In this study, the authors aim at developing an automated segmentation method for malignant lesions in ABUS that is robust to ill-defined cancer edges and posterior shadowing. A segmentation method using depth-guided dynamic programming based on spiral scanning is proposed. The method automatically adjusts the aggressiveness of the segmentation according to the position of the voxels relative to the lesion center. Segmentation is more aggressive in the upper part of the lesion (close to the transducer) than at the bottom (far away from the transducer), where posterior shadowing is usually visible. The authors used the Dice similarity coefficient (Dice) for evaluation. The proposed method is compared to existing state-of-the-art approaches such as graph cut, level set, and smart opening, and to an existing dynamic programming method without depth dependence. In a dataset of 78 cancers, the proposed segmentation method achieved a mean Dice of 0.73 ± 0.14. The method outperforms the existing dynamic programming method (0.70 ± 0.16) on this task (p = 0.03) and is also significantly (p < 0.001) better than graph cut (0.66 ± 0.18), the level-set-based approach (0.63 ± 0.20), and smart opening (0.65 ± 0.12). The proposed depth-guided dynamic programming method achieves accurate malignant breast lesion segmentation results in automated breast ultrasound.
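
    Since all the comparisons above are stated in terms of it, the evaluation metric is worth stating precisely. The Dice similarity coefficient between a segmentation and a reference mask is 2|A∩B| / (|A| + |B|); a minimal sketch with an invented toy example:

    ```python
    import numpy as np

    def dice(seg, ref):
        """Dice similarity coefficient between two binary masks:
        2|A∩B| / (|A| + |B|); 1.0 means perfect overlap."""
        seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
        denom = seg.sum() + ref.sum()
        return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

    a = np.zeros((4, 4), bool); a[1:3, 1:3] = True
    b = np.zeros((4, 4), bool); b[1:3, 1:4] = True
    print(dice(a, b))  # 0.8
    ```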

  9. Comparability among four invertebrate sampling methods and two multimetric indexes, Fountain Creek Basin, Colorado, 2010–2012

    USGS Publications Warehouse

    Bruce, James F.; Roberts, James J.; Zuellig, Robert E.

    2018-05-24

    The U.S. Geological Survey (USGS), in cooperation with Colorado Springs City Engineering and Colorado Springs Utilities, analyzed previously collected invertebrate data to determine the comparability among four sampling methods and two versions (2010 and 2017) of the Colorado Benthic Macroinvertebrate Multimetric Index (MMI). For this study, annual macroinvertebrate samples were collected concurrently (in space and time) at 15 USGS surface-water gaging stations in the Fountain Creek Basin from 2010 to 2012 using four sampling methods. The USGS monitoring project in the basin uses two of the methods and the Colorado Department of Public Health and Environment recommends the other two. These methods belong to two distinct sample types, one that targets single habitats and one that targets multiple habitats. The study results indicate that there are significant differences in MMI values obtained from the single-habitat and multihabitat sample types but methods from each program within each sample type produced comparable values. This study also determined that MMI values calculated by different versions of the Colorado Benthic Macroinvertebrate MMI are indistinguishable. This indicates that the Colorado Department of Public Health and Environment methods are comparable with the USGS monitoring project methods for single-habitat and multihabitat sample types. This report discusses the direct application of the study results to inform the revision of the existing USGS monitoring project in the Fountain Creek Basin.

  10. Schoolgirls' experience and appraisal of menstrual absorbents in rural Uganda: a cross-sectional evaluation of reusable sanitary pads.

    PubMed

    Hennegan, Julie; Dolan, Catherine; Wu, Maryalice; Scott, Linda; Montgomery, Paul

    2016-12-07

    Governments, multinational organisations, and charities have commenced the distribution of sanitary products to address current deficits in girls' menstrual management. The few effectiveness studies conducted have focused on health and education outcomes but have failed to provide a quantitative assessment of girls' preferences, experiences of absorbents, and comfort. The objectives of the study were, first, to quantitatively describe girls' experiences with, and ratings of the reliability and acceptability of, different menstrual absorbents; second, to compare ratings of freely provided reusable pads (AFRIpads) to other existing methods of menstrual management; and finally, to assess differences in self-reported freedom of activity during menses according to menstrual absorbent. A cross-sectional, secondary analysis of data from the final survey of a controlled trial of reusable sanitary pad and puberty education provision was undertaken. Participants were 205 menstruating schoolgirls from eight schools in rural Uganda. Seventy-two girls who reported using the intervention-provided reusable pads were compared to those using existing improvised methods (predominantly new or old cloth). Schoolgirls using reusable pads provided significantly higher ratings of perceived absorbent reliability across activities, and reported fewer difficulties changing absorbents and less disgust with cleaning absorbents. There were no significant differences in reports of outside garment soiling (OR 1.00, 95%CI 0.51-1.99) or odour (OR 0.84, 95%CI 0.40-1.74) during the last menstrual period. When girls were asked if menstruation caused them to miss daily activities, there were no differences between those using reusable pads and those using other existing methods. However, when asked about activities avoided during menstruation, those using reusable pads participated less in physical sports, working in the field, fetching water, and cooking. Reusable pads were rated favourably. This translated into some benefits for self-reported involvement in daily activities, although reports of actual soiling and of missing activities due to menstruation did not differ. More research is needed comparing the impact of menstrual absorbents on girls' daily activities, and validating outcome measures for menstrual management research.

  11. Conditional entropy in variation-adjusted windows detects selection signatures associated with expression quantitative trait loci (eQTLs)

    PubMed Central

    2015-01-01

    Background Over the past 50,000 years, shifts in human-environmental or human-human interactions shaped genetic differences within and among human populations, including variants under positive selection. Shaped by environmental factors, such variants influence the genetics of modern health, disease, and treatment outcome. Because evolutionary processes tend to act on gene regulation, we test whether regulatory variants are under positive selection. We introduce a new approach to enhance detection of genetic markers undergoing positive selection, using conditional entropy to capture recent local selection signals. Results We use conditional logistic regression to compare our Adjusted Haplotype Conditional Entropy (H|H) measure of positive selection to existing positive selection measures. H|H and existing measures were applied to published regulatory variants acting in cis (cis-eQTLs), with conditional logistic regression testing whether regulatory variants undergo stronger positive selection than the surrounding gene. These cis-eQTLs were drawn from six independent studies of genotype and RNA expression. The conditional logistic regression shows that, overall, H|H is substantially more powerful than existing positive-selection methods in identifying cis-eQTLs against other Single Nucleotide Polymorphisms (SNPs) in the same genes. When broken down by Gene Ontology, H|H predictions are particularly strong in some biological process categories, where regulatory variants are under strong positive selection compared to the bulk of the gene, distinct from those GO categories under overall positive selection. However, cis-eQTLs in a second group of genes lack positive selection signatures detectable by H|H, consistent with ancient short haplotypes compared to the surrounding gene (for example, in innate immunity GO:0042742); under such other modes of selection, H|H would not be expected to be a strong predictor. These conditional logistic regression models are adjusted for minor allele frequency (MAF); without this adjustment, ascertainment bias is a major confounder in all eQTL data sets. Relationships between Gene Ontology categories, positive selection and eQTL specificity were replicated with H|H in a single larger data set. Our measure, Adjusted Haplotype Conditional Entropy (H|H), was essential in generating all of the results above because it: 1) is a stronger overall predictor for eQTLs than comparable existing approaches, and 2) shows low sequential auto-correlation, overcoming problems with convergence of these conditional regression statistical models. Conclusions Our new method, H|H, provides a consistently more robust signal associated with cis-eQTLs compared to existing methods. We interpret this to indicate that some cis-eQTLs are under positive selection compared to their surrounding genes. Conditional entropy indicative of a selective sweep is an especially strong predictor of eQTLs for genes in several biological processes of medical interest. Where conditional entropy is a weak or negative predictor of eQTLs, such as innate immune genes, this would be consistent with balancing selection acting on such eQTLs over long time periods. Different measures of selection may be needed for variant prioritization under other modes of evolutionary selection. PMID:26111110
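    The record describes the H|H statistic only at a high level. As a rough, illustrative analogue of the underlying quantity (not the authors' implementation; the windowing scheme and all names here are assumptions chosen for simplicity), the following Python sketch estimates the entropy of a focal SNP conditional on a small window of neighbouring haplotype alleles:

    ```python
    import numpy as np
    from collections import Counter

    def conditional_entropy(focal, context):
        """Estimate H(focal | context) in bits from aligned haplotype columns:
        focal is (n,) alleles at the SNP of interest, context is (n, k) alleles
        at k neighbouring SNPs."""
        n = len(focal)
        ctx_rows = list(map(tuple, context))
        joint = Counter(zip(ctx_rows, focal))
        marginal = Counter(ctx_rows)
        h = 0.0
        for (ctx, _), count in joint.items():
            h -= (count / n) * np.log2(count / marginal[ctx])
        return h

    # a focal SNP fully determined by its 3-SNP window has zero conditional entropy
    rng = np.random.default_rng(0)
    context = rng.integers(0, 2, size=(200, 3))
    focal = (context.sum(axis=1) > 1).astype(int)
    print(conditional_entropy(focal, context))   # -> 0.0
    ```

    Low conditional entropy of this kind corresponds to long shared haplotypes, the signature of a recent sweep that the paper exploits.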

  12. Dimensionality reduction of collective motion by principal manifolds

    NASA Astrophysics Data System (ADS)

    Gajamannage, Kelum; Butail, Sachit; Porfiri, Maurizio; Bollt, Erik M.

    2015-01-01

    While the existence of low-dimensional embedding manifolds has been shown in patterns of collective motion, the current battery of nonlinear dimensionality reduction methods is not amenable to the analysis of such manifolds. This is mainly due to the necessary spectral decomposition step, which limits control over the mapping from the original high-dimensional space to the embedding space. Here, we propose an alternative approach that directly constructs a two-dimensional embedding which topologically summarizes the high-dimensional data. In this sense, our approach is closely related to the construction of one-dimensional principal curves that minimize orthogonal error to data points subject to smoothness constraints. Specifically, we construct a two-dimensional principal manifold directly in the high-dimensional space using cubic smoothing splines, and define the embedding coordinates in terms of geodesic distances. Thus, the mapping from the high-dimensional data to the manifold is defined in terms of local coordinates. Through representative examples, we show that compared to existing nonlinear dimensionality reduction methods, the principal manifold retains the original structure even in noisy and sparse datasets. The principal manifold finding algorithm is applied to configurations obtained from a dynamical system of multiple agents simulating a complex maneuver called predator mobbing, and the resulting two-dimensional embedding is compared with that of a well-established nonlinear dimensionality reduction method.
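    The construction is only outlined in the abstract. As a toy one-dimensional analogue of the idea (splines fitted directly in the data space, with the curve parameter playing the role of the embedding coordinate), the following Python sketch alternates between ordering points along a curve and refitting coordinate-wise smoothing splines. The function name, the rank-based parameterisation and the nearest-sample reprojection are illustrative assumptions, not the authors' algorithm:

    ```python
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    def principal_curve(X, n_iter=10, smooth=None):
        """Toy 1-D principal-curve fit: alternate between ordering the points
        along the curve and refitting coordinate-wise smoothing splines."""
        Xc = X - X.mean(axis=0)
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        t = Xc @ vt[0]                       # initial ordering: first PC scores
        grid = np.linspace(0.0, 1.0, 200)
        for _ in range(n_iter):
            order = np.argsort(t)
            param = np.linspace(0.0, 1.0, len(t))   # arc-length-like parameter
            splines = [UnivariateSpline(param, X[order, j], s=smooth)
                       for j in range(X.shape[1])]
            samples = np.column_stack([sp(grid) for sp in splines])
            # re-project every point onto its nearest curve sample
            d2 = ((X[:, None, :] - samples[None, :, :]) ** 2).sum(-1)
            t = grid[d2.argmin(axis=1)]
        return t, splines
    ```

    The returned parameter t is the 1-D embedding coordinate; the paper's method extends this construction to a 2-D manifold with geodesic-distance coordinates.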

  13. Technical Note: Preliminary investigations into the use of a functionalised polymer to reduce diffusion in Fricke gel dosimeters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, S. T., E-mail: s164.smith@qut.edu.au; Masters, K.-S.; Hosokawa, K.

    2015-12-15

    Purpose: A modification of the existing PVA-FX hydrogel has been made to investigate the use of a functionalised polymer in a Fricke gel dosimetry system to decrease Fe³⁺ diffusion. Methods: The chelating agent, xylenol orange, was chemically bonded to the gelling agent, polyvinyl alcohol (PVA), to create xylenol orange functionalised PVA (XO-PVA). A gel was created from the XO-PVA (20% w/v) with ferrous sulfate (0.4 mM) and sulfuric acid (50 mM). Results: This resulted in an optical density dose sensitivity of 0.014 Gy⁻¹, an auto-oxidation rate of 0.0005 h⁻¹, and a diffusion rate of 0.129 mm² h⁻¹; an 8% reduction compared to the original PVA-FX gel, which in practical terms adds approximately 1 h to the time span between irradiation and accurate read-out. Conclusions: Because this initial method of chemically bonding xylenol orange to polyvinyl alcohol has inherently low conversion, the improvement on existing gel systems is minimal when compared to the drawbacks. More efficient methods of functionalising polyvinyl alcohol with xylenol orange must be developed for this system to gain clinical relevance.

  14. Stochastic weighted particle methods for population balance equations with coagulation, fragmentation and spatial inhomogeneity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Kok Foong; Patterson, Robert I.A.; Wagner, Wolfgang

    2015-12-15

    Highlights: • Problems concerning multi-compartment population balance equations are studied. • A class of fragmentation weight transfer functions is presented. • Three stochastic weighted algorithms are compared against the direct simulation algorithm. • The numerical errors of the stochastic solutions are assessed as a function of fragmentation rate. • The algorithms are applied to a multi-dimensional granulation model. Abstract: This paper introduces stochastic weighted particle algorithms for the solution of multi-compartment population balance equations. In particular, it presents a class of fragmentation weight transfer functions which are constructed such that the number of computational particles stays constant during fragmentation events. The weight transfer functions are constructed based on systems of weighted computational particles, and each of them leads to a stochastic particle algorithm for the numerical treatment of population balance equations. Besides fragmentation, the algorithms also consider physical processes such as coagulation and the exchange of mass with the surroundings. The numerical properties of the algorithms are compared to the direct simulation algorithm and an existing method for the fragmentation of weighted particles. It is found that the new algorithms show better numerical performance than the two existing methods, especially for systems with a significant number of large particles and high fragmentation rates.
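    The key device here is a weight transfer that keeps the number of computational particles fixed during fragmentation. The toy Python sketch below (not the paper's algorithm; the binary-fragmentation rule, rate model and names are assumptions) shows the bookkeeping: each event halves a particle's size and doubles its statistical weight, so the particle count and the total represented mass are both preserved:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def fragment_weighted(particles, rate, dt):
        """Toy weighted-particle fragmentation step: each event replaces the
        parent with ONE computational particle of half the size and double the
        statistical weight, keeping the particle count constant."""
        for p in particles:
            if rng.random() < rate * dt:
                p["x"] *= 0.5    # fragment size
                p["w"] *= 2.0    # statistical weight (number of real particles)
        return particles

    particles = [{"x": 1.0, "w": 1.0} for _ in range(1000)]
    for _ in range(100):
        fragment_weighted(particles, rate=1.0, dt=0.01)

    total_mass = sum(p["x"] * p["w"] for p in particles)
    print(len(particles), total_mass)   # count fixed at 1000; mass exactly 1000.0
    ```

    Because x*w is invariant under each event, mass is conserved exactly while the represented number density grows, which is the property the paper's weight transfer functions generalize.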

  15. Subtask 4.27 - Evaluation of the Multielement Sorbent Trap (MEST) Method at an Illinois Coal-Fired Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavlish, John; Thompson, Jeffrey; Dunham, Grant

    2014-09-30

    Owners of fossil fuel-fired power plants face the challenge of measuring stack emissions of trace metals and acid gases at much lower levels than in the past as a result of increasingly stringent regulations. In the United States, the current reference methods for trace metals and halogens are wet-chemistry methods, U.S. Environmental Protection Agency (EPA) Methods 29 and 26 or 26A, respectively. As a possible alternative to the EPA methods, the Energy & Environmental Research Center (EERC) has developed a novel multielement sorbent trap (MEST) method to be used to sample for trace elements and/or halogens. Sorbent traps offer a potentially advantageous alternative to the existing sampling methods, as they are simpler to use and do not require expensive, breakable glassware or handling and shipping of hazardous reagents. Field tests comparing two sorbent trap applications (MEST-H for hydrochloric acid and MEST-M for trace metals) with the reference methods were conducted at two power plant units fueled by Illinois Basin bituminous coal. For hydrochloric acid, MEST measured concentrations comparable to EPA Method 26A at two power plant units, one with and one without a wet flue gas desulfurization scrubber. MEST-H provided lower detection limits for hydrochloric acid than the reference method. Results from a dry stack unit had better comparability between methods than results from a wet stack unit. This result was attributed to the very low emissions in the latter unit, as well as the difficulty of sampling in a saturated flue gas. Based on these results, the MEST-H sorbent traps appear to be a good candidate to serve as an alternative to Method 26A (or 26). For metals, the MEST trap gave lower detection limits compared to EPA Method 29 and produced comparable data for antimony, arsenic, beryllium, cobalt, manganese, selenium, and mercury for most test runs. However, the sorbent material produced elevated blanks for cadmium, nickel, lead, and chromium at levels that would interfere with accurate measurement at U.S. hazardous air pollutant emission limits for existing coal-fired power plant units. Longer sampling times employed during this test program did appear to improve comparative results for these metals. Although the sorbent contribution to the sample was reduced through improved trap design, additional research is still needed to explore lower-background materials before the MEST-M application can be considered as a potential alternative method for all of the trace metals. This subtask was funded through the EERC–U.S. Department of Energy Joint Program on Research and Development for Fossil Energy-Related Resources Cooperative Agreement No. DE-FC26-08NT43291. Nonfederal funding was provided by the Electric Power Research Institute, the Illinois Clean Coal Institute, Southern Illinois Power Company, and the Center for Air Toxic Metals Affiliates Program.

  16. Evaluation of the immunogenicity of the dabigatran reversal agent idarucizumab during Phase I studies

    PubMed Central

    Norris, Stephen; Ramael, Steven; Ikushima, Ippei; Haazen, Wouter; Harada, Akiko; Moschetti, Viktoria; Imazu, Susumu; Reilly, Paul A.; Lang, Benjamin; Stangier, Joachim

    2017-01-01

    Aims Idarucizumab, a humanized monoclonal anti‐dabigatran antibody fragment, is effective in emergency reversal of dabigatran anticoagulation. Pre‐existing and treatment‐emergent anti‐idarucizumab antibodies (antidrug antibodies; ADA) may affect the safety and efficacy of idarucizumab. This analysis characterized the pre‐existing and treatment‐emergent ADA and assessed their impact on the pharmacokinetics and pharmacodynamics (PK/PD) of idarucizumab. Methods Data were pooled from three Phase I, randomized, double‐blind idarucizumab studies in healthy Caucasian subjects; elderly, renally impaired subjects; and healthy Japanese subjects. In plasma sampled before and after idarucizumab dosing, ADA were detected and titrated using a validated electrochemiluminescence method. ADA epitope specificities were examined using idarucizumab and two structurally related molecules. Idarucizumab PK/PD data were compared for subjects with and without pre‐existing ADA. Results Pre‐existing ADA were found in 33 out of 283 individuals (11.7%), seven of whom had intermittent ADA. Titres of pre‐existing and treatment‐emergent ADA were low, estimated equivalent to <0.3% of circulating idarucizumab after a 5 g dose. Pre‐existing ADA had no impact on dose‐normalized idarucizumab maximum plasma levels and exposure and, although data were limited, no impact on the reversal of dabigatran‐induced anticoagulation by idarucizumab. Treatment‐emergent ADA were detected in 20 individuals (19 out of 224 treated [8.5%]; 1 out of 59 received placebo [1.7%]) and were transient in ten. The majority had specificity primarily toward the C‐terminus of idarucizumab. There were no adverse events indicative of immunogenic reactions. Conclusion Pre‐existing and treatment‐emergent ADA were present at extremely low levels relative to the idarucizumab dosage under evaluation. The PK/PD of idarucizumab appeared to be unaffected by the presence of pre‐existing ADA. PMID:28230262

  17. Matrix completion by deep matrix factorization.

    PubMed

    Fan, Jicong; Cheng, Jieyu

    2018-02-01

    Conventional methods of matrix completion are linear methods that are not effective in handling data of nonlinear structures. Recently a few researchers attempted to incorporate nonlinear techniques into matrix completion, but there still exist considerable limitations. In this paper, a novel method called deep matrix factorization (DMF) is proposed for nonlinear matrix completion. Different from conventional matrix completion methods that are based on linear latent variable models, DMF is based on a nonlinear latent variable model. DMF is formulated as a deep-structure neural network, in which the inputs are the low-dimensional unknown latent variables and the outputs are the partially observed variables. In DMF, the inputs and the parameters of the multilayer neural network are simultaneously optimized to minimize the reconstruction errors for the observed entries. Then the missing entries can be readily recovered by propagating the latent variables to the output layer. DMF is compared with state-of-the-art methods of linear and nonlinear matrix completion in the tasks of toy matrix completion, image inpainting and collaborative filtering. The experimental results verify that DMF provides higher matrix completion accuracy than existing methods and is applicable to large matrices. Copyright © 2017 Elsevier Ltd. All rights reserved.
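    As a minimal sketch of this model class (a one-hidden-layer decoder with free latent inputs, written with hand-coded gradients; the sizes, learning rate and initialisation are arbitrary choices, not the authors' settings), the following Python example jointly optimises the latent variables and the network weights against the observed entries only:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, d, k, h = 200, 30, 3, 16        # samples, observed dims, latent dims, hidden units
    X_full = np.tanh(rng.normal(size=(n, k)) @ rng.normal(size=(k, d)))
    M = rng.random((n, d)) < 0.5       # mask of observed entries

    # free latent inputs Z plus a one-hidden-layer decoder
    Z = rng.normal(scale=0.1, size=(n, k))
    W1 = rng.normal(scale=0.1, size=(k, h)); b1 = np.zeros(h)
    W2 = rng.normal(scale=0.1, size=(h, d)); b2 = np.zeros(d)

    lr = 0.1
    for _ in range(3000):
        H = np.tanh(Z @ W1 + b1)
        E = M * (H @ W2 + b2 - X_full)     # error on observed entries only
        dW2, db2 = H.T @ E, E.sum(0)
        dA = (E @ W2.T) * (1.0 - H ** 2)   # back-propagate through tanh
        dW1, db1 = Z.T @ dA, dA.sum(0)
        dZ = dA @ W1.T
        Z -= lr * dZ / n; W1 -= lr * dW1 / n; W2 -= lr * dW2 / n
        b1 -= lr * db1 / n; b2 -= lr * db2 / n

    # missing entries are recovered by propagating the fitted latents forward
    Xhat = np.tanh(Z @ W1 + b1) @ W2 + b2
    print("RMSE on missing entries:", np.sqrt(((Xhat - X_full)[~M] ** 2).mean()))
    ```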

  18. A new gradient shimming method based on undistorted field map of B0 inhomogeneity.

    PubMed

    Bao, Qingjia; Chen, Fang; Chen, Li; Song, Kan; Liu, Zao; Liu, Chaoyang

    2016-04-01

    Most existing gradient shimming methods for NMR spectrometers estimate field maps that resolve B0 inhomogeneity spatially from dual gradient-echo (GRE) images acquired at different echo times. However, the distortions induced by the B0 inhomogeneity that always exists in the GRE images can result in estimated field maps that are distorted in both geometry and intensity, leading to inaccurate shimming. This work proposes a new gradient shimming method based on an undistorted field map of B0 inhomogeneity obtained by a more accurate field map estimation technique. Compared to the traditional field map estimation method, the new method exploits both the positive and negative polarities of the frequency-encoding gradients to eliminate the distortions caused by B0 inhomogeneity in the field map. Next, the corresponding automatic post-processing procedure is introduced to obtain an undistorted B0 field map based on knowledge of the invariant characteristics of the B0 inhomogeneity and the variant polarity of the encoding gradient. The experimental results on both simulated and real gradient shimming tests demonstrate the high performance of this new method. Copyright © 2015 Elsevier Inc. All rights reserved.
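    For reference, the conventional dual-echo estimate that such methods start from reduces to a few lines. The sketch below computes a standard off-resonance map from two complex GRE images; it does not reproduce the paper's positive/negative-polarity distortion correction, and the function name is illustrative:

    ```python
    import numpy as np

    def field_map(img_te1, img_te2, delta_te):
        """Off-resonance map (Hz) from two complex GRE images acquired at echo
        times TE and TE + delta_te. Taking the phase of the voxel-wise complex
        ratio avoids explicit unwrapping for phase differences within +/- pi."""
        phase_diff = np.angle(img_te2 * np.conj(img_te1))
        return phase_diff / (2.0 * np.pi * delta_te)
    ```

    It is this map, distorted in geometry and intensity by the very inhomogeneity it measures, that the proposed method replaces with an undistorted estimate.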

  19. Pairing field methods to improve inference in wildlife surveys while accommodating detection covariance.

    PubMed

    Clare, John; McKinney, Shawn T; DePue, John E; Loftin, Cynthia S

    2017-10-01

    It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture-recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters. © 2017 by the Ecological Society of America.

  20. A sampling and classification item selection approach with content balancing.

    PubMed

    Chen, Pei-Hua

    2015-03-01

    Existing automated test assembly methods typically employ constrained combinatorial optimization. Constructing forms sequentially based on an optimization approach usually results in unparallel forms and requires heuristic modifications. Methods based on a random search approach have the major advantage of producing parallel forms sequentially without further adjustment. This study incorporated a flexible content-balancing element into the statistical perspective item selection method of the cell-only method (Chen et al. in Educational and Psychological Measurement, 72(6), 933-953, 2012). The new method was compared with a sequential interitem distance weighted deviation model (IID WDM) (Swanson & Stocking in Applied Psychological Measurement, 17(2), 151-166, 1993), a simultaneous IID WDM, and a big-shadow-test mixed integer programming (BST MIP) method to construct multiple parallel forms based on matching a reference form item-by-item. The results showed that the cell-only method with content balancing and the sequential and simultaneous versions of IID WDM yielded results comparable to those obtained using the BST MIP method. The cell-only method with content balancing is computationally less intensive than the sequential and simultaneous versions of IID WDM.

  1. Comparison of Two Methods for the Isolation of Salmonellae From Imported Foods

    PubMed Central

    Taylor, Welton I.; Hobbs, Betty C.; Smith, Muriel E.

    1964-01-01

    Two methods for the detection of salmonellae in foods were compared in 179 imported meat and egg samples. The number of positive samples and replications, and the number of strains and kinds of serotypes were statistically comparable by both the direct enrichment method of the Food Hygiene Laboratory in England, and the pre-enrichment method devised for processed foods in the United States. Boneless frozen beef, veal, and horsemeat imported from five countries for consumption in England were found to have salmonellae present in 48 of 116 (41%) samples. Dried egg products imported from three countries were observed to have salmonellae in 10 of 63 (16%) samples. The high incidence of salmonellae isolated from imported foods illustrated the existence of an international health hazard resulting from the continuous introduction of exogenous strains of pathogenic microorganisms on a large scale. PMID:14106941

  2. Airfoil self-noise and prediction

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas F.; Pope, D. Stuart; Marcolini, Michael A.

    1989-01-01

    A prediction method is developed for the self-generated noise of an airfoil blade encountering smooth flow. The prediction methods for the individual self-noise mechanisms are semiempirical and are based on previous theoretical studies and data obtained from tests of two- and three-dimensional airfoil blade sections. The self-noise mechanisms are due to specific boundary-layer phenomena, that is, the boundary-layer turbulence passing the trailing edge, separated-boundary-layer and stalled flow over an airfoil, vortex shedding due to laminar boundary layer instabilities, vortex shedding from blunt trailing edges, and the turbulent vortex flow existing near the tip of lifting blades. The predictions are compared successfully with published data from three self-noise studies of different airfoil shapes. An application of the prediction method is reported for a large scale-model helicopter rotor, and the predictions compared well with experimental broadband noise measurements. A computer code of the method is given.

  3. Spotting the difference in molecular dynamics simulations of biomolecules

    NASA Astrophysics Data System (ADS)

    Sakuraba, Shun; Kono, Hidetoshi

    2016-08-01

    Comparing two trajectories from molecular simulations conducted under different conditions is not a trivial task. In this study, we apply a method called Linear Discriminant Analysis with ITERative procedure (LDA-ITER) to compare two molecular simulation results by finding the appropriate projection vectors. Because LDA-ITER attempts to determine a projection such that the projections of the two trajectories do not overlap, the comparison does not suffer from a strong anisotropy, which is an issue in protein dynamics. LDA-ITER is applied to two test cases: the T4 lysozyme protein simulation with or without a point mutation and the allosteric protein PDZ2 domain of hPTP1E with or without a ligand. The projection determined by the method agrees with the experimental data and previous simulations. The proposed procedure, which complements existing methods, is a versatile analytical method that is specialized to find the "difference" between two trajectories.
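    The projection step underlying this comparison can be sketched with plain Fisher LDA between the two trajectory ensembles; the ITERative refinement that distinguishes LDA-ITER is not reproduced here, and the ridge term is a numerical-safety assumption:

    ```python
    import numpy as np

    def lda_direction(X1, X2, ridge=1e-9):
        """Fisher discriminant direction separating two trajectory ensembles
        (frames x coordinates); LDA-ITER iterates refinements of this step."""
        mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
        Sw = np.cov(X1, rowvar=False) + np.cov(X2, rowvar=False)
        Sw += ridge * np.eye(Sw.shape[0])   # guard against ill-conditioned Sw
        w = np.linalg.solve(Sw, mu1 - mu2)
        return w / np.linalg.norm(w)

    # the 1-D projections X1 @ w and X2 @ w can then be inspected for overlap
    ```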

  4. Estimating dietary costs of low-income women in California: a comparison of 2 approaches

    PubMed Central

    Aaron, Grant J; Keim, Nancy L; Drewnowski, Adam

    2013-01-01

    Background: Currently, no simplified approach to estimating food costs exists for a large, nationally representative sample. Objective: The objective was to compare 2 approaches for estimating individual daily diet costs in a population of low-income women in California. Design: Cost estimates based on time-intensive method 1 (three 24-h recalls and associated food prices on receipts) were compared with estimates made by using less intensive method 2 [a food-frequency questionnaire (FFQ) and store prices]. Low-income participants (n = 121) of USDA nutrition programs were recruited. Mean daily diet costs, both unadjusted and adjusted for energy, were compared by using Pearson correlation coefficients and the Bland-Altman 95% limits of agreement between methods. Results: Energy and nutrient intakes derived by the 2 methods were comparable; where differences occurred, the FFQ (method 2) provided higher nutrient values than did the 24-h recall (method 1). The crude daily diet cost was $6.32 by the 24-h recall method and $5.93 by the FFQ method (P = 0.221). The energy-adjusted diet cost was $6.65 by the 24-h recall method and $5.98 by the FFQ method (P < 0.001). Conclusions: Although the agreement between methods was weaker than expected, both approaches may be useful. Additional research is needed to further refine a large national survey approach (method 2) to estimate daily dietary costs with the use of this minimal time-intensive method for the participant and moderate time-intensive method for the researcher. PMID:23388658
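    For reference, the Bland-Altman 95% limits of agreement used in this comparison reduce to a short computation; the sketch below assumes per-participant daily cost estimates from both methods, with illustrative names:

    ```python
    import numpy as np

    def bland_altman_limits(cost_recall, cost_ffq):
        """Mean difference (bias) and 95% limits of agreement between two
        per-participant diet-cost estimates."""
        diff = np.asarray(cost_recall, float) - np.asarray(cost_ffq, float)
        bias = diff.mean()
        half_width = 1.96 * diff.std(ddof=1)
        return bias, bias - half_width, bias + half_width
    ```

    A wide interval between the lower and upper limits is what the abstract means by agreement "weaker than expected", even when the mean costs themselves are close.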

  5. Melasma and its association with different types of nevi in women: A case-control study

    PubMed Central

    Adalatkhah, Hassan; Sadeghi-bazargani, Homayoun; Amini-sani, Nayereh; Zeynizadeh, Somayeh

    2008-01-01

    Background Very little is known about the possible association of nevi and melasma. The study objective was to determine if there is an association between melasma and the existence of different kinds of nevi. Methods In a case-control study, 120 female melasma patients referred to the dermatology clinic of Ardabil and 120 patients referred to other specialty clinics who lacked melasma were enrolled after matching for age. The numbers of different types of nevi, including lentigines and melanocytic nevi, were compared between case and control group patients. Data were entered into the computer and analyzed by SPSS 13 statistical software. Results The mean number of lentigines was 25.5 in the melasma group compared to 8 in the control group (P < 0.01). The mean number of melanocytic nevi was 13.2 in cases compared to 2.8 in the control group (P < 0.001). Multivariate analysis showed that the existence of freckles, lentigines and more than three melanocytic nevi were positively related to developing melasma. The chance of melasma increased up to 23 times for patients having more than three melanocytic nevi. Congenital nevi were observed among 10% in both the case and control groups. Campbell de Morgan angiomas were seen in 26 patients (21.8%) in the case group compared to 6 patients (5%) in the control group. Conclusion The existence of lentigines and melanocytic nevi increases the chance of having melasma. PMID:18680608

  6. Transport Test Problems for Hybrid Methods Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaver, Mark W.; Miller, Erin A.; Wittman, Richard S.

    2011-12-28

    This report presents 9 test problems to guide testing and development of hybrid calculations for the ADVANTG code at ORNL. These test cases can be used for comparing different types of radiation transport calculations, as well as for guiding the development of variance reduction methods. Cases are drawn primarily from existing or previous calculations with a preference for cases which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22.

  7. Decision Fusion with Channel Errors in Distributed Decode-Then-Fuse Sensor Networks

    PubMed Central

    Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Zhong, Xionghu

    2015-01-01

    Decision fusion for distributed detection in sensor networks under non-ideal channels is investigated in this paper. Usually, the local decisions are transmitted to the fusion center (FC) and decoded, and a fusion rule is then applied to achieve a global decision. We propose an optimal likelihood ratio test (LRT)-based fusion rule to take the uncertainty of the decoded binary data due to modulation, reception mode and communication channel into account. The average bit error rate (BER) is employed to characterize such an uncertainty. Further, the detection performance is analyzed under both non-identical and identical local detection performance indices. In addition, the performance of the proposed method is compared with the existing optimal and suboptimal LRT fusion rules. The results show that the proposed fusion rule is more robust compared to these existing ones. PMID:26251908
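    The abstract does not give the fusion rule in closed form; as an illustration of how the bit error rate enters the likelihood ratio, the following Python sketch implements a textbook Chair-Varshney-style LRT for local decisions received over binary symmetric channels. This standard form is offered under stated assumptions, not as the paper's exact rule:

    ```python
    import numpy as np

    def fusion_llr(r, pd, pf, ber):
        """Log-likelihood ratio at the fusion center for received local
        decisions r (0/1), given each sensor's detection probability pd,
        false-alarm probability pf, and channel bit error rate ber."""
        p1 = (1 - ber) * pd + ber * (1 - pd)   # P(received 1 | H1)
        p0 = (1 - ber) * pf + ber * (1 - pf)   # P(received 1 | H0)
        r = np.asarray(r)
        return np.sum(r * np.log(p1 / p0) + (1 - r) * np.log((1 - p1) / (1 - p0)))

    # three sensors with identical local performance over a noisy channel
    print(fusion_llr([1, 1, 0], pd=np.full(3, 0.8), pf=np.full(3, 0.1), ber=0.05))
    ```

    Declaring H1 when the LLR exceeds a threshold recovers the ideal-channel rule as ber approaches 0, and the rule degrades gracefully as ber grows, which is the robustness property the paper examines.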

  8. Objectification of perceptual image quality for mobile video

    NASA Astrophysics Data System (ADS)

    Lee, Seon-Oh; Sim, Dong-Gyu

    2011-06-01

    This paper presents an objective video quality evaluation method for quantifying the subjective quality of digital mobile video. The proposed method aims to objectify the subjective quality by extracting edgeness and blockiness parameters. To evaluate the performance of the proposed algorithms, we carried out subjective video quality tests with the double-stimulus continuous quality scale method and obtained differential mean opinion score values for 120 mobile video clips. We then compared the performance of the proposed methods with that of existing methods in terms of the differential mean opinion score with 120 mobile video clips. Experimental results showed that the proposed methods were approximately 10% better than the edge peak signal-to-noise ratio of the J.247 method in terms of the Pearson correlation.
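    The abstract names edgeness and blockiness parameters without defining them. One plausible form of a blockiness feature (a guess at the flavour of such a metric, not the J.247 definition or the authors' exact parameter) measures luminance jumps at 8-pixel block boundaries relative to jumps elsewhere:

    ```python
    import numpy as np

    def blockiness(gray, block=8):
        """Crude blockiness score for a grayscale frame: mean absolute
        horizontal luminance jump across block boundaries minus the mean
        jump inside blocks. Near zero for natural images, large for
        heavily block-compressed ones."""
        dh = np.abs(np.diff(gray.astype(float), axis=1))
        at_boundary = dh[:, block - 1::block]
        elsewhere = np.delete(dh, np.s_[block - 1::block], axis=1)
        return at_boundary.mean() - elsewhere.mean()
    ```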

  9. AISLE: an automatic volumetric segmentation method for the study of lung allometry.

    PubMed

    Ren, Hongliang; Kazanzides, Peter

    2011-01-01

    We developed a fully automatic segmentation method for volumetric CT (computed tomography) datasets to support construction of a statistical atlas for the study of allometric laws of the lung. The proposed segmentation method, AISLE (Automated ITK-Snap based on Level-set), is based on the level-set implementation from an existing semi-automatic segmentation program, ITK-Snap. AISLE can segment the lung field without human interaction and provide intermediate graphical results as desired. The preliminary experimental results show that the proposed method can achieve accurate segmentation, in terms of the volumetric overlap metric, by comparing with the ground-truth segmentation performed by a radiologist.

  10. A Fast and Effective Pyridine-Free Method for the Determination of Hydroxyl Value of Hydroxyl-Terminated Polybutadiene and Other Hydroxy Compounds

    NASA Astrophysics Data System (ADS)

    Alex, Ancy Smitha; Kumar, Vijendra; Sekkar, V.; Bandyopadhyay, G. G.

    2017-07-01

    Hydroxyl-terminated polybutadiene (HTPB) is the workhorse propellant binder for launch vehicle and missile applications. Accurate determination of the hydroxyl value (OHV) of HTPB is crucial for tailoring the ultimate mechanical and ballistic properties of the propellant derived. This article describes a fast and effective methodology free of pyridine based on acetic anhydride, N-methyl imidazole, and toluene for the determination of OHV of nonpolar polymers like HTPB and other hydroxyl compounds. This method gives accurate and reproducible results comparable to standard methods and is superior to existing methods in terms of user friendliness, efficiency, and time requirement.

  11. Acoustic contrast control in an arc-shaped area using a linear loudspeaker array.

    PubMed

    Zhao, Sipei; Qiu, Xiaojun; Burnett, Ian

    2015-02-01

    This paper proposes a method of creating acoustic contrast control in an arc-shaped area using a linear loudspeaker array. The boundary of the arc-shaped area is treated as the envelope of the tangent lines that can be formed by manipulating the phase profile of the loudspeakers in the array. When compared with the existing acoustic contrast control method, the proposed method is able to generate a sound field inside an arc-shaped area and achieve a trade-off between acoustic uniformity and acoustic contrast. The acoustic contrast created by the proposed method increases, while the acoustic uniformity decreases, with frequency.
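    For context, the existing acoustic contrast control method that the proposal is compared against is usually posed as a generalized eigenvalue problem. A minimal Python sketch of that standard formulation follows; the transfer matrices G and the regulariser are assumptions, and the paper's arc-shaped phase-profile construction itself is not reproduced:

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def contrast_weights(G_bright, G_dark, reg=1e-6):
        """Loudspeaker weights maximizing bright-zone over dark-zone energy:
        the dominant generalized eigenvector of
        (Gb^H Gb) w = lambda (Gd^H Gd + reg I) w, where the rows of each G
        are the acoustic transfer functions to field points in that zone."""
        A = G_bright.conj().T @ G_bright
        B = G_dark.conj().T @ G_dark + reg * np.eye(G_dark.shape[1])
        vals, vecs = eigh(A, B)          # ascending eigenvalues
        return vecs[:, -1]               # eigenvector of the largest eigenvalue
    ```

    Maximizing this Rayleigh quotient yields high contrast but says nothing about uniformity inside the bright zone, which is the trade-off the proposed method addresses.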

  12. Integrative Sparse K-Means With Overlapping Group Lasso in Genomic Applications for Disease Subtype Discovery

    PubMed Central

    Huo, Zhiguang; Tseng, George

    2017-01-01

    Cancer subtypes discovery is the first step to deliver personalized medicine to cancer patients. With the accumulation of massive multi-level omics datasets and established biological knowledge databases, omics data integration with incorporation of rich existing biological knowledge is essential for deciphering a biological mechanism behind the complex diseases. In this manuscript, we propose an integrative sparse K-means (is-K means) approach to discover disease subtypes with the guidance of prior biological knowledge via sparse overlapping group lasso. An algorithm using an alternating direction method of multiplier (ADMM) will be applied for fast optimization. Simulation and three real applications in breast cancer and leukemia will be used to compare is-K means with existing methods and demonstrate its superior clustering accuracy, feature selection, functional annotation of detected molecular features and computing efficiency. PMID:28959370

  13. Integrative Sparse K-Means With Overlapping Group Lasso in Genomic Applications for Disease Subtype Discovery.

    PubMed

    Huo, Zhiguang; Tseng, George

    2017-06-01

    Cancer subtypes discovery is the first step to deliver personalized medicine to cancer patients. With the accumulation of massive multi-level omics datasets and established biological knowledge databases, omics data integration with incorporation of rich existing biological knowledge is essential for deciphering a biological mechanism behind the complex diseases. In this manuscript, we propose an integrative sparse K-means (is-K means) approach to discover disease subtypes with the guidance of prior biological knowledge via sparse overlapping group lasso. An algorithm using an alternating direction method of multiplier (ADMM) will be applied for fast optimization. Simulation and three real applications in breast cancer and leukemia will be used to compare is-K means with existing methods and demonstrate its superior clustering accuracy, feature selection, functional annotation of detected molecular features and computing efficiency.

  14. Perceptions and receptivity of non-spousal family support: A mixed methods study of psychological distress among older, church-going African American men

    PubMed Central

    Watkins, Daphne C.; Wharton, Tracy; Mitchell, Jamie A.; Matusko, Niki; Kales, Helen

    2016-01-01

    The purpose of this study was to explore the role of non-spousal family support on mental health among older, church-going African American men. The mixed methods objective was to employ a design that used existing qualitative and quantitative data to explore the interpretive context within which social and cultural experiences occur. Qualitative data (n=21) were used to build a conceptual model that was tested using quantitative data (n= 401). Confirmatory factor analysis indicated an inverse association between non-spousal family support and distress. The comparative fit index, Tucker-Lewis fit index, and root mean square error of approximation indicated good model fit. This study offers unique methodological approaches to using existing, complementary data sources to understand the health of African American men. PMID:28943829

  15. A simple and rapid method for isolation of high quality genomic DNA from fruit trees and conifers using PVP.

    PubMed

    Kim, C S; Lee, C H; Shin, J S; Chung, Y S; Hyung, N I

    1997-03-01

    Because DNA degradation is mediated by secondary plant products such as phenolic terpenoids, the isolation of high quality DNA from plants containing a high content of polyphenolics has been a difficult problem. We demonstrate an easy extraction process by modifying several existing ones. Using this process we have found it possible to isolate DNAs from four fruit trees, grape (Vitis spp.), apple (Malus spp.), pear (Pyrus spp.) and persimmon (Diospyros spp.) and four species of conifer, Pinus densiflora, Pinus koraiensis,Taxus cuspidata and Juniperus chinensis within a few hours. Compared with the existing method, we have isolated high quality intact DNAs (260/280 = 1.8-2.0) routinely yielding 250-500 ng/microl (total 7.5-15 microg DNA from four to five tissue discs).

  16. A simple and rapid method for isolation of high quality genomic DNA from fruit trees and conifers using PVP.

    PubMed Central

    Kim, C S; Lee, C H; Shin, J S; Chung, Y S; Hyung, N I

    1997-01-01

    Because DNA degradation is mediated by secondary plant products such as phenolic terpenoids, the isolation of high quality DNA from plants containing a high content of polyphenolics has been a difficult problem. We demonstrate an easy extraction process by modifying several existing ones. Using this process we have found it possible to isolate DNAs from four fruit trees, grape (Vitis spp.), apple (Malus spp.), pear (Pyrus spp.) and persimmon (Diospyros spp.) and four species of conifer, Pinus densiflora, Pinus koraiensis,Taxus cuspidata and Juniperus chinensis within a few hours. Compared with the existing method, we have isolated high quality intact DNAs (260/280 = 1.8-2.0) routinely yielding 250-500 ng/microl (total 7.5-15 microg DNA from four to five tissue discs). PMID:9023124

  17. The Impact of Symptoms and Impairments on Overall Health in US National Health Data

    PubMed Central

    Stewart, Susan T.; Woodward, Rebecca M.; Rosen, Allison B.; Cutler, David M.

    2015-01-01

    Objective To assess the effects on overall self-rated health of the broad range of symptoms and impairments that are routinely asked about in national surveys. Data We use data from adults in the nationally representative Medical Expenditure Panel Survey (MEPS) 2002 with validation in an independent sample from MEPS 2000. Methods Regression analysis is used to relate impairments and symptoms to a 100-point self-rating of general health status. The effect of each impairment and symptom on health-related quality of life (HRQOL) is estimated from regression coefficients, accounting for interactions between them. Results Impairments and symptoms most strongly associated with overall health include pain, self-care limitations, and having little or no energy. The most prevalent are moderate pain, severe anxiety, moderate depressive symptoms, and low energy. Effects are stable across different waves of MEPS, and questions cover a broader range of impairments and symptoms than existing health measurement instruments. Conclusions This method makes use of the rich detail on impairments and symptoms in existing national data, quantifying their independent effects on overall health. Given the ongoing availability of these data and the shortcomings of traditional utility methods, it would be valuable to compare existing HRQOL measures to other methods, such as the one presented herein, for use in tracking population health over time. PMID:18725850

  18. On recursive least-squares filtering algorithms and implementations. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Hsieh, Shih-Fu

    1990-01-01

    In many real-time signal processing applications, fast and numerically stable algorithms for solving least-squares problems are necessary and important. In particular, under non-stationary conditions, these algorithms must be able to adapt themselves to reflect the changes in the system and take appropriate adjustments to achieve optimum performances. Among existing algorithms, the QR-decomposition (QRD)-based recursive least-squares (RLS) methods have been shown to be useful and effective for adaptive signal processing. In order to increase the speed of processing and achieve high throughput rate, many algorithms are being vectorized and/or pipelined to facilitate high degrees of parallelism. A time-recursive formulation of RLS filtering employing block QRD will be considered first. Several methods, including a new non-continuous windowing scheme based on selectively rejecting contaminated data, were investigated for adaptive processing. Based on systolic triarrays, many other forms of systolic arrays are shown to be capable of implementing different algorithms. Various updating and downdating systolic algorithms and architectures for RLS filtering are examined and compared in detail, including the Householder reflector, the Gram-Schmidt procedure, and Givens rotation. A unified approach encompassing existing square-root-free algorithms is also proposed. For the sinusoidal spectrum estimation problem, a judicious method of separating the noise from the signal is of great interest. Various truncated QR methods are proposed for this purpose and compared to the truncated SVD method. Computer simulations provided for detailed comparisons show the effectiveness of these methods. This thesis deals with fundamental issues of numerical stability, computational efficiency, adaptivity, and VLSI implementation for the RLS filtering problems. In all, various new and modified algorithms and architectures are proposed and analyzed; the significance of any new method depends crucially on the specific application.
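    Of the updating schemes compared in the thesis, the Givens-rotation QRD-RLS update is compact enough to sketch. The following Python example is an exponentially weighted textbook form, not the thesis code: each new regressor row is annihilated against the triangular factor, and the weights are recovered by back-substitution:

    ```python
    import numpy as np
    from scipy.linalg import solve_triangular

    def qrd_rls_update(R, z, x, d, lam=0.99):
        """One exponentially weighted QRD-RLS update: rotate the new regressor
        row x into the upper-triangular factor R with Givens rotations,
        carrying the desired sample d through the same rotations into z."""
        R = np.sqrt(lam) * np.asarray(R, dtype=float)
        z = np.sqrt(lam) * np.asarray(z, dtype=float)
        x = np.array(x, dtype=float)
        for i in range(len(x)):
            r = np.hypot(R[i, i], x[i])
            if r == 0.0:
                continue
            c, s = R[i, i] / r, x[i] / r
            R[i, i:], x[i:] = c * R[i, i:] + s * x[i:], c * x[i:] - s * R[i, i:]
            z[i], d = c * z[i] + s * d, c * d - s * z[i]
        return R, z

    # toy identification run: estimates approach the true weight vector
    rng = np.random.default_rng(0)
    w_true = np.array([1.0, -2.0, 0.5])
    R, z = 1e-3 * np.eye(3), np.zeros(3)
    for _ in range(500):
        x = rng.normal(size=3)
        R, z = qrd_rls_update(R, z, x, x @ w_true + 0.01 * rng.normal())
    print(solve_triangular(R, z))   # back-substitution recovers the weights
    ```

    Working on R directly, rather than on the inverse covariance as in the classical RLS recursion, is what gives the QRD family the numerical stability the thesis emphasizes.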

  19. Comparative analysis of hierarchical triangulated irregular networks to represent 3D elevation in terrain databases

    NASA Astrophysics Data System (ADS)

    Abdelguerfi, Mahdi; Wynne, Chris; Cooper, Edgar; Ladner, Roy V.; Shaw, Kevin B.

    1997-08-01

    Three-dimensional terrain representation plays an important role in a number of terrain database applications. Hierarchical triangulated irregular networks (TINs) provide a variable-resolution terrain representation that is based on a nested triangulation of the terrain. This paper compares and analyzes existing hierarchical triangulation techniques. The comparative analysis takes into account how aesthetically appealing and accurate the resulting terrain representation is. Parameters, such as adjacency, slivers, and streaks, are used to provide a measure of how aesthetically appealing the terrain representation is. Slivers occur when the triangulation produces thin and slivery triangles. Streaks appear when there are too many triangulations done at a given vertex. Simple mathematical expressions are derived for these parameters, thereby providing a fairer and more easily reproduced comparison. In addition to meeting the adjacency requirement, an aesthetically pleasing hierarchical TINs generation algorithm is expected to reduce both slivers and streaks while maintaining accuracy. A comparative analysis of a number of existing approaches shows that a variant of a method originally proposed by Scarlatos exhibits better overall performance.
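    As an illustration of the kind of simple expression involved, a common triangle-quality measure that penalises slivers (equal to 1 for an equilateral triangle and tending to 0 as a triangle degenerates; not necessarily the exact measure derived in the paper) is:

    ```python
    import numpy as np

    def sliver_score(p0, p1, p2):
        """Triangle quality 4*sqrt(3)*area / (a^2 + b^2 + c^2) in (0, 1]:
        1 for an equilateral triangle, near 0 for thin slivers."""
        a2 = ((p1 - p0) ** 2).sum()
        b2 = ((p2 - p1) ** 2).sum()
        c2 = ((p0 - p2) ** 2).sum()
        cross = (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p1[1] - p0[1]) * (p2[0] - p0[0])
        area = 0.5 * abs(cross)
        return 4.0 * np.sqrt(3.0) * area / (a2 + b2 + c2)

    print(sliver_score(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                       np.array([0.5, np.sqrt(3) / 2])))   # equilateral -> 1.0
    ```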

  20. A Randomized Clinical Trial of Acceptance and Commitment Therapy versus Progressive Relaxation Training for Obsessive-Compulsive Disorder

    ERIC Educational Resources Information Center

    Twohig, Michael P.; Hayes, Steven C.; Plumb, Jennifer C.; Pruitt, Larry D.; Collins, Angela B.; Hazlett-Stevens, Holly; Woidneck, Michelle R.

    2010-01-01

    Objective: Effective treatments for obsessive-compulsive disorder (OCD) exist, but additional treatment options are needed. The effectiveness of 8 sessions of acceptance and commitment therapy (ACT) for adult OCD was compared with progressive relaxation training (PRT). Method: Seventy-nine adults (61% female) diagnosed with OCD (mean age = 37…

  1. Education Level and Critical Thinking Skills among Substance Use Counselors Nationwide: A Descriptive Comparative Study

    ERIC Educational Resources Information Center

    Eakman, Teresa L.

    2017-01-01

    As a high percentage of substance use counselors are in recovery, using adult learning methods such as constructivism and transformational learning are needed to neutralize any preestablished views of treatment modalities that may exist, as well as combat any possible issues of countertransference. Teaching critical thinking leads to student…

  2. Within-Subject Comparison of Changes in a Pretest-Posttest Design

    ERIC Educational Resources Information Center

    Hennig, Christian; Mullensiefen, Daniel; Bargmann, Jens

    2010-01-01

    The authors propose a method to compare the influence of a treatment on different properties within subjects. The properties are measured by several Likert-type-scaled items. The results show that many existing approaches, such as repeated measurement analysis of variance on sum and mean scores, a linear partial credit model, and a graded response…

  3. Fuel cell flooding detection and correction

    DOEpatents

    DiPierno Bosco, Andrew; Fronk, Matthew Howard

    2000-08-15

    Method and apparatus for monitoring H₂-O₂ PEM fuel cells to detect and correct flooding. The pressure drop across a given H₂ or O₂ flow field is monitored and compared to predetermined thresholds of unacceptability. If the pressure drop exceeds a threshold of unacceptability, corrective measures are automatically initiated.

  4. Comparing Fears in South African Children with and without Visual Impairments

    ERIC Educational Resources Information Center

    Visagie, Lisa; Loxton, Helene; Ollendick, Thomas H.; Steel, Henry

    2013-01-01

    Introduction: The aim of the study presented here was to determine whether significant differences exist between the fear profiles of South African children in middle childhood (aged 8-13) with different levels of visual impairments and those of their sighted counterparts. Methods: A differential research design was used, and a total of 129…

  5. Measuring Adult Literacy in Health Care: Performance of the Newest Vital Sign

    ERIC Educational Resources Information Center

    Osborn, Chandra Y.; Weiss, Barry D.; Davis, Terry C.; Skripkauskas, Silvia; Rodrigue, Christopher; Bass, Pat F., III; Wolf, Michael S.

    2007-01-01

    Objective: To compare performance of the newest vital sign (NVS) with existing literacy measures. Methods: We administered the NVS and REALM to 129 patients, and NVS and S-TOFHLA to 119 patients all in public clinics. Results: The NVS demonstrated high sensitivity for detecting limited literacy and moderate specificity (area under the receiver…

  6. Relationships between Objective and Perceived Housing in Very Old Age

    ERIC Educational Resources Information Center

    Nygren, Carita; Oswald, Frank; Iwarsson, Susanne; Fange, Agneta; Sixsmith, Judith; Schilling, Oliver; Sixsmith, Andrew; Szeman, Zsuzsa; Tomsone, Signe; Wahl, Hans-Werner

    2007-01-01

    Purpose: Our purpose in this study was to explore relationships between aspects of objective and perceived housing in five European samples of very old adults, as well as to investigate whether cross-national comparable patterns exist. Design and Methods: We utilized data from the first wave of the ENABLE-AGE Survey Study. The five national…

  7. A Randomized Effectiveness Trial of Brief Parent Training: Six-Month Follow-Up

    ERIC Educational Resources Information Center

    Kjøbli, John; Bjørnebekk, Gunnar

    2013-01-01

    Objective: To examine the follow-up effectiveness of brief parent training (BPT) for children with emerging or existing conduct problems. Method: With the use of a randomized controlled trial and parent and teacher reports, this study examined the effectiveness of BPT compared to regular services 6 months after the end of the intervention.…

  8. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    ERIC Educational Resources Information Center

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  9. Time allocation and cultural complexity: leisure time use across twelve cultures

    Treesearch

    Garry Chick; Sharon Xiangyou Shen

    2008-01-01

    This study is part of an effort to understand the effect of cultural evolution on leisure time through comparing time use across 12 cultures. We used an existing dataset initially collected by researchers affiliated with the UCLA Time Allocation Project (1987-1997), which contains behavioral data coded with standard methods from twelve native lowland Amazonian...

  10. 76 FR 10600 - Medicare Program; Public Meeting in Calendar Year 2011 for New Clinical Laboratory Tests Payment...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-25

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Medicare & Medicaid Services [CMS-1347-N... method called ``cross-walking'' is used when a new test is determined to be comparable to an existing... either cross-walk or gap-fill. II. Format This meeting to receive comments and recommendations (including...

  11. Highly comparative time-series analysis: the empirical structure of time series and their methods.

    PubMed

    Fulcher, Ben D; Little, Max A; Jones, Nick S

    2013-06-06

    The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines.
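    The representation idea, reducing each series to a vector of interpretable features so that series and methods can be organized in a common space, can be sketched in a few lines. The full study computes thousands of such features; the four below are an arbitrary, illustrative subset:

    ```python
    import numpy as np

    def feature_vector(ts):
        """A tiny feature-based representation of a time series: mean, spread,
        lag-1 autocorrelation, and a normalized spectral-entropy measure."""
        ts = np.asarray(ts, dtype=float)
        z = (ts - ts.mean()) / ts.std()
        ac1 = np.corrcoef(z[:-1], z[1:])[0, 1]
        p = np.abs(np.fft.rfft(z)) ** 2
        p = p[1:] / p[1:].sum()                       # drop DC, normalize
        spec_ent = -(p * np.log(p + 1e-12)).sum() / np.log(len(p))
        return np.array([ts.mean(), ts.std(), ac1, spec_ent])

    # series with similar marginals but different dynamics separate clearly
    rng = np.random.default_rng(0)
    noise = rng.normal(size=500)
    walk = np.cumsum(rng.normal(size=500))
    print(feature_vector(noise))
    print(feature_vector(walk))
    ```

    Clustering or nearest-neighbour search in this feature space is what lets the paper retrieve alternative analysis methods and select features for classification tasks automatically.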

  12. Highly comparative time-series analysis: the empirical structure of time series and their methods

    PubMed Central

    Fulcher, Ben D.; Little, Max A.; Jones, Nick S.

    2013-01-01

    The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines. PMID:23554344

  13. A transition-based joint model for disease named entity recognition and normalization.

    PubMed

    Lou, Yinxia; Zhang, Yue; Qian, Tao; Li, Fei; Xiong, Shufeng; Ji, Donghong

    2017-08-01

    Disease named entities play a central role in many areas of biomedical research, and automatic recognition and normalization of such entities have received increasing attention in biomedical research communities. Existing methods typically used pipeline models with two independent phases: (i) a disease named entity recognition (DER) system is used to find the boundaries of mentions in text and (ii) a disease named entity normalization (DEN) system is used to connect the mentions recognized to concepts in a controlled vocabulary. The main problems of such models are: (i) there is error propagation from DER to DEN and (ii) DEN is useful for DER, but pipeline models cannot utilize this. We propose a transition-based model to jointly perform disease named entity recognition and normalization, casting the output construction process into an incremental state transition process, learning sequences of transition actions globally, which correspond to joint structural outputs. Beam search and online structured learning are used, with learning being designed to guide search. Compared with the only existing method for joint DEN and DER, our method allows non-local features to be used, which significantly improves the accuracies. We evaluate our model on two corpora: the BioCreative V Chemical Disease Relation (CDR) corpus and the NCBI disease corpus. Experiments show that our joint framework achieves significantly higher performances compared to competitive pipeline baselines. Our method compares favourably to other state-of-the-art approaches. Data and code are available at https://github.com/louyinxia/jointRN. dhji@whu.edu.cn. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
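    The beam-search component mentioned here follows a generic pattern. The skeleton below is only that pattern: the state, action and scoring interfaces are placeholders, not the authors' feature model or training procedure; in their setting a state would hold the tokens processed so far plus partial mention and normalization decisions:

    ```python
    import heapq

    def beam_search(init_state, legal_actions, apply_action, score, is_final,
                    beam_width=8, max_steps=200):
        """Generic beam search over transition sequences: keep the beam_width
        highest-scoring partial analyses, expand each with every legal action."""
        agenda = [(0.0, init_state)]
        for _ in range(max_steps):
            if all(is_final(s) for _, s in agenda):
                break
            candidates = []
            for sc, state in agenda:
                if is_final(state):
                    candidates.append((sc, state))
                    continue
                for a in legal_actions(state):
                    candidates.append((sc + score(state, a), apply_action(state, a)))
            agenda = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
        return max(agenda, key=lambda c: c[0])[1]
    ```

    Scoring whole action sequences rather than individual steps is what allows the non-local features the paper credits for its accuracy gains.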

  14. Microbiome and pancreatic cancer: A comprehensive topic review of literature

    PubMed Central

    Ertz-Archambault, Natalie; Keim, Paul; Von Hoff, Daniel

    2017-01-01

    AIM To review microbiome alterations associated with pancreatic cancer, their potential utility in diagnostics and risk assessment, and their influence on disease outcomes. METHODS A comprehensive literature review was conducted by all-inclusive topic review from PubMed, MEDLINE, and Web of Science. The last search was performed in October 2016. RESULTS Diverse microbiome alterations exist among several body sites, including oral, gut, and pancreatic tissue, in patients with pancreatic cancer compared to healthy populations. CONCLUSION Successes in pilot studies of non-invasive screening strategies warrant further investigation for future translational application in early diagnostics and for identifying modifiable risk factors relevant to disease prevention. Pre-clinical investigations in other tumor types suggest that microbiome manipulation provides an opportunity to favorably transform cancer response to existing treatment protocols and improve survival. PMID:28348497

  15. Stochastic functional evolution equations with monotone nonlinearity: Existence and stability of the mild solutions

    NASA Astrophysics Data System (ADS)

    Jahanipur, Ruhollah

    In this paper, we study a class of semilinear functional evolution equations in which the nonlinearity is demicontinuous and satisfies a semimonotone condition. We prove the existence, uniqueness and exponentially asymptotic stability of the mild solutions. Our approach is to apply a convenient version of the Burkholder inequality for convolution integrals and an iteration method based on the existence and measurability results for the functional integral equations in Hilbert spaces. An Itô-type inequality is the main tool to study the uniqueness, p-th moment and almost sure sample path asymptotic stability of the mild solutions. We also give examples to illustrate the applications of the theorems and compare the results obtained in this paper with others that have appeared in the literature.

  16. Interpolation of orientation distribution functions in diffusion weighted imaging using multi-tensor model.

    PubMed

    Afzali, Maryam; Fatemizadeh, Emad; Soltanian-Zadeh, Hamid

    2015-09-30

    Diffusion weighted imaging (DWI) is a non-invasive method for investigating the brain white matter structure and can be used to evaluate fiber bundles. However, due to practical constraints, DWI data acquired in clinics are low resolution. This paper proposes a method for interpolation of orientation distribution functions (ODFs). To this end, fuzzy clustering is applied to segment ODFs based on the principal diffusion directions (PDDs). Next, each cluster is modeled by a tensor so that an ODF is represented by a mixture of tensors. For interpolation, each tensor is rotated separately. The method is applied to synthetic and real DWI data of control and epileptic subjects. Both experiments illustrate the capability of the method to properly increase the spatial resolution of the data in the ODF field. The real dataset shows that the method is capable of reliably identifying differences between temporal lobe epilepsy (TLE) patients and normal subjects. The method is compared to existing methods. Comparison studies show that the proposed method generates smaller angular errors relative to the existing methods. Another advantage of the method is that it does not require an iterative algorithm to find the tensors. The proposed method is appropriate for increasing resolution in the ODF field and can be applied to clinical data to improve evaluation of white matter fibers in the brain. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. A generative model for segmentation of tumor and organs-at-risk for radiation therapy planning of glioblastoma patients

    NASA Astrophysics Data System (ADS)

    Agn, Mikael; Law, Ian; Munck af Rosenschöld, Per; Van Leemput, Koen

    2016-03-01

    We present a fully automated generative method for simultaneous brain tumor and organs-at-risk segmentation in multi-modal magnetic resonance images. The method combines an existing whole-brain segmentation technique with a spatial tumor prior, which uses convolutional restricted Boltzmann machines to model tumor shape. The method is not tuned to any specific imaging protocol and can simultaneously segment the gross tumor volume, peritumoral edema and healthy tissue structures relevant for radiotherapy planning. We validate the method on a manually delineated clinical data set of glioblastoma patients by comparing segmentations of gross tumor volume, brainstem and hippocampus. The preliminary results demonstrate the feasibility of the method.

  18. Evaluation of Sub Query Performance in SQL Server

    NASA Astrophysics Data System (ADS)

    Oktavia, Tanty; Sujarwo, Surya

    2014-03-01

    The paper explores several sub query methods used in a query and their impact on the query performance. The study uses an experimental approach to evaluate the performance of each sub query method combined with an indexing strategy. The sub query methods consist of in, exists, relational operator and relational operator combined with top operator. The experiments show that using a relational operator combined with an indexing strategy in a sub query gives greater performance than the same method without an indexing strategy and than the other methods. In summary, for applications that emphasize the performance of retrieving data from a database, it is better to use a relational operator combined with an indexing strategy. This study is done on Microsoft SQL Server 2012.
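
    The experimental pattern is easy to reproduce in miniature. The sketch below uses Python's built-in sqlite3 rather than SQL Server 2012, so absolute timings and the optimizer's plan choices will differ from the paper's results; the table and column names are invented. It only demonstrates the method: time the IN and EXISTS formulations before and after adding an index on the joined column.

        import sqlite3, time

        con = sqlite3.connect(':memory:')
        cur = con.cursor()
        cur.execute('CREATE TABLE orders (id INTEGER PRIMARY KEY, cust INTEGER)')
        cur.executemany('INSERT INTO orders(cust) VALUES (?)',
                        [(i % 1000,) for i in range(200000)])
        cur.execute('CREATE TABLE vip (cust INTEGER)')
        cur.executemany('INSERT INTO vip VALUES (?)', [(i,) for i in range(0, 1000, 7)])
        con.commit()

        queries = {
            'IN':     'SELECT COUNT(*) FROM orders WHERE cust IN (SELECT cust FROM vip)',
            'EXISTS': 'SELECT COUNT(*) FROM orders o WHERE EXISTS '
                      '(SELECT 1 FROM vip v WHERE v.cust = o.cust)',
        }

        def run_all(tag):
            for name, q in queries.items():
                t0 = time.perf_counter()
                cur.execute(q).fetchone()
                print(f'{tag:10s} {name:7s} {time.perf_counter() - t0:.4f}s')

        run_all('no index')
        cur.execute('CREATE INDEX idx_vip ON vip(cust)')   # indexing strategy
        run_all('indexed')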

  19. A Generalized Weizsacker-Williams Method Applied to Pion Production in Proton-Proton Collisions

    NASA Technical Reports Server (NTRS)

    Ahern, Sean C.; Poyser, William J.; Norbury, John W.; Tripathi, R. K.

    2002-01-01

    A new "Generalized" Weizsacker-Williams method (GWWM) is used to calculate approximate cross sections for relativistic peripheral proton-proton collisions. Instead of a mass less photon mediator, the method allows for the mediator to have mass for short range interactions. This method generalizes the Weizsacker-Williams method (WWM) from Coulomb interactions to GWWM for strong interactions. An elastic proton-proton cross section is calculated using GWWM with experimental data for the elastic p+p interaction, where the mass p+ is now the mediator. The resulting calculated cross sections is compared to existing data for the elastic proton-proton interaction. A good approximate fit is found between the data and the calculation.

  20. An oscillatory kernel function method for lifting surfaces in mixed transonic flow

    NASA Technical Reports Server (NTRS)

    Cunningham, A. M., Jr.

    1974-01-01

    A study was conducted on the use of combined subsonic and supersonic linear theory to obtain economical and yet realistic solutions to unsteady transonic flow problems. With some modification, existing linear theory methods were combined into a single computer program. The method was applied to problems for which measured steady Mach number distributions and unsteady pressure distributions were available. By comparing theory and experiment, the transonic method showed a significant improvement over uniform flow methods. The results also indicated that more exact local Mach number effects and normal shock boundary conditions on the perturbation potential were needed. The validity of these improvements was demonstrated by application to steady flow.

  1. CoMet: a workflow using contig coverage and composition for binning a metagenomic sample with high precision.

    PubMed

    Herath, Damayanthi; Tang, Sen-Lin; Tandon, Kshitij; Ackland, David; Halgamuge, Saman Kumara

    2017-12-28

    In metagenomics, the separation of nucleotide sequences belonging to an individual or closely matched populations is termed binning. Binning helps the evaluation of underlying microbial population structure as well as the recovery of individual genomes from a sample of uncultivable microbial organisms. Both supervised and unsupervised learning methods have been employed in binning; however, characterizing a metagenomic sample containing multiple strains remains a significant challenge. In this study, we designed and implemented a new workflow, Coverage and composition based binning of Metagenomes (CoMet), for binning contigs in a single metagenomic sample. CoMet utilizes coverage values and the compositional features of metagenomic contigs. The binning strategy in CoMet includes the initial grouping of contigs in guanine-cytosine (GC) content-coverage space and refinement of bins in tetranucleotide frequencies space in a purely unsupervised manner. With CoMet, the clustering algorithm DBSCAN is employed for binning contigs. The performances of CoMet were compared against four existing approaches for binning a single metagenomic sample, including MaxBin, Metawatt, MyCC (default) and MyCC (coverage), using multiple datasets including a sample comprised of multiple strains. Binning methods based on both compositional features and coverages of contigs performed better than the method based only on compositional features of contigs. CoMet yielded higher or comparable precision in comparison to the existing binning methods on benchmark datasets of varying complexities. MyCC (coverage) had the highest ranking score in F1-score. However, the performances of CoMet were higher than MyCC (coverage) on the dataset containing multiple strains. Furthermore, CoMet recovered contigs of more species and was 18-39% higher in precision than the compared existing methods in discriminating species from the sample of multiple strains. CoMet resulted in higher precision than MyCC (default) and MyCC (coverage) on a real metagenome. The approach proposed with CoMet improves the precision of binning while characterizing more species in a single metagenomic sample and in a sample containing multiple strains. The F1-scores obtained from different binning strategies vary with different datasets; however, CoMet yields the highest F1-score with a sample comprised of multiple strains.
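
    The first stage of the workflow, grouping contigs by GC content and coverage with DBSCAN, can be sketched directly. The contigs below are synthetic stand-ins, and the eps/min_samples values are illustrative; the tetranucleotide-frequency refinement stage is only noted in a comment.

        import numpy as np
        from sklearn.cluster import DBSCAN
        from sklearn.preprocessing import StandardScaler

        def gc_content(seq):
            return (seq.count('G') + seq.count('C')) / len(seq)

        # toy contigs: (sequence, mean read coverage) pairs for two populations
        rng = np.random.default_rng(1)
        contigs = [('GC' * 500, 40 + rng.normal()) for _ in range(30)]
        contigs += [('AT' * 400 + 'GC' * 100, 10 + rng.normal()) for _ in range(30)]

        X = np.array([[gc_content(seq), cov] for seq, cov in contigs])
        X = StandardScaler().fit_transform(X)          # GC content-coverage space
        bins = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
        print(np.unique(bins, return_counts=True))
        # CoMet then refines each bin in tetranucleotide-frequency space
        # (256-dimensional k-mer profiles) with a second unsupervised pass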

  2. Conversion of IVA Human Computer Model to EVA Use and Evaluation and Comparison of the Result to Existing EVA Models

    NASA Technical Reports Server (NTRS)

    Hamilton, George S.; Williams, Jermaine C.

    1998-01-01

    This paper describes the methods, rationale, and comparative results of the conversion of an intravehicular (IVA) 3D human computer model (HCM) to extravehicular (EVA) use and compares the converted model to an existing model on another computer platform. The task of accurately modeling a spacesuited human figure in software is daunting: the suit restricts the human's joint range of motion (ROM) and does not have joints collocated with human joints. The modeling of the variety of materials needed to construct a space suit (e.g., metal bearings, rigid fiberglass torso, flexible cloth limbs and rubber-coated gloves) attached to a human figure is currently out of reach of desktop computer hardware and software. Therefore a simplified approach was taken. The HCM's body parts were enlarged and the joint ROM was restricted to match the existing spacesuit model. This basic approach could be used to model other restrictive environments in industry such as chemical or fire protective clothing. In summary, the approach provides a moderate fidelity, usable tool which will run on current notebook computers.

  3. A Hyper-Heuristic Ensemble Method for Static Job-Shop Scheduling.

    PubMed

    Hart, Emma; Sim, Kevin

    2016-01-01

    We describe a new hyper-heuristic method NELLI-GP for solving job-shop scheduling problems (JSSP) that evolves an ensemble of heuristics. The ensemble adopts a divide-and-conquer approach in which each heuristic solves a unique subset of the instance set considered. NELLI-GP extends an existing ensemble method called NELLI by introducing a novel heuristic generator that evolves heuristics composed of linear sequences of dispatching rules: each rule is represented using a tree structure and is itself evolved. Following a training period, the ensemble is shown to outperform both existing dispatching rules and a standard genetic programming algorithm on a large set of new test instances. In addition, it obtains superior results on a set of 210 benchmark problems from the literature when compared to two state-of-the-art hyper-heuristic approaches. Further analysis of the relationship between heuristics in the evolved ensemble and the instances each solves provides new insights into features that might describe similar instances.

  4. Confidence intervals for a difference between lognormal means in cluster randomization trials.

    PubMed

    Poirier, Julia; Zou, G Y; Koval, John

    2017-04-01

    Cluster randomization trials, in which intact social units are randomized to different interventions, have become popular in the last 25 years. Outcomes from these trials in many cases are positively skewed, following approximately lognormal distributions. When inference is focused on the difference between treatment arm arithmetic means, existing confidence interval procedures either make restrictive assumptions or are complex to implement. We approach this problem by assuming log-transformed outcomes from each treatment arm follow a one-way random effects model. The treatment arm means are functions of multiple parameters for which separate confidence intervals are readily available, suggesting that the method of variance estimates recovery may be applied to obtain closed-form confidence intervals. A simulation study showed that this simple approach performs well in small sample sizes in terms of empirical coverage, relatively balanced tail errors, and interval widths as compared to existing methods. The methods are illustrated using data arising from a cluster randomization trial investigating a critical pathway for the treatment of community acquired pneumonia.
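
    The method of variance estimates recovery (MOVER) combines separate confidence intervals for the components of a parameter into a closed-form interval for a function of them. The sketch below is a deliberately simplified, non-clustered version: it treats each arm as an i.i.d. lognormal sample, whereas the paper works with a one-way random effects model that accounts for clustering.

        import numpy as np
        from scipy import stats

        def lognormal_mean_ci(x, alpha=0.05):
            # CI for theta = exp(mu + sigma^2/2), built by MOVER from a t-based
            # CI for mu and a chi-square CI for sigma^2
            y = np.log(x); n = len(y)
            m, s2 = y.mean(), y.var(ddof=1)
            lm, um = stats.t.interval(1 - alpha, n - 1, loc=m, scale=np.sqrt(s2 / n))
            ls2 = (n - 1) * s2 / stats.chi2.ppf(1 - alpha / 2, n - 1)
            us2 = (n - 1) * s2 / stats.chi2.ppf(alpha / 2, n - 1)
            eta = m + s2 / 2
            L = eta - np.sqrt((m - lm) ** 2 + (s2 / 2 - ls2 / 2) ** 2)
            U = eta + np.sqrt((um - m) ** 2 + (us2 / 2 - s2 / 2) ** 2)
            return np.exp(eta), np.exp(L), np.exp(U)

        def mover_difference(x1, x2):
            # recover an interval for theta1 - theta2 from the per-arm intervals
            t1, l1, u1 = lognormal_mean_ci(x1)
            t2, l2, u2 = lognormal_mean_ci(x2)
            d = t1 - t2
            return (d - np.sqrt((t1 - l1) ** 2 + (u2 - t2) ** 2),
                    d + np.sqrt((u1 - t1) ** 2 + (t2 - l2) ** 2))

        rng = np.random.default_rng(2)
        print(mover_difference(rng.lognormal(1.0, 0.8, 50),
                               rng.lognormal(0.8, 0.8, 50)))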

  5. Image steganalysis using Artificial Bee Colony algorithm

    NASA Astrophysics Data System (ADS)

    Sajedi, Hedieh

    2017-09-01

    Steganography is the science of secure communication where the presence of the communication cannot be detected while steganalysis is the art of discovering the existence of the secret communication. Processing a huge amount of information takes extensive execution time and computational sources most of the time. As a result, it is needed to employ a phase of preprocessing, which can moderate the execution time and computational sources. In this paper, we propose a new feature-based blind steganalysis method for detecting stego images from the cover (clean) images with JPEG format. In this regard, we present a feature selection technique based on an improved Artificial Bee Colony (ABC). ABC algorithm is inspired by honeybees' social behaviour in their search for perfect food sources. In the proposed method, classifier performance and the dimension of the selected feature vector depend on using wrapper-based methods. The experiments are performed using two large data-sets of JPEG images. Experimental results demonstrate the effectiveness of the proposed steganalysis technique compared to the other existing techniques.
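
    A wrapper-based feature selection loop of this kind is straightforward to sketch. The code below is not the paper's improved ABC: it implements only the employed-bee phase of a plain bee-colony search, substitutes a generic scikit-learn dataset for JPEG steganalysis features, and uses an arbitrary dimensionality penalty. It shows the wrapper idea: fitness is the cross-validated accuracy of a classifier on the selected feature subset.

        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        X, y = load_breast_cancer(return_X_y=True)
        rng = np.random.default_rng(3)

        def fitness(mask):
            if not mask.any():
                return 0.0
            acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
            return acc - 0.002 * mask.sum()   # wrapper score, penalising dimension

        # employed-bee phase only: each food source (feature mask) tries one
        # neighbouring solution (a single bit flip), keeping it if fitness improves
        sources = rng.random((10, X.shape[1])) < 0.5
        for _ in range(30):
            for i, mask in enumerate(sources):
                trial = mask.copy()
                trial[rng.integers(X.shape[1])] ^= True
                if fitness(trial) > fitness(mask):
                    sources[i] = trial

        best = max(sources, key=fitness)
        print(best.sum(), 'features selected, score', round(fitness(best), 3))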

  6. On Federated and Proof Of Validation Based Consensus Algorithms In Blockchain

    NASA Astrophysics Data System (ADS)

    Ambili, K. N.; Sindhu, M.; Sethumadhavan, M.

    2017-08-01

    Almost all real world activities have been digitized and there are various client server architecture based systems in place to handle them. These are all based on trust in third parties. There is an active attempt to successfully implement blockchain based systems which ensure that IT systems are immutable, double spending is avoided and cryptographic strength is provided to them. A successful implementation of blockchain as the backbone of existing information technology systems is bound to eliminate various types of fraud and ensure quicker delivery of the item on trade. To adapt IT systems to a blockchain architecture, an efficient consensus algorithm needs to be designed. Blockchain based on proof of work first came up as the backbone of cryptocurrency. After this, several other methods with a variety of interesting features have come up. In this paper, we conduct a survey of existing attempts to achieve consensus in blockchain. A federated consensus method and a proof of validation method are compared.

  7. Reevaluation of air surveillance station siting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbott, K.; Jannik, T.

    2016-07-06

    DOE Technical Standard HDBK-1216-2015 (DOE 2015) recommends evaluating air-monitoring station placement using the analytical method developed by Waite. The technique utilizes wind rose and population distribution data in order to determine a weighting factor for each directional sector surrounding a nuclear facility. Based on the available resources (number of stations) and a scaling factor, this weighting factor is used to determine the number of stations recommended to be placed in each sector considered. An assessment utilizing this method was performed in 2003 to evaluate the effectiveness of the existing SRS air-monitoring program. The resulting recommended distribution of air-monitoring stations was then compared to that of the existing site perimeter surveillance program. The assessment demonstrated that the distribution of air-monitoring stations at the time generally agreed with the results obtained using the Waite method; however, at the time new stations were established in Barnwell and in Williston in order to meet requirements of DOE guidance document EH-0173T.
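
    A minimal sketch of one common reading of the Waite weighting scheme, in which each sector's weight is proportional to its wind-rose frequency times its population; all numbers below are invented, not SRS data, and the real method additionally applies a scaling factor.

        import numpy as np

        # 16 compass sectors: frequency with which wind blows into each sector,
        # and the population living in it (toy values)
        wind_freq  = np.array([3, 4, 6, 9, 12, 10, 7, 5, 4, 5, 6, 8, 9, 6, 4, 2]) / 100
        population = np.array([0, 1, 5, 20, 35, 18, 9, 3, 1, 2, 6, 12, 15, 8, 2, 0]) * 1e3

        weights = wind_freq * population          # sector weighting factor
        weights = weights / weights.sum()

        n_stations = 12                           # available resources
        alloc = np.floor(weights * n_stations).astype(int)
        # hand out any remainder to the largest fractional weights
        rest = np.argsort(weights * n_stations - alloc)[::-1][: n_stations - alloc.sum()]
        alloc[rest] += 1
        print(alloc, 'stations across the 16 sectors')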

  8. Segmentation of Image Ensembles via Latent Atlases

    PubMed Central

    Van Leemput, Koen; Menze, Bjoern H.; Wells, William M.; Golland, Polina

    2010-01-01

    Spatial priors, such as probabilistic atlases, play an important role in MRI segmentation. However, the availability of comprehensive, reliable and suitable manual segmentations for atlas construction is limited. We therefore propose a method for joint segmentation of corresponding regions of interest in a collection of aligned images that does not require labeled training data. Instead, a latent atlas, initialized by at most a single manual segmentation, is inferred from the evolving segmentations of the ensemble. The algorithm is based on probabilistic principles but is solved using partial differential equations (PDEs) and energy minimization criteria. We evaluate the method on two datasets, segmenting subcortical and cortical structures in a multi-subject study and extracting brain tumors in a single-subject multi-modal longitudinal experiment. We compare the segmentation results to manual segmentations, when those exist, and to the results of a state-of-the-art atlas-based segmentation method. The quality of the results supports the latent atlas as a promising alternative when existing atlases are not compatible with the images to be segmented. PMID:20580305

  9. Techniques of Acceleration for Association Rule Induction with Pseudo Artificial Life Algorithm

    NASA Astrophysics Data System (ADS)

    Kanakubo, Masaaki; Hagiwara, Masafumi

    Frequent patterns mining is one of the important problems in data mining. Generally, the number of potential rules grows rapidly as the size of the database increases. It is therefore hard for a user to extract the association rules. To avoid such a difficulty, we propose a new method for association rule induction with a pseudo artificial life approach. The proposed method decides whether there exists an item set containing N or more items that is shared by two transactions. If one exists, the item sets contained in that pair of transactions are recorded. Iterating this step extracts the association rules without calculating the huge number of candidate rules. In the evaluation test, we compared the association rules extracted by our method with those produced by other algorithms such as the Apriori algorithm. In an evaluation using huge retail market-basket data, our method is approximately 10 and 20 times faster than the Apriori algorithm and many of its variants, respectively.
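
    The core step can be sketched as repeated random probes into pairs of transactions. This is a simplification under stated assumptions: the paper's artificial-life agents are reduced here to uniform random sampling, and the toy baskets are invented.

        import random
        from collections import Counter

        transactions = [
            {'milk', 'bread', 'butter'}, {'beer', 'bread'}, {'milk', 'bread', 'beer'},
            {'milk', 'butter'}, {'bread', 'butter'}, {'milk', 'bread', 'butter', 'jam'},
        ]

        N = 2                        # minimum size of a shared item set
        found = Counter()
        random.seed(4)
        for _ in range(1000):
            t1, t2 = random.sample(transactions, 2)
            shared = frozenset(t1 & t2)
            if len(shared) >= N:     # an item set of N or more items in two transactions
                found[shared] += 1

        # frequently re-discovered intersections are candidate frequent item sets,
        # with no enumeration of the exponential candidate space Apriori explores
        for itemset, hits in found.most_common(3):
            print(set(itemset), hits)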

  10. Classification of hyperspectral imagery with neural networks: comparison to conventional tools

    NASA Astrophysics Data System (ADS)

    Merényi, Erzsébet; Farrand, William H.; Taranik, James V.; Minor, Timothy B.

    2014-12-01

    Efficient exploitation of hyperspectral imagery is of great importance in remote sensing. Artificial intelligence approaches have been receiving favorable reviews for classification of hyperspectral data because the complexity of such data challenges the limitations of many conventional methods. Artificial neural networks (ANNs) were shown to outperform traditional classifiers in many situations. However, studies that use the full spectral dimensionality of hyperspectral images to classify a large number of surface covers are scarce, if not non-existent. We advocate the need for methods that can handle the full dimensionality and a large number of classes to retain the discovery potential and the ability to discriminate classes with subtle spectral differences. We demonstrate that such a method exists in the family of ANNs. We compare the maximum likelihood, Mahalanobis distance, minimum distance, spectral angle mapper, and a hybrid ANN classifier for real hyperspectral AVIRIS data, using the full spectral resolution to map 23 cover types and using a small training set. Rigorous evaluation of the classification accuracies shows that the ANN outperforms the other methods and achieves ≈90% accuracy on test data.

  11. Reporting Qualitative Research: Standards, Challenges, and Implications for Health Design.

    PubMed

    Peditto, Kathryn

    2018-04-01

    This Methods column describes the existing reporting standards for qualitative research, their application to health design research, and the challenges to implementation. Intended for both researchers and practitioners, this article provides multiple perspectives on both reporting and evaluating high-quality qualitative research. Two popular standards exist for reporting qualitative research: the Consolidated Criteria for Reporting Qualitative Research (COREQ) and the Standards for Reporting Qualitative Research (SRQR). Though compiled using similar procedures, they differ in their criteria and the methods to which they apply. Creating and applying reporting criteria is inherently difficult due to the undefined and fluctuating nature of qualitative research when compared to quantitative studies. Qualitative research is expansive and occasionally controversial, spanning many different methods of inquiry and epistemological approaches. A "one-size-fits-all" standard for reporting qualitative research can be restrictive, but COREQ and SRQR both serve as valuable tools for developing responsible qualitative research proposals, effectively communicating research decisions, and evaluating submissions. Ultimately, tailoring a set of standards specific to health design research and its frequently used methods would ensure quality research and aid reviewers in their evaluations.

  12. Comparison of several methods for estimating low speed stability derivatives

    NASA Technical Reports Server (NTRS)

    Fletcher, H. S.

    1971-01-01

    Methods presented in five different publications have been used to estimate the low-speed stability derivatives of two unpowered airplane configurations. One configuration had unswept lifting surfaces; the other configuration was the D-558-II swept-wing research airplane. The results of the computations were compared with each other, with existing wind-tunnel data, and with flight-test data for the D-558-II configuration to assess the relative merits of the methods for estimating derivatives. The results of the study indicated that, in general, for low subsonic speeds, no single publication appeared consistently better for estimating all derivatives.

  13. Radar analysis of free oscillations of rail for diagnostics defects

    NASA Astrophysics Data System (ADS)

    Shaydurov, G. Y.; Kudinov, D. S.; Kokhonkova, E. A.; Potylitsyn, V. S.

    2018-05-01

    A key challenge in developing and deploying flaw-detection devices is minimizing the influence of the human factor during their operation. At present, rail inspection systems do not probe the rail to sufficient depth, and ultrasonic diagnostic systems require contact between the sensor and the surface being studied, which leads to low productivity. The article gives a comparative analysis of existing non-contact methods of flaw detection and proposes a contactless diagnostic method based on exciting acoustic waves and extracting information about defects from the frequencies of free rail oscillations using a radar method.

  14. Computational methods for vortex dominated compressible flows

    NASA Technical Reports Server (NTRS)

    Murman, Earll M.

    1987-01-01

    The principal objectives were to: understand the mechanisms by which Euler equation computations model leading edge vortex flows; understand the vortical and shock wave structures that may exist for different wing shapes, angles of incidence, and Mach numbers; and compare calculations with experiments in order to ascertain the limitations and advantages of Euler equation models. The initial approach utilized the cell centered finite volume Jameson scheme. The final calculation utilized a cell vertex finite volume method on an unstructured grid. Both methods used Runge-Kutta four stage schemes for integrating the equations. The principal findings are briefly summarized.

  15. Robust clustering of languages across Wikipedia growth

    NASA Astrophysics Data System (ADS)

    Ban, Kristina; Perc, Matjaž; Levnajić, Zoran

    2017-10-01

    Wikipedia is the largest existing knowledge repository that is growing on a genuine crowdsourcing support. While the English Wikipedia is the most extensive and the most researched one with over 5 million articles, comparatively little is known about the behaviour and growth of the remaining 283 smaller Wikipedias, the smallest of which, Afar, has only one article. Here, we use a subset of these data, consisting of 14 962 different articles, each of which exists in 26 different languages, from Arabic to Ukrainian. We study the growth of Wikipedias in these languages over a time span of 15 years. We show that, while an average article follows a random path from one language to another, there exist six well-defined clusters of Wikipedias that share common growth patterns. The make-up of these clusters is remarkably robust against the method used for their determination, as we verify via four different clustering methods. Interestingly, the identified Wikipedia clusters have little correlation with language families and groups. Rather, the growth of Wikipedia across different languages is governed by different factors, ranging from similarities in culture to information literacy.

  16. Multi-resolution voxel phantom modeling: a high-resolution eye model for computational dosimetry

    NASA Astrophysics Data System (ADS)

    Caracappa, Peter F.; Rhodes, Ashley; Fiedler, Derek

    2014-09-01

    Voxel models of the human body are commonly used for simulating radiation dose with a Monte Carlo radiation transport code. Due to memory limitations, the voxel resolution of these computational phantoms is typically too large to accurately represent the dimensions of small features such as the eye. Recently reduced recommended dose limits for the lens of the eye, a radiosensitive tissue for which cataract formation is a significant concern, have lent increased importance to understanding the dose to this tissue. A high-resolution eye model is constructed using physiological data for the dimensions of radiosensitive tissues, and combined with an existing set of whole-body models to form a multi-resolution voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternate methods of including a high-resolution eye model within an existing whole-body model are developed. The accuracy and performance of each method is compared against existing computational phantoms.

  17. Robust clustering of languages across Wikipedia growth.

    PubMed

    Ban, Kristina; Perc, Matjaž; Levnajić, Zoran

    2017-10-01

    Wikipedia is the largest existing knowledge repository that is growing on a genuine crowdsourcing support. While the English Wikipedia is the most extensive and the most researched one with over 5 million articles, comparatively little is known about the behaviour and growth of the remaining 283 smaller Wikipedias, the smallest of which, Afar, has only one article. Here, we use a subset of these data, consisting of 14 962 different articles, each of which exists in 26 different languages, from Arabic to Ukrainian. We study the growth of Wikipedias in these languages over a time span of 15 years. We show that, while an average article follows a random path from one language to another, there exist six well-defined clusters of Wikipedias that share common growth patterns. The make-up of these clusters is remarkably robust against the method used for their determination, as we verify via four different clustering methods. Interestingly, the identified Wikipedia clusters have little correlation with language families and groups. Rather, the growth of Wikipedia across different languages is governed by different factors, ranging from similarities in culture to information literacy.

  18. A Deep Convolutional Coupling Network for Change Detection Based on Heterogeneous Optical and Radar Images.

    PubMed

    Liu, Jia; Gong, Maoguo; Qin, Kai; Zhang, Puzhao

    2018-03-01

    We propose an unsupervised deep convolutional coupling network for change detection based on two heterogeneous images acquired by optical sensors and radars on different dates. Most existing change detection methods are based on homogeneous images. Due to the complementary properties of optical and radar sensors, there is an increasing interest in change detection based on heterogeneous images. The proposed network is symmetric with each side consisting of one convolutional layer and several coupling layers. The two input images connected with the two sides of the network, respectively, are transformed into a feature space where their feature representations become more consistent. In this feature space, the difference map is calculated, which then leads to the ultimate detection map by applying a thresholding algorithm. The network parameters are learned by optimizing a coupling function. The learning process is unsupervised, which is different from most existing change detection methods based on heterogeneous images. Experimental results on both homogeneous and heterogeneous images demonstrate the promising performance of the proposed network compared with several existing approaches.
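
    The overall pipeline can be caricatured in a few lines of PyTorch. This is a schematic, not the paper's architecture: the layer sizes are arbitrary, and the naive uniform feature-matching loss stands in for the paper's coupling function, which reweights pixels so that changed areas do not dominate the objective.

        import torch
        import torch.nn as nn

        class Side(nn.Module):
            # one convolutional layer followed by 1x1 "coupling" layers
            def __init__(self, in_ch):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(in_ch, 16, 3, padding=1), nn.Tanh(),
                    nn.Conv2d(16, 16, 1), nn.Tanh(),
                    nn.Conv2d(16, 16, 1), nn.Tanh(),
                )
            def forward(self, x):
                return self.net(x)

        optical, sar = Side(3), Side(1)        # heterogeneous inputs
        opt = torch.optim.Adam(list(optical.parameters()) + list(sar.parameters()),
                               lr=1e-3)

        x_opt = torch.rand(1, 3, 64, 64)       # stand-ins for the two acquisitions
        x_sar = torch.rand(1, 1, 64, 64)

        for _ in range(100):                   # unsupervised coupling objective:
            f1, f2 = optical(x_opt), sar(x_sar)
            loss = ((f1 - f2) ** 2).mean()     # push the feature maps together
            opt.zero_grad(); loss.backward(); opt.step()

        diff = ((optical(x_opt) - sar(x_sar)) ** 2).mean(1).squeeze()
        change_map = diff > diff.flatten().quantile(0.95)   # thresholding step
        print(change_map.float().mean())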

  19. Robust clustering of languages across Wikipedia growth

    PubMed Central

    Ban, Kristina; Levnajić, Zoran

    2017-01-01

    Wikipedia is the largest existing knowledge repository that is growing on a genuine crowdsourcing support. While the English Wikipedia is the most extensive and the most researched one with over 5 million articles, comparatively little is known about the behaviour and growth of the remaining 283 smaller Wikipedias, the smallest of which, Afar, has only one article. Here, we use a subset of these data, consisting of 14 962 different articles, each of which exists in 26 different languages, from Arabic to Ukrainian. We study the growth of Wikipedias in these languages over a time span of 15 years. We show that, while an average article follows a random path from one language to another, there exist six well-defined clusters of Wikipedias that share common growth patterns. The make-up of these clusters is remarkably robust against the method used for their determination, as we verify via four different clustering methods. Interestingly, the identified Wikipedia clusters have little correlation with language families and groups. Rather, the growth of Wikipedia across different languages is governed by different factors, ranging from similarities in culture to information literacy. PMID:29134106

  20. Combining existing numerical models with data assimilation using weighted least-squares finite element methods.

    PubMed

    Rajaraman, Prathish K; Manteuffel, T A; Belohlavek, M; Heys, Jeffrey J

    2017-01-01

    A new approach has been developed for combining and enhancing the results from an existing computational fluid dynamics model with experimental data using the weighted least-squares finite element method (WLSFEM). Development of the approach was motivated by the existence of both limited experimental blood velocity in the left ventricle and inexact numerical models of the same flow. Limitations of the experimental data include measurement noise and having data only along a two-dimensional plane. Most numerical modeling approaches do not provide the flexibility to assimilate noisy experimental data. We previously developed an approach that could assimilate experimental data into the process of numerically solving the Navier-Stokes equations, but the approach was limited because it required the use of specific finite element methods for solving all model equations and did not support alternative numerical approximation methods. The new approach presented here allows virtually any numerical method to be used for approximately solving the Navier-Stokes equations, and then the WLSFEM is used to combine the experimental data with the numerical solution of the model equations in a final step. The approach dynamically adjusts the influence of the experimental data on the numerical solution so that more accurate data are more closely matched by the final solution and less accurate data are not closely matched. The new approach is demonstrated on different test problems and provides significantly reduced computational costs compared with many previous methods for data assimilation. Copyright © 2016 John Wiley & Sons, Ltd.
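
    The weighting idea, accurate data pulling the solution harder than noisy data, can be shown on a toy 1-D analogue. The sketch below is not the WLSFEM itself: it blends a biased model output with sparse noisy measurements by solving a single weighted least-squares system, with every number invented.

        import numpy as np

        # minimise ||u - u_model||^2 + sum_i w_i (u(x_i) - d_i)^2 + beta ||D u||^2,
        # where each data weight w_i grows with that measurement's accuracy
        n = 100
        x = np.linspace(0, 1, n)
        u_model = np.sin(2 * np.pi * x) * 0.8            # biased model output
        idx = np.arange(5, n, 10)                        # measurement locations
        noise = np.array([0.01, 0.01, 0.3, 0.01, 0.3, 0.01, 0.01, 0.3, 0.01, 0.01])
        d = np.sin(2 * np.pi * x[idx]) \
            + noise * np.random.default_rng(5).normal(size=idx.size)

        w = 1.0 / noise ** 2                             # accurate data pull harder
        D = np.diff(np.eye(n), axis=0)                   # first-difference smoother
        A = np.eye(n) + 0.1 * D.T @ D
        b = u_model.copy()
        for i, wi in zip(idx, w):
            A[i, i] += wi
            b[i] += wi * d[i]
        u = np.linalg.solve(A, b)

        err_da = np.abs(u - np.sin(2 * np.pi * x)).mean()
        err_model = np.abs(u_model - np.sin(2 * np.pi * x)).mean()
        print(f'assimilated error {err_da:.3f} vs model-only error {err_model:.3f}')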

  1. Earthquake recording at the Stanford DAS Array with fibers in existing telecomm conduits

    NASA Astrophysics Data System (ADS)

    Biondi, B. C.; Martin, E. R.; Yuan, S.; Cole, S.; Karrenbach, M. H.

    2017-12-01

    The Stanford Distributed Acoustic Sensing Array (SDASA-1) has been continuously recording seismic data since September 2016 on 2.5 km of single mode fiber optics in existing telecommunications conduits under Stanford's campus. The array is figure-eight shaped and roughly 600 m along its widest side with a channel spacing of roughly 8 m. This array is easy to maintain and is nonintrusive, making it well suited to urban environments, but it sacrifices some cable-to-ground coupling compared to more traditional seismometers. We have been testing its utility for earthquake recording, active seismic, and ambient noise interferometry. This talk will focus on earthquake observations. We will show comparisons between the strain rates measured throughout the DAS array and the particle velocities measured at the nearby Jasper Ridge Seismic Station (JRSC). In some of these events, we will point out directionality features specific to DAS that can require slight modifications in data processing. We also compare repeatability of DAS and JRSC recordings of blasts from a nearby quarry. Using existing earthquake databases, we have created a small catalog of DAS earthquake observations by pulling records of over 700 Northern California events spanning Sep. 2016 to Jul. 2017 from both the DAS data and JRSC. On these events we have tested common array methods for earthquake detection and location including beamforming and STA/LTA analysis in time and frequency. We have analyzed these events to approximate thresholds on what distances and magnitudes are clearly detectable by the DAS array. Further analysis should be done on detectability with methods tailored to small events (for example, template matching). In creating this catalog, we have developed open source software available for free download that can manage large sets of continuous seismic data files (both existing files and files as they stream in). This software can both interface with existing earthquake networks and efficiently extract earthquake recordings from many continuous recordings saved on the user's machines.
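
    STA/LTA is a standard detector and is easy to sketch for a single channel; a DAS application would simply loop it over all channels. The window lengths, threshold and synthetic trace below are arbitrary illustrations, not the values used at Stanford.

        import numpy as np

        def sta_lta(trace, fs, sta_win=0.5, lta_win=10.0):
            # short-term / long-term average ratio of signal energy
            e = trace ** 2
            sta = np.convolve(e, np.ones(int(sta_win * fs)) / (sta_win * fs), 'same')
            lta = np.convolve(e, np.ones(int(lta_win * fs)) / (lta_win * fs), 'same')
            return sta / np.maximum(lta, 1e-12)

        fs = 50.0                                        # samples per second
        rng = np.random.default_rng(6)
        trace = rng.normal(size=int(120 * fs))           # two minutes of noise
        trace[3000:3300] += 8 * rng.normal(size=300)     # a small synthetic "event"

        ratio = sta_lta(trace, fs)
        onsets = np.flatnonzero((ratio[1:] > 4.0) & (ratio[:-1] <= 4.0))
        print(onsets / fs, 'seconds')                    # declared trigger times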

  2. Incorporating spatial constraint in co-activation pattern analysis to explore the dynamics of resting-state networks: An application to Parkinson's disease.

    PubMed

    Zhuang, Xiaowei; Walsh, Ryan R; Sreenivasan, Karthik; Yang, Zhengshi; Mishra, Virendra; Cordes, Dietmar

    2018-05-15

    The dynamics of the brain's intrinsic networks have been recently studied using co-activation pattern (CAP) analysis. The CAP method relies on few model assumptions and CAP-based measurements provide quantitative information of network temporal dynamics. One limitation of existing CAP-related methods is that the computed CAPs share considerable spatial overlap that may or may not be functionally distinct relative to specific network dynamics. To more accurately describe network dynamics with spatially distinct CAPs, and to compare network dynamics between different populations, a novel data-driven CAP group analysis method is proposed in this study. In the proposed method, a dominant-CAP (d-CAP) set is synthesized across CAPs from multiple clustering runs for each group with the constraint of low spatial similarities among d-CAPs. Alternating d-CAPs with less overlapping spatial patterns can better capture overall network dynamics. The number of d-CAPs, the temporal fraction and spatial consistency of each d-CAP, and the subject-specific switching probability among all d-CAPs are then calculated for each group and used to compare network dynamics between groups. The spatial dissimilarities among d-CAPs computed with the proposed method were first demonstrated using simulated data. High consistency between simulated ground-truth and computed d-CAPs was achieved, and detailed comparisons between the proposed method and existing CAP-based methods were conducted using simulated data. In an effort to physiologically validate the proposed technique and investigate network dynamics in a relevant brain network disorder, the proposed method was then applied to data from the Parkinson's Progression Markers Initiative (PPMI) database to compare the network dynamics in Parkinson's disease (PD) and normal control (NC) groups. Fewer d-CAPs, skewed distribution of temporal fractions of d-CAPs, and reduced switching probabilities among final d-CAPs were found in most networks in the PD group, as compared to the NC group. Furthermore, an overall negative association between switching probability among d-CAPs and disease severity was observed in most networks in the PD group as well. These results expand upon previous findings from in vivo electrophysiological recording studies in PD. Importantly, this novel analysis also demonstrates that changes in network dynamics can be measured using resting-state fMRI data from subjects with early stage PD. Copyright © 2018 Elsevier Inc. All rights reserved.
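
    A minimal sketch of the d-CAP idea, under stated assumptions: the fMRI frames are random stand-ins, k-means runs with different seeds and cluster counts stand in for the paper's multiple clustering runs, and a greedy spatial-dissimilarity filter (threshold r_max chosen arbitrarily) stands in for the paper's synthesis procedure. The switching probability then follows from nearest-d-CAP frame assignments.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(7)
        frames = rng.normal(size=(2000, 300))   # time points x voxels (toy data)

        # CAPs from several clustering runs with different seeds / k
        caps = []
        for seed, k in [(0, 6), (1, 6), (2, 8)]:
            km = KMeans(n_clusters=k, n_init=5, random_state=seed).fit(frames)
            caps.extend(km.cluster_centers_)

        # keep a CAP only if it is spatially dissimilar to every kept d-CAP
        d_caps, r_max = [], 0.5
        for c in caps:
            if all(abs(np.corrcoef(c, d)[0, 1]) < r_max for d in d_caps):
                d_caps.append(c)

        # assign every frame to its nearest d-CAP; switching probability follows
        proj = [frames @ d / (np.linalg.norm(d) + 1e-12) for d in d_caps]
        lab = np.argmax(proj, axis=0)
        switch_prob = np.mean(lab[1:] != lab[:-1])
        print(len(d_caps), 'd-CAPs, switching probability', round(switch_prob, 3))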

  3. Single tree biomass modelling using airborne laser scanning

    NASA Astrophysics Data System (ADS)

    Kankare, Ville; Räty, Minna; Yu, Xiaowei; Holopainen, Markus; Vastaranta, Mikko; Kantola, Tuula; Hyyppä, Juha; Hyyppä, Hannu; Alho, Petteri; Viitala, Risto

    2013-11-01

    Accurate forest biomass mapping methods would provide the means for e.g. detecting bioenergy potential, biofuel and forest-bound carbon. The demand for practical biomass mapping methods at all forest levels is growing worldwide, and viable options are being developed. Airborne laser scanning (ALS) is a promising forest biomass mapping technique, due to its capability of measuring the three-dimensional forest vegetation structure. The objective of the study was to develop new methods for tree-level biomass estimation using metrics derived from ALS point clouds and to compare the results with field references collected using destructive sampling and with existing biomass models. The study area was located in Evo, southern Finland. ALS data was collected in 2009 with a pulse density of approximately 10 pulses/m2. Linear models were developed for the following tree biomass components: total, stem wood, living branch and total canopy biomass. ALS-derived geometric and statistical point metrics were used as explanatory variables when creating the models. The total and stem biomass root-mean-square error percentages equalled 26.3% and 28.4% for Scots pine (Pinus sylvestris L.), and 36.8% and 27.6% for Norway spruce (Picea abies (L.) H. Karst.), respectively. The results showed that higher estimation accuracy for all biomass components can be achieved with models created in this study compared to existing allometric biomass models when ALS-derived height and diameter were used as input parameters. Best results were achieved when adding field-measured diameter and height as inputs to the existing biomass models. The only exceptions to this were the canopy and living branch biomass estimations for spruce. The achieved results are encouraging for the use of ALS-derived metrics in biomass mapping and for further development of the models.

  4. Comparing four non-invasive methods to determine the ventilatory anaerobic threshold during cardiopulmonary exercise testing in children with congenital heart or lung disease.

    PubMed

    Visschers, Naomi C A; Hulzebos, Erik H; van Brussel, Marco; Takken, Tim

    2015-11-01

    The ventilatory anaerobic threshold (VAT) is an important method to assess the aerobic fitness in patients with cardiopulmonary disease. Several methods exist to determine the VAT; however, there is no consensus on which of these methods is the most accurate. The aim was to compare four different non-invasive methods for the determination of the VAT via respiratory gas exchange analysis during a cardiopulmonary exercise test (CPET). A secondary objective was to determine the interobserver reliability of the VAT. CPET data of 30 children diagnosed with either cystic fibrosis (CF; N = 15) or a surgically corrected dextro-transposition of the great arteries (asoTGA; N = 15) were included. No significant differences were found between conditions or among testers. The RER = 1 method differed the most compared to the other methods, showing significantly higher results on all six variables. The PET-O2 method differed significantly on five of six and four of six exercise variables compared with the V-slope method and the VentEq method, respectively. The V-slope and the VentEq method differed significantly on one of six exercise variables. Ten of thirteen ICCs that were >0.80 had a 95% CI > 0.70. The RER = 1 method and the V-slope method had the highest number of significant ICCs and 95% CIs. The V-slope method, the ventilatory equivalent method and the PET-O2 method are comparable and reliable methods to determine the VAT during CPET in children with CF or asoTGA. © 2014 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.

  5. ECONOMICS OF INDIVIDUALIZATION IN COMPARATIVE EFFECTIVENESS RESEARCH AND A BASIS FOR A PATIENT-CENTERED HEALTH CARE

    PubMed Central

    Basu, Anirban

    2011-01-01

    The United States aspires to use information from comparative effectiveness research (CER) to reduce waste and contain costs without instituting a formal rationing mechanism or compromising patient or physician autonomy with regard to treatment choices. With such ambitious goals, traditional combinations of research designs and analytical methods used in CER may lead to disappointing results. In this paper, I study how alternate regimes of comparative effectiveness information help shape the marginal benefits (demand) curve in the population and how such perceived demand curves impact decision-making at the individual patient level and welfare at the societal level. I highlight the need to individualize comparative effectiveness research in order to generate the true (normative) demand curve for treatments. I discuss methodological principles that guide research designs for such studies. Using an example of the comparative effect of substance abuse treatments on crime, I use novel econometric methods to salvage individualized information from an existing dataset. PMID:21601299

  6. Economics of individualization in comparative effectiveness research and a basis for a patient-centered health care.

    PubMed

    Basu, Anirban

    2011-05-01

    The United States aspires to use information from comparative effectiveness research (CER) to reduce waste and contain costs without instituting a formal rationing mechanism or compromising patient or physician autonomy with regard to treatment choices. With such ambitious goals, traditional combinations of research designs and analytical methods used in CER may lead to disappointing results. In this paper, I study how alternate regimes of comparative effectiveness information help shape the marginal benefits (demand) curve in the population and how such perceived demand curves impact decision-making at the individual patient level and welfare at the societal level. I highlight the need to individualize comparative effectiveness research in order to generate the true (normative) demand curve for treatments. I discuss methodological principles that guide research designs for such studies. Using an example of the comparative effect of substance abuse treatments on crime, I use novel econometric methods to salvage individualized information from an existing dataset. Copyright © 2011 Elsevier B.V. All rights reserved.

  7. Statistical methods to estimate treatment effects from multichannel electroencephalography (EEG) data in clinical trials.

    PubMed

    Ma, Junshui; Wang, Shubing; Raubertas, Richard; Svetnik, Vladimir

    2010-07-15

    With the increasing popularity of using electroencephalography (EEG) to reveal the treatment effect in drug development clinical trials, the vast volume and complex nature of EEG data compose an intriguing, but challenging, topic. In this paper the statistical analysis methods recommended by the EEG community, along with methods frequently used in the published literature, are first reviewed. A straightforward adjustment of the existing methods to handle multichannel EEG data is then introduced. In addition, based on the spatial smoothness property of EEG data, a new category of statistical methods is proposed. The new methods use a linear combination of low-degree spherical harmonic (SPHARM) basis functions to represent a spatially smoothed version of the EEG data on the scalp, which is close to a sphere in shape. In total, seven statistical methods, including both the existing and the newly proposed methods, are applied to two clinical datasets to compare their power to detect a drug effect. Contrary to the EEG community's recommendation, our results suggest that (1) the nonparametric method does not outperform its parametric counterpart; and (2) including baseline data in the analysis does not always improve the statistical power. In addition, our results recommend that (3) simple paired statistical tests should be avoided due to their poor power; and (4) the proposed spatially smoothed methods perform better than their unsmoothed versions. Copyright 2010 Elsevier B.V. All rights reserved.
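
    The SPHARM smoothing step can be sketched as an ordinary least-squares fit of a low-degree spherical harmonic basis to per-channel values. The electrode coordinates and channel values below are random placeholders; a real analysis would use actual montage coordinates and then apply the treatment-effect statistics to the smoothed values or basis coefficients.

        import numpy as np
        from scipy.special import sph_harm

        rng = np.random.default_rng(8)
        n_ch = 64
        theta = rng.uniform(0, 2 * np.pi, n_ch)   # azimuth of each electrode
        phi = rng.uniform(0, np.pi / 2, n_ch)     # polar angle (upper hemisphere)
        v = rng.normal(size=n_ch)                 # e.g. per-channel power change

        # design matrix of real spherical harmonic basis functions up to degree L
        L = 3
        cols = []
        for l in range(L + 1):
            for m in range(-l, l + 1):
                y = sph_harm(abs(m), l, theta, phi)
                cols.append((np.sqrt(2) * (y.imag if m < 0 else y.real))
                            if m else y.real)
        B = np.column_stack(cols)                 # n_ch x (L+1)^2

        coef, *_ = np.linalg.lstsq(B, v, rcond=None)
        v_smooth = B @ coef                       # spatially smoothed channel values
        print(np.corrcoef(v, v_smooth)[0, 1])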

  8. Extraction of intracellular protein from Glaciozyma antarctica for proteomics analysis

    NASA Astrophysics Data System (ADS)

    Faizura, S. Nor; Farahayu, K.; Faizal, A. B. Mohd; Asmahani, A. A. S.; Amir, R.; Nazalan, N.; Diba, A. B. Farah; Muhammad, M. Nor; Munir, A. M. Abdul

    2013-11-01

    Two preparation methods for crude extracts of the psychrophilic yeast Glaciozyma antarctica were compared in order to obtain a good recovery of intracellular proteins. Extraction by mechanical means using sonication was found to be more effective for obtaining a good yield compared to the alkaline treatment method. The procedure is simple, rapid, and produces a better yield. A total of 52 proteins were identified by combining both extraction methods. Most of the proteins identified in this study are involved in metabolic processes, including the glycolysis pathway, the pentose phosphate pathway, pyruvate decarboxylation and the urea cycle. Several chaperones were identified, including a probable cpr1-cyclophilin (peptidylprolyl isomerase), the macrolide-binding protein fkbp12 and heat shock proteins, which are postulated to accelerate proper protein folding. Characteristics of the fundamental cellular processes inferred from the expressed proteome highlight the evolutionary and functional complexity existing in this domain of life.

  9. Color normalization of histology slides using graph regularized sparse NMF

    NASA Astrophysics Data System (ADS)

    Sha, Lingdao; Schonfeld, Dan; Sethi, Amit

    2017-03-01

    Computer based automatic medical image processing and quantification are becoming popular in digital pathology. However, preparation of histology slides can vary widely due to differences in staining equipment, procedures and reagents, which can reduce the accuracy of algorithms that analyze their color and texture information. To reduce the unwanted color variations, various supervised and unsupervised color normalization methods have been proposed. Compared with supervised methods, unsupervised color normalization methods have the advantages of time and cost efficiency and universal applicability. Most of the unsupervised color normalization methods for histology are based on stain separation. Based on the fact that stain concentration cannot be negative and different parts of the tissue absorb different stains, nonnegative matrix factorization (NMF), and in particular its sparse version (SNMF), are good candidates for stain separation. However, most of the existing unsupervised color normalization methods, like PCA, ICA, NMF and SNMF, fail to consider important information about the sparse manifolds that their pixels occupy, which could potentially result in loss of texture information during color normalization. Manifold learning methods like the Graph Laplacian have proven to be very effective in interpreting high-dimensional data. In this paper, we propose a novel unsupervised stain separation method called graph regularized sparse nonnegative matrix factorization (GSNMF). By considering the sparse prior of stain concentration together with manifold information from high-dimensional image data, our method shows better performance in stain color deconvolution than existing unsupervised color deconvolution methods, especially in keeping connected texture information. To utilize the texture information, we construct a nearest-neighbor graph between pixels within a spatial area of an image based on their distances, using a heat kernel in lαβ space. The representation of a pixel in the stain density space is constrained to follow the feature distances of the pixel to the pixels in the neighborhood graph. Using a color matrix transfer method with the stain concentrations found by our GSNMF method, the color normalization performance was also better than existing methods.
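
    The plain SNMF baseline that GSNMF extends can be sketched with scikit-learn. The code below is a simplification: it applies sparse NMF to optical densities on a random stand-in tile and maps concentrations onto the published Ruifrok H&E reference stain vectors; the graph-Laplacian regularizer that defines GSNMF is not exposed by scikit-learn's NMF and is omitted here.

        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(9)
        rgb = rng.integers(1, 255, size=(64, 64, 3)).astype(float)  # stand-in tile

        # Beer-Lambert law: optical density is linear in stain concentration
        od = -np.log(rgb / 255.0).reshape(-1, 3)        # pixels x channels

        # sparse NMF: od ~ H @ W with W the stain colour basis (2 stains) and
        # H the nonnegative per-pixel concentrations; the L1 penalty encourages
        # each pixel to absorb mostly one stain
        model = NMF(n_components=2, init='nndsvda', alpha_W=0.01, alpha_H=0.01,
                    l1_ratio=0.8, max_iter=500, random_state=0)
        H = model.fit_transform(od)                     # concentrations, (pixels, 2)
        W = model.components_                           # stain OD basis, (2, 3)

        # normalisation: re-render concentrations with a reference stain basis
        W_ref = np.array([[0.65, 0.70, 0.29],           # haematoxylin (reference)
                          [0.07, 0.99, 0.11]])          # eosin
        od_norm = H @ W_ref
        rgb_norm = (255 * np.exp(-od_norm)).reshape(64, 64, 3)
        print(rgb_norm.min(), rgb_norm.max())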

  10. An express method for optimally tuning an analog controller with respect to integral quality criteria

    NASA Astrophysics Data System (ADS)

    Golinko, I. M.; Kovrigo, Yu. M.; Kubrak, A. I.

    2014-03-01

    An express method for optimally tuning analog PI and PID controllers is considered. An integral quality criterion that also minimizes the control output is proposed for optimizing control systems. The suggested criterion differs from existing ones in that the control output applied to the technological process is taken into account in a correct manner, due to which it becomes possible to maximally reduce the expenditure of material and/or energy resources in performing control of industrial equipment sets. With control organized in such a manner, less wear and a longer service life of control devices are achieved. The unimodal nature of the proposed criterion for optimally tuning a controller is numerically demonstrated using the methods of optimization theory. A functional interrelation between the optimal controller parameters and the dynamic properties of a controlled plant is numerically determined for a single-loop control system. The results obtained from simulation of transients in a control system carried out using the proposed and existing functional dependences are compared with each other. The proposed calculation formulas differ from the existing ones by their simple structure and highly accurate search for the optimal controller tuning parameters. The obtained calculation formulas are recommended for use by specialists in automation for the design and optimization of control systems.
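
    The kind of criterion described, tracking error plus a penalty on the control output, can be minimized numerically for a simple case. The sketch below assumes a first-order plant, Euler integration and an arbitrary penalty weight; the paper instead derives closed-form tuning formulas linking the optimum to the plant dynamics.

        import numpy as np
        from scipy.optimize import minimize

        # first-order plant G(s) = K/(T s + 1) under PI control, with criterion
        # J = integral of (e^2 + lam * u^2) dt, penalising the control output
        K_plant, T, dt, t_end, lam = 2.0, 5.0, 0.05, 60.0, 0.1

        def cost(params):
            kp, ki = params
            y = ui = J = 0.0
            for _ in range(int(t_end / dt)):
                e = 1.0 - y                        # unit setpoint step
                ui += ki * e * dt                  # integral term
                u = kp * e + ui                    # PI control output
                y += dt * (K_plant * u - y) / T    # plant state update (Euler)
                J += (e ** 2 + lam * u ** 2) * dt
            return J

        res = minimize(cost, x0=[1.0, 0.1], method='Nelder-Mead')
        print('optimal Kp, Ki:', res.x, 'criterion:', res.fun)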

  11. Robustly detecting differential expression in RNA sequencing data using observation weights

    PubMed Central

    Zhou, Xiaobei; Lindsay, Helen; Robinson, Mark D.

    2014-01-01

    A popular approach for comparing gene expression levels between (replicated) conditions of RNA sequencing data relies on counting reads that map to features of interest. Within such count-based methods, many flexible and advanced statistical approaches now exist and offer the ability to adjust for covariates (e.g. batch effects). Often, these methods include some sort of ‘sharing of information’ across features to improve inferences in small samples. It is important to achieve an appropriate tradeoff between statistical power and protection against outliers. Here, we study the robustness of existing approaches for count-based differential expression analysis and propose a new strategy based on observation weights that can be used within existing frameworks. The results suggest that outliers can have a global effect on differential analyses. We demonstrate the effectiveness of our new approach with real data and simulated data that reflects properties of real datasets (e.g. dispersion-mean trend) and develop an extensible framework for comprehensive testing of current and future methods. In addition, we explore the origin of such outliers, in some cases highlighting additional biological or technical factors within the experiment. Further details can be downloaded from the project website: http://imlspenticton.uzh.ch/robinson_lab/edgeR_robust/. PMID:24753412

  12. Necessary and sufficient liveness condition of GS3PR Petri nets

    NASA Astrophysics Data System (ADS)

    Liu, GaiYun; Barkaoui, Kamel

    2015-05-01

    Structural analysis is one of the most important and efficient methods to investigate the behaviour of Petri nets. Liveness is a significant behavioural property of Petri nets. Siphons, as structural objects of a Petri net, are closely related to its liveness. Many deadlock control policies for flexible manufacturing systems (FMS) modelled by Petri nets are implemented via siphon control. Most of the existing methods design liveness-enforcing supervisors by adding control places for siphons based on their controllability conditions. To compute a liveness-enforcing supervisor with as much permissive behaviour as possible, it is both theoretically and practically significant to find an exact controllability condition for siphons. However, the existing conditions, namely max-, max′-, and max″-controllability of siphons, are all overly restrictive and in general only sufficient. This paper develops a new condition called max*-controllability of the siphons in generalised systems of simple sequential processes with resources (GS3PR), which are a net subclass that can model many real-world automated manufacturing systems. We show that a GS3PR is live if all its strict minimal siphons (SMS) are max*-controlled. Compared with the existing conditions, i.e., max-, max′-, and max″-controllability of siphons, max*-controllability of the SMS is not only sufficient but also necessary. An example is used to illustrate the proposed method.

  13. Assessment of the transportation route of oversize and excessive loads in relation to the load-bearing capacity of existing bridges

    NASA Astrophysics Data System (ADS)

    Doležel, Jiří; Novák, Drahomír; Petrů, Jan

    2017-09-01

    Transportation routes of oversize and excessive loads are currently planned so as to ensure the transit of a vehicle through critical points on the road. Critical points are level intersections of roads, bridges, etc. This article presents a comprehensive procedure to determine the reliability and load-bearing capacity level of existing bridges on highways and roads using advanced methods of reliability analysis based on simulation techniques of the Monte Carlo type in combination with nonlinear finite element method (FEM) analysis. The safety index is considered the main criterion of the reliability level of existing structures and is described in current structural design standards, e.g., ISO and the Eurocodes. As an example, the load-bearing capacity of a 60-year-old single-span slab bridge made of precast prestressed concrete girders is determined for the ultimate limit state and the serviceability limit state. The structure's design load capacity was estimated by a fully probabilistic nonlinear FEM analysis using the Latin Hypercube Sampling (LHS) simulation technique. Load-bearing capacity values based on the fully probabilistic analysis are compared with the load-bearing capacity levels estimated by deterministic methods for the critical section of the most heavily loaded girders.
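
    A minimal sketch of the LHS reliability step, assuming a closed-form limit-state function g = R - E with invented distribution parameters in place of the nonlinear FEM model of the girders:

        import numpy as np
        from scipy.stats import qmc, norm, lognorm

        # Latin Hypercube Sampling of the basic variables of a placeholder
        # limit state: resistance R lognormal, load effect E normal
        sampler = qmc.LatinHypercube(d=2, seed=10)
        u = sampler.random(n=10000)                    # stratified uniforms in [0,1)^2

        R = lognorm.ppf(u[:, 0], s=0.10, scale=1200.0) # resistance, kNm (toy values)
        E = norm.ppf(u[:, 1], loc=800.0, scale=120.0)  # load effect, kNm (toy values)

        pf = np.mean(R - E < 0)                        # probability of failure
        beta = -norm.ppf(pf)                           # safety (reliability) index
        print(f'pf = {pf:.2e}, beta = {beta:.2f}')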

  14. Determining the semantic similarities among Gene Ontology terms.

    PubMed

    Taha, Kamal

    2013-05-01

    We present in this paper novel techniques that determine the semantic relationships among Gene Ontology (GO) terms. We implemented these techniques in a prototype system called GoSE, which resides between a user application and the GO database. Given a set S of GO terms, GoSE returns another set S' of GO terms, where each term in S' is semantically related to each term in S. Most current research focuses on determining the semantic similarities among GO terms based solely on their IDs and proximity to one another in the GO graph structure, while overlooking the contexts of the terms, which may lead to erroneous results. The context of a GO term T is the set of other terms whose existence in the GO graph structure is dependent on T. We propose novel techniques that determine the contexts of terms based on the concept of existence dependency, and present a stack-based sort-merge algorithm employing these techniques for determining the semantic similarities among GO terms. We evaluated GoSE experimentally and compared it with three existing methods. The results of measuring the semantic similarities among genes in KEGG and Pfam pathways, retrieved from the DBGET and Sanger Pfam databases respectively, show that our method outperforms the other three methods in recall and precision.
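
    As a simplified, hypothetical illustration of graph-context-based similarity (not GoSE's existence-dependency contexts or its stack-based sort-merge algorithm), the sketch below approximates a term's context by its ancestor set and scores two terms with a Jaccard coefficient.

    # Hypothetical toy is-a edges: child -> parents
    PARENTS = {"GO:3": ["GO:1"], "GO:4": ["GO:1", "GO:2"], "GO:5": ["GO:4"]}

    def ancestors(term):
        seen, stack = set(), [term]
        while stack:
            t = stack.pop()
            for p in PARENTS.get(t, []):
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
        return seen | {term}

    def similarity(a, b):
        A, B = ancestors(a), ancestors(b)
        return len(A & B) / len(A | B)       # Jaccard over ancestor "contexts"

    print(similarity("GO:5", "GO:3"))        # 0.2: shares GO:1 via GO:4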

  15. Sperm Na+, K+-ATPase and Ca2+-ATPase activity: A preliminary study of comparison of swim up and density gradient centrifugation methods for sperm preparation

    NASA Astrophysics Data System (ADS)

    Lestari, Silvia W.; Larasati, Manggiasih D.; Asmarinah, Mansur, Indra G.

    2018-02-01

    As one of the treatments for infertility, intrauterine insemination (IUI) still has a relatively low success rate. Poor sperm quality contributes to IUI failure, and two sperm preparation methods, swim-up (SU) and density-gradient centrifugation (DGC), are frequently used to select sperm of better quality. Sperm preparation methods mainly separate motile from immotile sperm, eliminating the seminal plasma. Sperm motility involves the structure and function of the sperm membrane in maintaining the balance of the ion transport system, which is regulated by the Na+, K+-ATPase and Ca2+-ATPase enzymes. This study re-evaluates the efficiency of these methods for selecting sperm before IUI, basing the evaluation on sperm Na+, K+-ATPase and Ca2+-ATPase activities. Fourteen infertile men from couples who underwent IUI were involved in this study, and the SU and DGC methods were used for sperm preparation. Semen analysis was performed according to the World Health Organization (WHO) 2010 reference values. After isolating the sperm membrane fraction, the Na+, K+-ATPase activity was defined as the difference in released inorganic phosphate (Pi) with and without 10 mM ouabain in the reaction, while the Ca2+-ATPase activity was determined as the difference in Pi content with and without 55 µM CaCl2. The prepared sperm showed a higher percentage of motile sperm than whole semen, and the percentage of motile sperm post-DGC was higher than post-SU. Sperm velocity followed a similar pattern: prepared sperm were faster than sperm from whole semen, and post-DGC velocity was higher than post-SU velocity. Likewise, both the Na+, K+-ATPase and Ca2+-ATPase activities of prepared sperm were higher than those of whole semen, and both activities were higher post-DGC than post-SU. Both methods therefore selected sperm with high Na+, K+-ATPase and Ca2+-ATPase activities, with the DGC method selecting sperm with high activities of both enzymes better than the SU method.

  16. A new distributed systems scheduling algorithm: a swarm intelligence approach

    NASA Astrophysics Data System (ADS)

    Haghi Kashani, Mostafa; Sarvizadeh, Raheleh; Jameii, Mahdi

    2011-12-01

    The scheduling problem in distributed systems is known to be NP-complete, and methods based on heuristic or metaheuristic search have been proposed to obtain optimal and suboptimal solutions. Task scheduling is a key factor for distributed systems to achieve better performance. In this paper, an efficient method based on a memetic algorithm is developed to solve the distributed systems scheduling problem. To balance load efficiently, the Artificial Bee Colony (ABC) algorithm is applied as the local search in the proposed memetic algorithm. The proposed method is compared to an existing memetic-based approach that uses Learning Automata as the local search. The results demonstrate that the proposed method outperforms the above-mentioned method in terms of communication cost.
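
    The core ABC local-search move can be sketched in a few lines of Python: perturb one dimension of a candidate toward or away from a random peer, and keep the trial only if it improves fitness (greedy selection). The real-valued encoding and toy cost function below are illustrative assumptions, not the paper's scheduling model.

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(x):                          # illustrative cost, to be minimized
        return np.sum(x ** 2)

    pop = rng.uniform(-5, 5, size=(10, 4))   # 10 candidate solutions, 4 dimensions
    for i in range(len(pop)):
        k = rng.choice([j for j in range(len(pop)) if j != i])   # random peer
        d = rng.integers(pop.shape[1])                           # one dimension
        phi = rng.uniform(-1, 1)
        trial = pop[i].copy()
        trial[d] = pop[i, d] + phi * (pop[i, d] - pop[k, d])     # ABC neighbourhood move
        if fitness(trial) < fitness(pop[i]):                     # greedy keep
            pop[i] = trial
    print(min(map(fitness, pop)))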

  17. Scoring from Contests

    PubMed Central

    Penn, Elizabeth Maggie

    2014-01-01

    This article presents a new model for scoring alternatives from “contest” outcomes. The model is a generalization of the method of paired comparison to accommodate comparisons between arbitrarily sized sets of alternatives in which outcomes are any division of a fixed prize. Our approach is also applicable to contests between varying quantities of alternatives. We prove that under a reasonable condition on the comparability of alternatives, there exists a unique collection of scores that produces accurate estimates of the overall performance of each alternative and satisfies a well-known axiom regarding choice probabilities. We apply the method to several problems in which varying choice sets and continuous outcomes may create problems for standard scoring methods. These problems include measuring centrality in network data and the scoring of political candidates via a “feeling thermometer.” In the latter case, we also use the method to uncover and solve a potential difficulty with common methods of rescaling thermometer data to account for issues of interpersonal comparability. PMID:24748759
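
    The classical special case that the model generalizes, Bradley-Terry scoring of pairwise contests, can be fitted with a short MM iteration (Hunter's algorithm); the toy win counts below are hypothetical. The article's contribution extends this setup to arbitrarily sized contests with divisible prizes.

    import numpy as np

    wins = np.array([[0, 3, 2],              # wins[i, j] = times i beat j (toy data)
                     [1, 0, 4],
                     [2, 0, 0]], dtype=float)
    n = wins + wins.T                        # comparisons between each pair
    p = np.ones(3)                           # initial scores

    for _ in range(200):                     # MM updates converge to the MLE
        W = wins.sum(axis=1)
        denom = (n / (p[:, None] + p[None, :])).sum(axis=1)  # n_ii = 0 drops out
        p = W / denom
        p /= p.sum()                         # fix the arbitrary scale

    print(p.round(3))                        # estimated scores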

  18. A Low-Storage-Consumption XML Labeling Method for Efficient Structural Information Extraction

    NASA Astrophysics Data System (ADS)

    Liang, Wenxin; Takahashi, Akihiro; Yokota, Haruo

    Recently, labeling methods to extract and reconstruct the structural information of XML data, which are important for many applications such as XPath query and keyword search, are becoming more attractive. To achieve efficient structural information extraction, in this paper we propose the C-DO-VLEI code, a novel update-friendly bit-vector encoding scheme based on register-length bit operations combined with the properties of Dewey Order numbers, which cannot be implemented in other relevant existing schemes such as ORDPATH. Meanwhile, the proposed method also achieves lower storage consumption because it requires neither a prefix schema nor any reserved codes for node insertion. We performed experiments to evaluate and compare the performance and storage consumption of the proposed method with those of the ORDPATH method. Experimental results show that the execution times for extracting depth information and parent node labels using the C-DO-VLEI code are about 25% and 15% less, respectively, and the average label size using the C-DO-VLEI code is about 24% smaller, compared with ORDPATH.
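
    The structural information such labels carry can be seen in a minimal Python sketch using plain dotted Dewey labels; C-DO-VLEI packs the same information into bit vectors for register-length operations, which this sketch does not attempt to reproduce.

    def depth(label: str) -> int:
        return label.count(".") + 1          # "1.3.2" is at depth 3

    def parent(label: str):
        head, _, _ = label.rpartition(".")
        return head or None                  # root has no parent

    def is_ancestor(a: str, b: str) -> bool:
        return b.startswith(a + ".")         # prefix test gives the axis check

    print(depth("1.3.2"), parent("1.3.2"), is_ancestor("1.3", "1.3.2"))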

  19. CNN-BLPred: a Convolutional neural network based predictor for β-Lactamases (BL) and their classes.

    PubMed

    White, Clarence; Ismail, Hamid D; Saigo, Hiroto; Kc, Dukka B

    2017-12-28

    The β-Lactamase (BL) enzyme family is an important class of enzymes that plays a key role in bacterial resistance to antibiotics. As the number of newly identified BL enzymes increases daily, it is imperative to develop a computational tool to classify newly identified BL enzymes into one of these classes. There are two types of classification of BL enzymes: Molecular Classification and Functional Classification. Existing computational methods only address Molecular Classification, and their performance is unsatisfactory. We addressed this by implementing a deep learning approach, the Convolutional Neural Network (CNN), and developed CNN-BLPred, an approach for the classification of BL proteins. CNN-BLPred uses Gradient Boosted Feature Selection (GBFS) to select the ideal feature set for each BL classification. Based on rigorous benchmarking of CNN-BLPred using both leave-one-out cross-validation and independent test sets, CNN-BLPred performed better than the other existing algorithms. Compared with other CNN architectures, a Recurrent Neural Network, and Random Forest, the simple CNN architecture with only one convolutional layer performs best. After feature extraction, we were able to remove ~95% of the 10,912 features using Gradient Boosted Trees. During 10-fold cross-validation, we increased the accuracy of the classic BL predictions by 7%, and we increased the accuracy of Class A, Class B, Class C, and Class D performance by an average of 25.64%. The independent test results followed a similar trend. Combined with feature selection on an exhaustive feature set and balancing methods such as Random Oversampling (ROS), Random Undersampling (RUS) and the Synthetic Minority Oversampling Technique (SMOTE), CNN-BLPred performs significantly better than existing algorithms for BL classification.
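
    A single-convolutional-layer model of the kind the paper reports working best can be sketched with Keras; the input shape, filter count and binary output below are illustrative assumptions rather than the actual CNN-BLPred configuration.

    import tensorflow as tf

    model = tf.keras.Sequential([
        # assumed input: 200 sequence positions x 20 feature channels
        tf.keras.layers.Conv1D(64, kernel_size=7, activation="relu",
                               input_shape=(200, 20)),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(1, activation="sigmoid")  # BL vs. non-BL
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.summary()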

  20. Adaptive estimation of state of charge and capacity with online identified battery model for vanadium redox flow battery

    NASA Astrophysics Data System (ADS)

    Wei, Zhongbao; Tseng, King Jet; Wai, Nyunt; Lim, Tuti Mariana; Skyllas-Kazacos, Maria

    2016-11-01

    Reliable state estimation depends largely on an accurate battery model. However, the parameters of a battery model vary with operating conditions and battery aging. Existing co-estimation methods address this model uncertainty by integrating online model identification with state estimation and have shown improved accuracy. However, cross interference may arise from the integrated framework and compromise numerical stability and accuracy. This paper therefore proposes decoupling model identification from state estimation to eliminate the possibility of cross interference. The model parameters are adapted online with the recursive least squares (RLS) method, based on which a novel joint estimator based on the extended Kalman Filter (EKF) is formulated to estimate the state of charge (SOC) and capacity concurrently. The proposed joint estimator effectively compresses the filter order, which leads to substantial improvements in computational efficiency and numerical stability. A lab-scale experiment on a vanadium redox flow battery shows that the proposed method is accurate, with good robustness to varying operating conditions and battery aging. The proposed method is further compared with some existing methods and shown to be superior in terms of accuracy, convergence speed, and computational cost.
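
    The decoupled identification stage can be illustrated with a generic recursive least squares update with a forgetting factor; the two-parameter regressor and toy data below are assumptions standing in for the battery model.

    import numpy as np

    def rls_step(theta, P, phi, y, lam=0.98):
        """One RLS update: theta = parameters, P = covariance,
        phi = regressor, y = measured output, lam = forgetting factor."""
        K = P @ phi / (lam + phi @ P @ phi)          # gain
        theta = theta + K * (y - phi @ theta)        # parameter correction
        P = (P - np.outer(K, phi) @ P) / lam         # covariance update
        return theta, P

    theta = np.zeros(2)
    P = np.eye(2) * 1e3
    rng = np.random.default_rng(0)
    true = np.array([0.5, -0.2])
    for _ in range(200):                             # toy identification run
        phi = rng.normal(size=2)
        y = phi @ true + 0.01 * rng.normal()
        theta, P = rls_step(theta, P, phi, y)
    print(theta.round(3))                            # approaches [0.5, -0.2]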

  1. Active Learning Framework for Non-Intrusive Load Monitoring: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Xin

    2016-05-16

    Non-Intrusive Load Monitoring (NILM) is a set of techniques that estimate the electricity usage of individual appliances from power measurements taken at a limited number of locations in a building. One of the key challenges in NILM is having too much data without class labels yet being unable to label the data manually due to cost or time constraints. This paper presents an active learning framework that helps existing NILM techniques to overcome this challenge. Active learning is an advanced machine learning method that interactively queries a user for class label information. Unlike most existing NILM systems that heuristically request user inputs, the proposed method only needs minimally sufficient information from a user to build a compact and yet highly representative load signature library. Initial results indicate the proposed method can reduce the user inputs by up to 90% while still achieving similar disaggregation performance compared to a heuristic method. Thus, the proposed method can substantially reduce the burden on the user, improve the performance of a NILM system with limited user inputs, and overcome the key market barriers to the wide adoption of NILM technologies.
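
    A minimal uncertainty-sampling loop conveys the flavor of the framework: repeatedly query the user for the unlabeled signature about which the current classifier is least certain. The logistic regression classifier and synthetic data below are stand-ins, not the paper's NILM pipeline.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_lab = rng.normal(size=(20, 3))
    y_lab = (X_lab[:, 0] > 0).astype(int)            # small labeled seed set
    X_unlab = rng.normal(size=(200, 3))              # unlabeled pool

    for _ in range(5):                               # five simulated user queries
        clf = LogisticRegression().fit(X_lab, y_lab)
        proba = clf.predict_proba(X_unlab)
        entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)
        q = int(np.argmax(entropy))                  # most uncertain sample
        label = int(X_unlab[q, 0] > 0)               # oracle stands in for the user
        X_lab = np.vstack([X_lab, X_unlab[q]])
        y_lab = np.append(y_lab, label)
        X_unlab = np.delete(X_unlab, q, axis=0)
    print(len(y_lab), "labels after querying")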

  2. Tracing Technological Development Trajectories: A Genetic Knowledge Persistence-Based Main Path Approach.

    PubMed

    Park, Hyunseok; Magee, Christopher L

    2017-01-01

    The aim of this paper is to propose a new method to identify main paths in a technological domain using patent citations. Previous approaches for using main path analysis have greatly improved our understanding of actual technological trajectories but nonetheless have some limitations. They have a high potential to miss some dominant patents from the identified main paths; moreover, the high network complexity of their main paths makes qualitative tracing of trajectories problematic. The proposed method searches backward and forward paths from the high-persistence patents which are identified based on a standard genetic knowledge persistence algorithm. We tested the new method by applying it to the desalination and the solar photovoltaic domains and compared the results to output from the same domains using a prior method. The empirical results show that the proposed method can dramatically reduce network complexity without missing any dominantly important patents. The main paths identified by our approach for two test cases are almost 10x less complex than the main paths identified by the existing approach. The proposed approach identifies all dominantly important patents on the main paths, but the main paths identified by the existing approach miss about 20% of dominantly important patents.

  3. Tracing Technological Development Trajectories: A Genetic Knowledge Persistence-Based Main Path Approach

    PubMed Central

    2017-01-01

    The aim of this paper is to propose a new method to identify main paths in a technological domain using patent citations. Previous approaches for using main path analysis have greatly improved our understanding of actual technological trajectories but nonetheless have some limitations. They have a high potential to miss some dominant patents from the identified main paths; moreover, the high network complexity of their main paths makes qualitative tracing of trajectories problematic. The proposed method searches backward and forward paths from the high-persistence patents which are identified based on a standard genetic knowledge persistence algorithm. We tested the new method by applying it to the desalination and the solar photovoltaic domains and compared the results to output from the same domains using a prior method. The empirical results show that the proposed method can dramatically reduce network complexity without missing any dominantly important patents. The main paths identified by our approach for two test cases are almost 10x less complex than the main paths identified by the existing approach. The proposed approach identifies all dominantly important patents on the main paths, but the main paths identified by the existing approach miss about 20% of dominantly important patents. PMID:28135304

  4. An Overview and Empirical Comparison of Distance Metric Learning Methods.

    PubMed

    Moutafis, Panagiotis; Leng, Mengjun; Kakadiaris, Ioannis A

    2016-02-16

    In this paper, we first offer an overview of advances in the field of distance metric learning. Then, we empirically compare selected methods using a common experimental protocol. The number of distance metric learning algorithms proposed keeps growing due to their effectiveness and wide application. However, existing surveys are either outdated or they focus only on a few methods. As a result, there is an increasing need to summarize the obtained knowledge in a concise, yet informative manner. Moreover, existing surveys do not conduct comprehensive experimental comparisons. On the other hand, individual distance metric learning papers compare the performance of the proposed approach with only a few related methods and under different settings. This highlights the need for an experimental evaluation using a common and challenging protocol. To this end, we conduct face verification experiments, as this task poses significant challenges due to varying conditions during data acquisition. In addition, face verification is a natural application for distance metric learning because the encountered challenge is to define a distance function that: 1) accurately expresses the notion of similarity for verification; 2) is robust to noisy data; 3) generalizes well to unseen subjects; and 4) scales well with the dimensionality and number of training samples. In particular, we utilize well-tested features to assess the performance of selected methods following the experimental protocol of the state-of-the-art database labeled faces in the wild. A summary of the results is presented along with a discussion of the insights obtained and lessons learned by employing the corresponding algorithms.

  5. The effectiveness of ground-penetrating radar surveys in the location of unmarked burial sites in modern cemeteries

    NASA Astrophysics Data System (ADS)

    Fiedler, Sabine; Illich, Bernhard; Berger, Jochen; Graw, Matthias

    2009-07-01

    Ground-penetrating radar (GPR) is a geophysical method that is commonly used in archaeological and forensic investigations, including the determination of the exact location of graves. Whilst the method is rapid and does not involve disturbance of the graves, the interpretation of GPR profiles is nevertheless difficult and often leads to incorrect results. Incorrect identifications could hinder criminal investigations and complicate burials in cemeteries that have no information on the location of previously existing graves. In order to increase the number of unmarked graves that are identified, the GPR results need to be verified by comparing them with the soil and vegetation properties of the sites examined. We used a modern cemetery to assess the results obtained with GPR, which we then compared with previously obtained tachymetric data and with an excavation of the graves where doubt existed. Certain soil conditions tended to make the application of GPR difficult on occasions, but a rough estimation of the location of the graves was always possible. The two different methods, GPR survey and tachymetry, both proved suitable for correctly determining the exact location of the majority of graves. The present study thus shows that GPR is a reliable method for determining the exact location of unmarked graves in modern cemeteries. However, the method did not allow statements to be made on the stage of decay of the bodies. Such information would assist in deciding what should be done with graves where ineffective degradation creates a problem for reusing graves following the standard resting time of 25 years.

  6. Upscaling permeability for three-dimensional fractured porous rocks with the multiple boundary method

    NASA Astrophysics Data System (ADS)

    Chen, Tao; Clauser, Christoph; Marquart, Gabriele; Willbrand, Karen; Hiller, Thomas

    2018-02-01

    Upscaling permeability of grid blocks is crucial for groundwater models. A novel upscaling method for three-dimensional fractured porous rocks is presented. The objective of the study was to compare this method with the commonly used Oda upscaling method and the volume averaging method. First, the multiple boundary method and its computational framework were defined for three-dimensional stochastic fracture networks. Then, the different upscaling methods were compared for a set of rotated fractures, for tortuous fractures, and for two discrete fracture networks. The results computed by the multiple boundary method are comparable with those of the other two methods and best fit the analytical solution for a set of rotated fractures. The errors in flow rate of the equivalent fracture model decrease when using the multiple boundary method. Furthermore, the errors of the equivalent fracture models increase from well-connected fracture networks to poorly connected ones. Finally, the diagonal components of the equivalent permeability tensors tend to follow a normal or log-normal distribution for the well-connected fracture network model with infinite fracture size. By contrast, they exhibit a power-law distribution for the poorly connected fracture network with multiple scale fractures. The study demonstrates the accuracy and the flexibility of the multiple boundary upscaling concept, which makes it attractive for incorporation into existing flow-based upscaling procedures and helps in reducing the uncertainty of groundwater models.

  7. Policy-Led Comparative Environmental Risk Assessment of Genetically Modified Crops: Testing for Increased Risk Rather Than Profiling Phenotypes Leads to Predictable and Transparent Decision-Making

    PubMed Central

    Raybould, Alan; Macdonald, Phil

    2018-01-01

    We describe two contrasting methods of comparative environmental risk assessment for genetically modified (GM) crops. Both are science-based, in the sense that they use science to help make decisions, but they differ in the relationship between science and policy. Policy-led comparative risk assessment begins by defining what would be regarded as unacceptable changes when the use of a particular GM crop replaces an accepted use of another crop. Hypotheses that these changes will not occur are tested using existing or new data, and corroboration or falsification of the hypotheses is used to inform decision-making. Science-led comparative risk assessment, on the other hand, tends to test null hypotheses of no difference between a GM crop and a comparator. The variables that are compared may have little or no relevance to any previously stated policy objective, and hence decision-making tends to be ad hoc in response to possibly spurious statistical significance. We argue that policy-led comparative risk assessment is the far more effective method. With this in mind, we caution that phenotypic profiling of GM crops, particularly with omics methods, is potentially detrimental to risk assessment. PMID:29755975

  8. A Fourier-based compressed sensing technique for accelerated CT image reconstruction using first-order methods.

    PubMed

    Choi, Kihwan; Li, Ruijiang; Nam, Haewon; Xing, Lei

    2014-06-21

    As a solution to iterative CT image reconstruction, first-order methods are prominent for their large-scale capability and fast convergence rate of O(1/k²). In practice, a CT system matrix with a large condition number may lead to slow convergence despite the theoretically promising upper bound. The aim of this study is to develop a Fourier-based scaling technique to enhance the convergence speed of first-order methods applied to CT image reconstruction. Instead of working in the projection domain, we transform the projection data and construct a data fidelity model in Fourier space. Inspired by the filtered backprojection formalism, the data are appropriately weighted in Fourier space. We formulate an optimization problem based on weighted least-squares in Fourier space and total-variation (TV) regularization in image space for parallel-beam, fan-beam and cone-beam CT geometries. To achieve maximum computational speed, the optimization problem is solved using a fast iterative shrinkage-thresholding algorithm (FISTA) with backtracking line search and a GPU implementation of projection/backprojection. The performance of the proposed algorithm is demonstrated through a series of digital simulation and experimental phantom studies. The results are compared with existing TV-regularized techniques based on statistics-based weighted least-squares as well as the basic algebraic reconstruction technique. The proposed Fourier-based compressed sensing (CS) method significantly improves both the image quality and the convergence rate compared to the existing CS techniques.
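
    The accelerated iteration at the heart of the solver can be sketched for a simpler l1-regularized least-squares problem, with a soft-thresholding prox standing in for the paper's Fourier-weighted data term and TV regularizer.

    import numpy as np

    def fista(A, b, lam, L, iters=200):
        """min_x 0.5*||Ax - b||^2 + lam*||x||_1, with L >= ||A||_2^2."""
        x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
        for _ in range(iters):
            g = A.T @ (A @ y - b)                        # gradient of smooth part
            z = y - g / L
            x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # prox step
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2     # momentum schedule
            y = x_new + ((t - 1) / t_new) * (x_new - x)
            x, t = x_new, t_new
        return x

    rng = np.random.default_rng(0)
    A = rng.normal(size=(60, 100))
    x_true = np.zeros(100); x_true[:5] = 3.0
    b = A @ x_true + 0.01 * rng.normal(size=60)
    L = np.linalg.norm(A, 2) ** 2                        # Lipschitz constant
    print(fista(A, b, lam=0.5, L=L)[:6].round(2))        # recovers the sparse support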

  9. STAKEHOLDER INVOLVEMENT THROUGHOUT HEALTH TECHNOLOGY ASSESSMENT: AN EXAMPLE FROM PALLIATIVE CARE.

    PubMed

    Brereton, Louise; Wahlster, Philip; Mozygemba, Kati; Lysdahl, Kristin Bakke; Burns, Jake; Polus, Stephanie; Tummers, Marcia; Refolo, Pietro; Sacchini, Dario; Leppert, Wojciech; Chilcott, James; Ingleton, Christine; Gardiner, Clare; Goyder, Elizabeth

    2017-01-01

    Internationally, funders require stakeholder involvement throughout health technology assessment (HTA). We report successes, challenges, and lessons learned from extensive stakeholder involvement throughout a palliative care case study that demonstrates new concepts and methods for HTA. A 5-step "INTEGRATE-HTA Model" developed within the INTEGRATE-HTA project guided the case study. Stakeholders, recruited through convenience or purposive sampling or by directly or indirectly identifying and approaching individuals and groups, participated in qualitative research or consultation meetings. During scoping, 132 stakeholders aged ≥18 years in seven countries (England, Italy, Germany, The Netherlands, Norway, Lithuania, and Poland) highlighted key issues in palliative care that assisted identification of the intervention and comparator. Subsequently, stakeholders in four countries participated in face-to-face, telephone and/or video Skype meetings to inform evidence collection and/or review assessment results. An applicability assessment, to identify contextual and implementation barriers and enablers for the case study findings, involved twelve professionals in three countries. Finally, thirteen stakeholders participated in a mock decision-making meeting in England. Views about the best methods of stakeholder involvement vary internationally. Stakeholders made valuable contributions in all stages of HTA: assisting decision making about interventions, comparators, and research questions, and providing evidence and insights into findings, gap analyses, and applicability assessments. Key challenges exist regarding inclusivity, time, and resource use. Stakeholder involvement is feasible and worthwhile throughout HTA, sometimes providing unique insights. Various methods can be used to include stakeholders, although challenges exist. Recognition of stakeholder expertise and further guidance about stakeholder consultation methods are needed.

  10. A simple method for determining stress intensity factors for a crack in bi-material interface

    NASA Astrophysics Data System (ADS)

    Morioka, Yuta

    Because of the violently oscillating nature of the stress and displacement fields near the crack tip, it is difficult to obtain stress intensity factors for a crack between two dissimilar media. For a crack in a homogeneous medium, it is common practice to find stress intensity factors through strain energy release rates. However, individual strain energy release rates do not exist for a bi-material interface crack, so alternative methods are needed to evaluate stress intensity factors. Several methods have been proposed in the past, but they involve mathematical complexity and sometimes require additional finite element analysis. The purpose of this research is to develop a simple method to find stress intensity factors for bi-material interface cracks. A finite element based projection method is proposed. It is shown that the projection method yields very accurate stress intensity factors for a crack in isotropic and anisotropic bi-material interfaces. The projection method is also compared to the displacement ratio method and the energy method proposed by other authors. Through this comparison it is found that the projection method is much simpler to apply, with accuracy comparable to that of the displacement ratio method.

  11. Myocardium tracking via matching distributions.

    PubMed

    Ben Ayed, Ismail; Li, Shuo; Ross, Ian; Islam, Ali

    2009-01-01

    The goal of this study is to investigate automatic myocardium tracking in cardiac Magnetic Resonance (MR) sequences using global distribution matching via level-set curve evolution. Rather than relying on pixelwise information as in existing approaches, distribution matching compares intensity distributions, and consequently, is well suited to the myocardium tracking problem. Starting from a manual segmentation of the first frame, two curves are evolved in order to recover the endocardium (inner myocardium boundary) and the epicardium (outer myocardium boundary) in all frames. For each curve, the evolution equation is sought following the maximization of a functional containing two terms: (1) a distribution matching term measuring the similarity between the non-parametric intensity distributions sampled from inside and outside the curve and the model distributions of the corresponding regions estimated from the previous frame; and (2) a gradient term for smoothing the curve and biasing it toward high intensity gradients. The Bhattacharyya coefficient is used as the similarity measure between distributions. The functional maximization is obtained by the Euler-Lagrange ascent equation of curve evolution and efficiently implemented via level-sets. The performance of the proposed distribution matching was quantitatively evaluated by comparisons with independent manual segmentations approved by an experienced cardiologist. The method was applied to ten 2D mid-cavity MR sequences corresponding to ten different subjects. Although neither shape prior knowledge nor curve coupling was used, quantitative evaluation demonstrated that the results were consistent with manual segmentations. The proposed method compares well with existing methods and yields satisfying reproducibility. Distribution matching leads to a myocardium tracking that is more flexible and applicable than existing methods because the algorithm uses only the current data, i.e., it does not require training, and consequently the solution is not bound to shape/intensity prior information learned from a finite training set.
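
    The similarity measure driving the curve evolution is easy to compute in isolation; the sketch below evaluates the Bhattacharyya coefficient between two normalized intensity histograms (1 for identical distributions, 0 for disjoint support), with synthetic samples standing in for the region and model intensities.

    import numpy as np

    def bhattacharyya(samples_a, samples_b, bins=64, lo=0.0, hi=255.0):
        p, _ = np.histogram(samples_a, bins=bins, range=(lo, hi))
        q, _ = np.histogram(samples_b, bins=bins, range=(lo, hi))
        p = p / p.sum()                      # discrete PMFs
        q = q / q.sum()
        return float(np.sum(np.sqrt(p * q)))

    rng = np.random.default_rng(0)
    inside = rng.normal(100, 15, 5000)       # e.g. myocardium intensities
    model = rng.normal(105, 15, 5000)        # model from the previous frame
    print(round(bhattacharyya(inside, model), 3))   # close to 1 for similar regions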

  12. Normal response function method for mass and stiffness matrix updating using complex FRFs

    NASA Astrophysics Data System (ADS)

    Pradhan, S.; Modak, S. V.

    2012-10-01

    Quite often a structural dynamic finite element model needs to be updated so as to accurately predict dynamic characteristics such as natural frequencies and mode shapes. Since in many situations undamped natural frequencies and mode shapes need to be predicted, it has generally been the practice in these situations to update only the mass and stiffness matrices so as to obtain a reliable prediction model. Updating using frequency response functions (FRFs) has been one of the widely used approaches, including for updating mass and stiffness matrices. However, the problem with FRF-based methods for updating mass and stiffness matrices is that they are based on complex FRFs. Using complex FRFs to update mass and stiffness matrices is not theoretically correct, as complex FRFs are affected not only by these two matrices but also by the damping matrix. Therefore, in situations where only the mass and stiffness matrices are to be updated using FRFs, complex-FRF-based updating formulations are not fully justified and lead to inaccurate updated models. This paper addresses this difficulty and proposes an improved FRF-based finite element model updating procedure using the concept of normal FRFs. The proposed method is a modified version of the existing response function method, which is based on complex FRFs. The effectiveness of the proposed method is validated through a numerical study of a simple but representative beam structure. The effects of coordinate incompleteness and the robustness of the method in the presence of noise are investigated. The updating results obtained by the improved method are compared with those of the existing response function method, and the performance of the two approaches is compared for lightly, moderately and heavily damped structures. It is found that the proposed improved method is effective in updating mass and stiffness matrices in all cases of complete and incomplete data and with all levels and types of damping.

  13. Methodology for Computational Fluid Dynamic Validation for Medical Use: Application to Intracranial Aneurysm.

    PubMed

    Paliwal, Nikhil; Damiano, Robert J; Varble, Nicole A; Tutino, Vincent M; Dou, Zhongwang; Siddiqui, Adnan H; Meng, Hui

    2017-12-01

    Computational fluid dynamics (CFD) is a promising tool to aid in clinical diagnoses of cardiovascular diseases. However, it uses assumptions that simplify the complexities of real cardiovascular flow. Because of the high stakes in the clinical setting, it is critical to quantify the effect of these assumptions on CFD simulation results. However, existing CFD validation approaches do not quantify the error in simulation results due to the CFD solver's modeling assumptions; instead, they directly compare CFD simulation results against validation data. Thus, to quantify the accuracy of a CFD solver, we developed a validation methodology that calculates the CFD model error (arising from modeling assumptions). Our methodology identifies independent error sources in CFD and validation experiments, and calculates the model error by parsing out other sources of error inherent in simulation and experiments. To demonstrate the method, we simulated the flow field of a patient-specific intracranial aneurysm (IA) in the commercial CFD software STAR-CCM+. Particle image velocimetry (PIV) provided validation datasets for the flow field on two orthogonal planes. The average model error in the STAR-CCM+ solver was 5.63 ± 5.49% along the intersecting validation line of the orthogonal planes. Furthermore, we demonstrated that our validation method is superior to existing validation approaches by applying three representative existing validation techniques to our CFD and experimental dataset and comparing the validation results. Our validation methodology offers a streamlined workflow to extract the "true" accuracy of a CFD solver.

  14. A comparative analysis of biclustering algorithms for gene expression data

    PubMed Central

    Eren, Kemal; Deveci, Mehmet; Küçüktunç, Onur; Çatalyürek, Ümit V.

    2013-01-01

    The need to analyze high-dimension biological data is driving the development of new data mining methods. Biclustering algorithms have been successfully applied to gene expression data to discover local patterns, in which a subset of genes exhibit similar expression levels over a subset of conditions. However, it is not clear which algorithms are best suited for this task. Many algorithms have been published in the past decade, most of which have been compared only to a small number of algorithms. Surveys and comparisons exist in the literature, but because of the large number and variety of biclustering algorithms, they are quickly outdated. In this article we partially address this problem of evaluating the strengths and weaknesses of existing biclustering methods. We used the BiBench package to compare 12 algorithms, many of which were recently published or have not been extensively studied. The algorithms were tested on a suite of synthetic data sets to measure their performance on data with varying conditions, such as different bicluster models, varying noise, varying numbers of biclusters and overlapping biclusters. The algorithms were also tested on eight large gene expression data sets obtained from the Gene Expression Omnibus. Gene Ontology enrichment analysis was performed on the resulting biclusters, and the best enrichment terms are reported. Our analyses show that the biclustering method and its parameters should be selected based on the desired model, whether that model allows overlapping biclusters, and its robustness to noise. In addition, we observe that the biclustering algorithms capable of finding more than one model are more successful at capturing biologically relevant clusters. PMID:22772837

  15. Matrix elements for type 1 unitary irreducible representations of the Lie superalgebra gl(m|n)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gould, Mark D.; Isaac, Phillip S.; Werry, Jason L.

    Using our recent results on eigenvalues of invariants associated to the Lie superalgebra gl(m|n), we employ characteristic identities to derive explicit matrix element formulae for all gl(m|n) generators, particularly non-elementary generators, on finite dimensional type 1 unitary irreducible representations. We compare our results with existing works that deal with only subsets of the class of type 1 unitary representations, all of which only present explicit matrix elements for elementary generators. Our work therefore provides an important extension to existing methods, and thus highlights the strength of our techniques which exploit the characteristic identities.

  16. Polymer/Silicate Nanocomposites Developed for Improved Thermal Stability and Barrier Properties

    NASA Technical Reports Server (NTRS)

    Campbell, Sandi G.

    2001-01-01

    The nanoscale reinforcement of polymers is becoming an attractive means of improving the properties and stability of polymers. Polymer-silicate nanocomposites are a relatively new class of materials with phase dimensions typically on the order of a few nanometers. Because of their nanometer-size features, nanocomposites possess unique properties typically not shared by more conventional composites. Polymer-layered silicate nanocomposites can attain a certain degree of stiffness, strength, and barrier properties with far less ceramic content than comparable glass- or mineral-reinforced polymers. Reinforcement of existing and new polyimides by this method offers an opportunity to greatly improve existing polymer properties without altering current synthetic or processing procedures.

  17. Three-Dimensional Flow of Nanofluid Induced by an Exponentially Stretching Sheet: An Application to Solar Energy

    PubMed Central

    Khan, Junaid Ahmad; Mustafa, M.; Hayat, T.; Sheikholeslami, M.; Alsaedi, A.

    2015-01-01

    This work deals with the three-dimensional flow of a nanofluid over a bi-directional exponentially stretching sheet. The effects of Brownian motion and thermophoretic diffusion of nanoparticles are considered in the mathematical model. The temperature and nanoparticle volume fraction at the sheet are also distributed exponentially. Local similarity solutions are obtained by an implicit finite difference scheme known as the Keller-box method. The results are compared with existing studies in some limiting cases and found to be in good agreement. The results reveal the existence of interesting Sparrow-Gregg-type hills for the temperature distribution corresponding to some range of parametric values. PMID:25785857

  18. Numerical Polynomial Homotopy Continuation Method and String Vacua

    DOE PAGES

    Mehta, Dhagash

    2011-01-01

    Finding vacua for the four-dimensional effective theories for supergravity which descend from flux compactifications, and analyzing them according to their stability, is one of the central problems in string phenomenology. Except for some simple toy models, it is, however, difficult to find all the vacua analytically. Recently developed algorithmic methods based on symbolic computer algebra can be of great help in the more realistic models. However, they suffer from serious algorithmic complexities and are limited to small system sizes. In this paper, we review a numerical method called the numerical polynomial homotopy continuation (NPHC) method, first used in the areas of lattice field theories, which by construction finds all of the vacua of a given potential that is known to have only isolated solutions. The NPHC method is known to suffer from no major algorithmic complexities and is embarrassingly parallelizable, and hence its applicability goes way beyond the existing symbolic methods. We first solve a simple toy model as a warm-up example to demonstrate the NPHC method at work. We then show that all the vacua of a more complicated model of a compactified M theory, which has an SU(3) structure, can be obtained by using a desktop machine in just about an hour, a feat which was reported to be prohibitively difficult by the existing symbolic methods. Finally, we compare the various technicalities of the two methods.
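
    The core of the NPHC idea can be sketched in one dimension: start from the known roots of g(x) = x^n - 1, deform H(x, t) = (1 - t)γg(x) + tf(x) toward the target polynomial f, and Newton-correct each tracked root at every step (the random complex γ is the standard "gamma trick" for avoiding singular paths). Real NPHC codes handle multivariate systems, adaptive step sizes and path tracking far more carefully; this is only an illustration.

    import numpy as np

    f = np.poly1d([1.0, 0.0, -3.0, 1.0])        # target: x^3 - 3x + 1
    g = np.poly1d([1.0, 0.0, 0.0, -1.0])        # start system: x^3 - 1
    gamma = np.exp(1j * 0.7)                    # random complex phase ("gamma trick")

    roots = np.exp(2j * np.pi * np.arange(3) / 3)    # known roots of g
    for t in np.linspace(0.0, 1.0, 101)[1:]:
        H = (1 - t) * gamma * g + t * f
        dH = H.deriv()
        for _ in range(10):                     # Newton correction at each step
            roots = roots - H(roots) / dH(roots)

    print(np.sort_complex(roots.round(6)))      # ~ the three real roots of x^3 - 3x + 1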

  19. A hierarchical model for probabilistic independent component analysis of multi-subject fMRI studies

    PubMed Central

    Tang, Li

    2014-01-01

    An important goal in fMRI studies is to decompose the observed series of brain images to identify and characterize underlying brain functional networks. Independent component analysis (ICA) has been shown to be a powerful computational tool for this purpose. Classic ICA has been successfully applied to single-subject fMRI data. The extension of ICA to group inferences in neuroimaging studies, however, is challenging due to the unavailability of a pre-specified group design matrix. Existing group ICA methods generally concatenate observed fMRI data across subjects on the temporal domain and then decompose multi-subject data in a similar manner to single-subject ICA. The major limitation of existing methods is that they ignore between-subject variability in spatial distributions of brain functional networks in group ICA. In this paper, we propose a new hierarchical probabilistic group ICA method to formally model subject-specific effects in both temporal and spatial domains when decomposing multi-subject fMRI data. The proposed method provides model-based estimation of brain functional networks at both the population and subject level. An important advantage of the hierarchical model is that it provides a formal statistical framework to investigate similarities and differences in brain functional networks across subjects, e.g., subjects with mental disorders or neurodegenerative diseases such as Parkinson’s as compared to normal subjects. We develop an EM algorithm for model estimation where both the E-step and M-step have explicit forms. We compare the performance of the proposed hierarchical model with that of two popular group ICA methods via simulation studies. We illustrate our method with application to an fMRI study of Zen meditation. PMID:24033125
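
    The existing temporal-concatenation approach that the hierarchical model improves upon can be sketched with scikit-learn's FastICA: stack subjects along the time axis and run a single spatial ICA, which by construction ignores the between-subject spatial variability the proposed model captures. The toy dimensions below are assumptions.

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    n_subj, T, V, k = 5, 100, 300, 3           # subjects, timepoints, voxels, components

    maps = rng.normal(size=(k, V))             # shared spatial maps (toy ground truth)
    data = [rng.normal(size=(T, k)) @ maps + 0.1 * rng.normal(size=(T, V))
            for _ in range(n_subj)]
    X = np.vstack(data)                        # (n_subj*T, V): temporal concatenation

    ica = FastICA(n_components=k, random_state=0)
    group_maps = ica.fit_transform(X.T).T      # (k, V) independent spatial maps
    time_courses = ica.mixing_                 # (n_subj*T, k) stacked time courses
    print(group_maps.shape, time_courses.shape)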

  20. A Comparative Analysis of Child Welfare Services through the Eyes of African American, Caucasian, and Latino Parents

    ERIC Educational Resources Information Center

    Ayon, Cecilia; Lee, Cheryl D.

    2005-01-01

    Objective: The purpose of this study was to find if differences exist among 88 African American, Caucasian, and Latino families who received child welfare services. Method: A secondary data analysis of cross-sectional survey data employing standardized measures was used for this study. Family preservation (FP) services were received by 49…

  1. Assessment of modification factors for a row of bolts or timber connectors

    Treesearch

    Thomas Lee Wilkinson

    1980-01-01

    When bolts or timber connectors are used in a row, with load applied parallel to the row, load will be unequally distributed among the fasteners. This study assessed methods of predicting this unequal load distribution, looked at how joint variables can affect the distribution, and compared the predictions with data existing in the literature. Presently used design...

  2. Active-Passive-Intuitive Learning Theory: A Unified Theory of Learning and Development

    ERIC Educational Resources Information Center

    Sigette, Tyson

    2009-01-01

    This paper addresses many theories of learning and human development which are very similar with regards as to how they suggest learning occurs. The differences in most of the theories exist in how they treat the development of the learner compared to methods of teaching. Most of the major learning theories taught to educators today are based on…

  3. Development of a Short Form of the Boston Naming Test for Individuals with Aphasia

    ERIC Educational Resources Information Center

    del Toro, Christina M.; Bislick, Lauren P.; Comer, Matthew; Velozo, Craig; Romero, Sergio; Rothi, Leslie J. Gonzalez; Kendall, Diane L.

    2011-01-01

    Purpose: The purpose of this study was to develop a short form of the Boston Naming Test (BNT; Kaplan, Goodglass, & Weintraub, 2001) for individuals with aphasia and compare it with 2 existing short forms originally analyzed with responses from people with dementia and neurologically healthy adults. Method: Development of the new BNT-Aphasia Short…

  4. Experimental Study Comparing a Traditional Approach to Performance Appraisal Training to a Whole-Brain Training Method at C.B. Fleet Laboratories

    ERIC Educational Resources Information Center

    Selden, Sally; Sherrier, Tom; Wooters, Robert

    2012-01-01

    The purpose of this study is to examine the effects of a new approach to performance appraisal training. Motivated by split-brain theory and existing studies of cognitive information processing and performance appraisals, this exploratory study examined the effects of a whole-brain approach to training managers for implementing performance…

  5. Anthro-Centric Multisensory Interfaces for Sensory Augmentation of Telesurgery

    DTIC Science & Technology

    2011-06-01

    compares favorably to standing astride an operating table using laparoscopic instruments, the most favorable ergonomics would facilitate free movement...either through direct contact with the tissues or indirect contact via rigid laparoscopic instruments), opportunities now exist to utilize other...tele-surgical methods. Laparoscopic instruments were initially developed as extended versions of their counterparts used in open procedures (e.g

  6. A Quantitative Experimental Study of the Effectiveness of Systems to Identify Network Attackers

    ERIC Educational Resources Information Center

    Handorf, C. Russell

    2016-01-01

    This study analyzed the meta-data collected from a honeypot that was run by the Federal Bureau of Investigation for a period of 5 years. This analysis compared the use of existing industry methods and tools, such as Intrusion Detection System alerts, network traffic flow and system log traffic, within the Open Source Security Information Manager…

  7. Comparison of Coaches' Perceptions and Officials Guidance towards Health Promotion in French Sport Clubs: A Mixed Method Study

    ERIC Educational Resources Information Center

    Van Hoye, A.; Heuzé, J.-P.; Larsen, T.; Sarrazin, P.

    2016-01-01

    Despite the call to improve health promotion (HP) in sport clubs in the existing literature, little is known about sport clubs' organizational capacity. Grounded within the setting-based framework, this study compares HP activities and guidance among 10 football clubs. At least three grassroots coaches from each club (n = 68) completed the Health…

  8. Estimating root biomass and distribution after fire in a Great Basin woodland using cores and pits

    Treesearch

    Benjamin M. Rau; Dale W. Johnson; Jeanne C. Chambers; Robert R. Blank; Annmarie Lucchesi

    2009-01-01

    Quantifying root biomass is critical to an estimation and understanding of ecosystem net primary production, biomass partitioning, and belowground competition. We compared 2 methods for determining root biomass: a new soil-coring technique and traditional excavation of quantitative pits. We conducted the study in an existing Joint Fire Sciences demonstration area in...

  9. An Empirical Study of the Distributional Changes in Higher Education among East, Middle and West China

    ERIC Educational Resources Information Center

    Jiang, Chunjiao; Li, Song

    2008-01-01

    Based on the quantitative research and comparative study method, this paper attempts to make a systematic study and analysis of regional differences which have existed since 1949 in higher education among East, Middle and West China. The study is intended to explore the causes, regional differences, social changes, and their co-related…

  10. Reader Reaction On the generalized Kruskal-Wallis test for genetic association studies incorporating group uncertainty

    PubMed Central

    Wu, Baolin; Guan, Weihua

    2015-01-01

    Acar and Sun (2013, Biometrics, 69, 427-435) presented a generalized Kruskal-Wallis (GKW) test for genetic association studies that incorporated the genotype uncertainty and showed its robust and competitive performance compared to existing methods. We present another interesting way to derive the GKW test via a rank linear model. PMID:25351417

  11. Reader reaction on the generalized Kruskal-Wallis test for genetic association studies incorporating group uncertainty.

    PubMed

    Wu, Baolin; Guan, Weihua

    2015-06-01

    Acar and Sun (2013, Biometrics 69, 427-435) presented a generalized Kruskal-Wallis (GKW) test for genetic association studies that incorporated the genotype uncertainty and showed its robust and competitive performance compared to existing methods. We present another interesting way to derive the GKW test via a rank linear model. © 2014, The International Biometric Society.

  12. Effect of manufacturing defects on optical performance of discontinuous freeform lenses.

    PubMed

    Wang, Kai; Liu, Sheng; Chen, Fei; Liu, Zongyuan; Luo, Xiaobing

    2009-03-30

    Discontinuous freeform lens based secondary optics are essential to LED illumination systems. Surface roughness and smooth transitions between two discrete sub-surfaces are two of the most common manufacturing defects in discontinuous freeform lenses. The effects of these two manufacturing defects on the optical performance of two discontinuous freeform lenses were investigated by comparing experimental results with numerical simulation results based on the Monte Carlo ray-tracing method. The results demonstrated that defect-induced surface roughness had only a small effect on the light output efficiency and the shape of the light pattern of the PMMA lens, but significantly affected the uniformity of the light pattern, which declined from 0.644 to 0.313. The smooth transition surfaces with deviation angles of more than 60 degrees in the BK7 glass lens not only reduced the uniformity of the light pattern, but also reduced the light output efficiency from 96.9% to 91.0% and heavily deformed the shape of the light pattern. Compared with surface roughness, the smooth transition surfaces had a much more adverse effect on the optical performance of discontinuous freeform lenses. Three methods were suggested to improve the illumination performance according to the analysis and discussion.

  13. Efficient experimental design of high-fidelity three-qubit quantum gates via genetic programming

    NASA Astrophysics Data System (ADS)

    Devra, Amit; Prabhu, Prithviraj; Singh, Harpreet; Arvind; Dorai, Kavita

    2018-03-01

    We have designed efficient quantum circuits for the three-qubit Toffoli (controlled-controlled-NOT) and the Fredkin (controlled-SWAP) gate, optimized via genetic programming methods. The gates thus obtained were experimentally implemented on a three-qubit NMR quantum information processor, with a high fidelity. Toffoli and Fredkin gates in conjunction with the single-qubit Hadamard gates form a universal gate set for quantum computing and are an essential component of several quantum algorithms. Genetic algorithms are stochastic search algorithms based on the logic of natural selection and biological genetics and have been widely used for quantum information processing applications. We devised a new selection mechanism within the genetic algorithm framework to select individuals from a population. We call this mechanism the "Luck-Choose" mechanism and were able to achieve faster convergence to a solution using this mechanism, as compared to existing selection mechanisms. The optimization was performed under the constraint that the experimentally implemented pulses are of short duration and can be implemented with high fidelity. We demonstrate the advantage of our pulse sequences by comparing our results with existing experimental schemes and other numerical optimization methods.
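
    For contrast with the paper's "Luck-Choose" mechanism (whose details are specific to that work and not reproduced here), a standard tournament selection step looks like this in Python; the fitness function is any callable scoring a candidate encoding.

    import random

    def tournament_select(population, fitness, k=3):
        """Pick the best of k randomly drawn individuals (maximizing fitness)."""
        contenders = random.sample(population, k)
        return max(contenders, key=fitness)

    random.seed(0)
    pop = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(20)]
    fit = lambda ind: -sum(x * x for x in ind)   # toy fitness to maximize
    parent = tournament_select(pop, fit)
    print([round(x, 2) for x in parent])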

  14. Trumpet intonation pedagogy in the United States at the beginning of the twenty-first century

    NASA Astrophysics Data System (ADS)

    Flunker, Joel Kent

    Although a wealth of pedagogical material exists for most aspects of trumpet performance, there is a comparatively small body of such material devoted to intonation. In addition, while there is general agreement that good intonation is essential to a successful performance, there is no widely accepted methodology for helping students achieve it. This study first identifies the factors that influence trumpet intonation and explores the reasons why intonation receives less pedagogical emphasis than other performance elements. It continues with an examination of how intonation is addressed in existing published materials. This summary of the approaches used in pedagogical publications is compared with methods practiced in applied trumpet studios, based on interviews with respected trumpet professors in major music schools and conservatories. The goal of the study is to define and highlight issues that can help provide a clear focus for future attempts to improve the way intonation is taught and studied.

  15. Comparison of Coupled Radiative Flow Solutions with Project Fire 2 Flight Data

    NASA Technical Reports Server (NTRS)

    Olynick, David R.; Henline, W. D.; Chambers, Lin Hartung; Candler, G. V.

    1995-01-01

    A nonequilibrium, axisymmetric, Navier-Stokes flow solver with coupled radiation has been developed for use in the design of thermal protection systems for vehicles where radiation effects are important. The present method has been compared with an existing flow and radiation solver and with the Project Fire 2 experimental data. Good agreement has been obtained over the entire Fire 2 trajectory with the experimentally determined values of the stagnation radiation intensity in the 0.2-6.2 eV range and with the total stagnation heating. The effects of a number of flow models are examined to determine which combination of physical models produces the best agreement with the experimental data. These models include radiation coupling, multitemperature thermal models, and finite rate chemistry. Finally, the computational efficiency of the present model is evaluated. The radiation properties model developed for this study is shown to offer significant computational savings compared to existing codes.

  16. An integral equation method for calculating sound field diffracted by a rigid barrier on an impedance ground.

    PubMed

    Zhao, Sipei; Qiu, Xiaojun; Cheng, Jianchun

    2015-09-01

    This paper proposes a different method for calculating a sound field diffracted by a rigid barrier based on the integral equation method, where a virtual boundary is assumed above the rigid barrier to divide the whole space into two subspaces. Based on the Kirchhoff-Helmholtz equation, the sound field in each subspace is determined with the source inside and the boundary conditions on the surface, and then the diffracted sound field is obtained by using the continuation conditions on the virtual boundary. Simulations are carried out to verify the feasibility of the proposed method. Compared to the MacDonald method and other existing methods, the proposed method is a rigorous solution for whole space and is also much easier to understand.

  17. Denoising imaging polarimetry by adapted BM3D method.

    PubMed

    Tibbs, Alexander B; Daly, Ilse M; Roberts, Nicholas W; Bull, David R

    2018-04-01

    In addition to the visual information contained in intensity and color, imaging polarimetry allows visual information to be extracted from the polarization of light. However, a major challenge of imaging polarimetry is image degradation due to noise. This paper investigates the mitigation of noise through denoising algorithms and compares existing denoising algorithms with a new method based on BM3D (Block Matching 3D). This algorithm, Polarization-BM3D (PBM3D), gives visual quality superior to the state of the art across all images and noise standard deviations tested. We show, by comparison with spectral polarimetry measurements, that denoising polarization images using PBM3D allows the degree of polarization to be calculated more accurately.
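    A small Python sketch of the surrounding pipeline only, not of PBM3D itself: the four-orientation channel model, the Stokes computation, and the degree of linear polarization below are textbook conventions (not necessarily the paper's setup), and a Gaussian filter stands in for the denoiser purely for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def stokes(i0, i45, i90, i135):
    # Linear Stokes parameters from four polarizer orientations
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

def dolp(s0, s1, s2, eps=1e-9):
    # Degree of linear polarization
    return np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + eps)

rng = np.random.default_rng(0)
clean = rng.random((64, 64))
channels = [np.clip(clean + 0.1 * rng.normal(size=clean.shape), 0.0, None)
            for _ in range(4)]                      # noisy 0/45/90/135 images
denoised = [gaussian_filter(c, sigma=1.0) for c in channels]  # denoiser stand-in
p = dolp(*stokes(*denoised))
```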

  18. IMPACCT Kids’ Care: a real-world example of stakeholder involvement in comparative effectiveness research

    PubMed Central

    Likumahuwa-Ackman, Sonja; Angier, Heather; Sumic, Aleksandra; Harding, Rose L; Cottrell, Erika K; Cohen, Deborah J; Nelson, Christine A; Burdick, Timothy E; Wallace, Lorraine S; Gallia, Charles; DeVoe, Jennifer E

    2015-01-01

    The Patient-Centered Outcomes Research Institute has accelerated conversations about the importance of actively engaging stakeholders in all aspects of comparative effectiveness research (CER). Other scientific disciplines have a history of stakeholder engagement, yet few empirical examples exist of how these stakeholders can inform and enrich CER. Here we present a case study that includes the methods used to engage stakeholders, what we learned from them, and how we incorporated their ideas in a CER project. We selected stakeholders from key groups, built relationships with them, and collected their feedback through interviews, observation, and ongoing meetings during the four research process phases: proposal development, adapting study methods, understanding the context, and information technology tool design and refinement. PMID:26274796

  19. A Differential Evolution Algorithm Based on Nikaido-Isoda Function for Solving Nash Equilibrium in Nonlinear Continuous Games

    PubMed Central

    He, Feng; Zhang, Wei; Zhang, Guoqiang

    2016-01-01

    A differential evolution algorithm for solving Nash equilibrium in nonlinear continuous games, called NIDE (Nikaido-Isoda differential evolution), is presented in this paper. At each generation, parent and child strategy profiles are compared pairwise, with the Nikaido-Isoda function adopted as the fitness function. In practice, the Nash equilibrium of a nonlinear game model with a cubic cost function and a quadratic demand function is solved, and the method can also be applied to non-concave payoff functions. Moreover, NIDE is compared with the existing Nash Domination Evolutionary Multiplayer Optimization (NDEMO); the results showed that NIDE was significantly better than NDEMO, requiring fewer iterations and a shorter running time. These numerical examples suggest that the NIDE method is potentially useful. PMID:27589229
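    A compact Python sketch of the approach, assuming a two-player game with toy quadratic payoffs rather than the paper's cubic-cost/quadratic-demand model: the Nikaido-Isoda function measures the total gain available from unilateral deviations, and its maximum over candidate deviations serves as the fitness driven toward zero inside a standard DE/rand/1 loop.

```python
import numpy as np

def u1(x1, x2):
    return -(x1 - 1.0) ** 2 - 0.1 * x1 * x2    # toy payoff, player 1

def u2(x1, x2):
    return -(x2 + 0.5) ** 2 - 0.1 * x1 * x2    # toy payoff, player 2

def nikaido_isoda(x, y):
    # Psi(x, y) = sum_i [u_i(y_i, x_-i) - u_i(x)]: total gain if each
    # player unilaterally deviates from profile x to profile y.
    return (u1(y[0], x[1]) - u1(x[0], x[1])) + (u2(x[0], y[1]) - u2(x[0], x[1]))

def ni_fitness(x, probes):
    # max_y Psi(x, y) over candidate deviations; 0 at a Nash equilibrium
    # (no profitable unilateral deviation), positive otherwise.
    return max(max(nikaido_isoda(x, y) for y in probes), 0.0)

rng = np.random.default_rng(0)
pop = rng.uniform(-2.0, 2.0, size=(20, 2))
probes = rng.uniform(-2.0, 2.0, size=(100, 2))     # deviation candidates
for _ in range(300):
    for i in range(len(pop)):
        a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
        child = a + 0.8 * (b - c)                  # DE/rand/1 mutation
        if ni_fitness(child, probes) < ni_fitness(pop[i], probes):
            pop[i] = child                         # pairwise parent/child compare
best = min(pop, key=lambda x: ni_fitness(x, probes))
```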

  20. IMPACCT Kids' Care: a real-world example of stakeholder involvement in comparative effectiveness research.

    PubMed

    Likumahuwa-Ackman, Sonja; Angier, Heather; Sumic, Aleksandra; Harding, Rose L; Cottrell, Erika K; Cohen, Deborah J; Nelson, Christine A; Burdick, Timothy E; Wallace, Lorraine S; Gallia, Charles; DeVoe, Jennifer E

    2015-08-01

    The Patient-Centered Outcomes Research Institute has accelerated conversations about the importance of actively engaging stakeholders in all aspects of comparative effectiveness research (CER). Other scientific disciplines have a history of stakeholder engagement, yet few empirical examples exist of how these stakeholders can inform and enrich CER. Here we present a case study that includes the methods used to engage stakeholders, what we learned from them, and how we incorporated their ideas in a CER project. We selected stakeholders from key groups, built relationships with them, and collected their feedback through interviews, observation, and ongoing meetings during the four research process phases: proposal development, adapting study methods, understanding the context, and information technology tool design and refinement.

  1. Energy Savings in Cellular Networks Based on Space-Time Structure of Traffic Loads

    NASA Astrophysics Data System (ADS)

    Sun, Jingbo; Wang, Yue; Yuan, Jian; Shan, Xiuming

    Since most of the energy consumed by telecommunication infrastructure is due to Base Transceiver Stations (BTSs), switching off BTSs when the traffic load is low has been recognized as an effective way of saving energy. In this letter, an energy saving scheme is proposed to minimize the number of active BTSs based on the space-time structure of traffic loads as determined by principal component analysis. Compared to existing methods, our approach models traffic loads more accurately and has a much smaller input size. As it is implemented in an off-line manner, our scheme also avoids excessive communication and computing overheads. Simulation results show that the proposed method achieves comparable performance in energy savings.
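    A toy Python sketch of the space-time idea, under assumptions not taken from the letter: stack per-BTS hourly loads into a matrix, keep the leading principal components as a low-rank load model, and plan sleep hours off-line where the modeled load is low (coverage constraints are ignored here).

```python
import numpy as np

rng = np.random.default_rng(0)
hours, n_bts = 168, 40                          # one week of hourly loads
diurnal = np.sin(np.linspace(0.0, 14.0 * np.pi, hours)) ** 2  # shared shape
loads = np.outer(diurnal, rng.uniform(0.3, 1.0, n_bts))
loads += 0.05 * rng.normal(size=(hours, n_bts))
loads = np.clip(loads, 0.0, None)

# Low-rank space-time model: keep the leading principal components
mean = loads.mean(axis=0)
U, s, Vt = np.linalg.svd(loads - mean, full_matrices=False)
k = 2
model = mean + (U[:, :k] * s[:k]) @ Vt[:k]      # rank-k reconstruction

# Off-line plan: a BTS may sleep in hours where its modeled load stays
# below a fraction of its peak (neighbor coverage handover is ignored)
sleep = model < 0.2 * model.max(axis=0)
```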

  2. Iterative image-domain decomposition for dual-energy CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niu, Tianye; Dong, Xue; Petrongolo, Michael

    2014-04-15

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the proposed method but with an edge-preserving regularization term. Results: On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by the direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of the Catphan phantom. The proposed method achieves lower electron density measurement error as compared to that by the direct matrix inversion, and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusions: The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. By exploring the full variance-covariance properties of the decomposed images and utilizing the edge predetection, the proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
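    A schematic Python sketch of the penalized weighted least-squares core described above; the decomposition matrix, weights, and data are toy values, the edge-based down-weighting is omitted, and SciPy's conjugate gradient stands in for the authors' solver.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 64                                   # n x n image, two energy channels
A = np.array([[1.0, 0.5],                # toy 2x2 decomposition matrix
              [0.3, 1.2]])
W = np.eye(2)                            # stand-in for inverse variance-covariance
lam = 1.0                                # smoothness weight
rng = np.random.default_rng(0)
b = rng.normal(size=(2, n, n))           # toy high/low-energy CT images

def laplacian_like(x):
    # D^T D x: square-sum-of-neighbor-differences penalty, per material image
    out = np.zeros_like(x)
    dv = x[:, 1:, :] - x[:, :-1, :]
    dh = x[:, :, 1:] - x[:, :, :-1]
    out[:, 1:, :] += dv; out[:, :-1, :] -= dv
    out[:, :, 1:] += dh; out[:, :, :-1] -= dh
    return out

def normal_op(xflat):
    # (A^T W A + lam * D^T D) x: normal equations of the penalized WLS
    x = xflat.reshape(2, n, n)
    AtWA = A.T @ W @ A
    y = np.einsum('ij,jkl->ikl', AtWA, x) + lam * laplacian_like(x)
    return y.ravel()

op = LinearOperator((2 * n * n, 2 * n * n), matvec=normal_op)
rhs = np.einsum('ij,jkl->ikl', A.T @ W, b).ravel()    # A^T W b
x, info = cg(op, rhs, maxiter=200)                    # decomposed images
```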

  3. Two methods for proteomic analysis of formalin-fixed, paraffin embedded tissue result in differential protein identification, data quality, and cost.

    PubMed

    Luebker, Stephen A; Wojtkiewicz, Melinda; Koepsell, Scott A

    2015-11-01

    Formalin-fixed paraffin-embedded (FFPE) tissue is a rich source of clinically relevant material that can yield important translational biomarker discovery using proteomic analysis. Protocols for analyzing FFPE tissue by LC-MS/MS exist, but standardization of procedures and critical analysis of data quality is limited. This study compared and characterized data obtained from FFPE tissue using two methods: a urea in-solution digestion method (UISD) and a commercially available Qproteome FFPE Tissue Kit method (Qkit). Each method was performed independently three times on serial sections of homogenous FFPE tissue to minimize pre-analytical variations and analyzed with three technical replicates by LC-MS/MS. Data were evaluated for reproducibility and physiochemical distribution, which highlighted differences in the ability of each method to identify proteins of different molecular weights and isoelectric points. Each method replicate resulted in a significant number of new protein identifications, and both methods identified significantly more proteins using three technical replicates as compared to only two. UISD was cheaper, required less time, and introduced significant protein modifications as compared to the Qkit method, which provided more precise and higher protein yields. These data highlight significant variability among method replicates and type of method used, despite minimizing pre-analytical variability. Utilization of only one method or too few replicates (both method and technical) may limit the subset of proteomic information obtained. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. A Nonparametric, Multiple Imputation-Based Method for the Retrospective Integration of Data Sets.

    PubMed

    Carrig, Madeline M; Manrique-Vallier, Daniel; Ranby, Krista W; Reiter, Jerome P; Hoyle, Rick H

    2015-01-01

    Complex research questions often cannot be addressed adequately with a single data set. One sensible alternative to the high cost and effort associated with the creation of large new data sets is to combine existing data sets containing variables related to the constructs of interest. The goal of the present research was to develop a flexible, broadly applicable approach to the integration of disparate data sets that is based on nonparametric multiple imputation and the collection of data from a convenient, de novo calibration sample. We demonstrate proof of concept for the approach by integrating three existing data sets containing items related to the extent of problematic alcohol use and associations with deviant peers. We discuss both necessary conditions for the approach to work well and potential strengths and weaknesses of the method compared to other data set integration approaches.

  5. A qualitative study of programs for parents with serious mental illness and their children: building practice-based evidence.

    PubMed

    Nicholson, Joanne; Hinden, Beth R; Biebel, Kathleen; Henry, Alexis D; Katz-Leavy, Judith

    2007-10-01

    The rationale for the development of effective programs for parents with serious mental illness and their children is compelling. Using qualitative methods and a grounded theory approach with data obtained in site visits, seven existing programs for parents with mental illness and their children in the United States are described and compared across core components: target population, theory and assumptions, funding, community and agency contexts, essential services and intervention strategies, moderators, and outcomes. The diversity across programs is strongly complemented by shared characteristics, the identification of which provides the foundation for future testing and the development of an evidence base. Challenges in program implementation and sustainability are identified. Qualitative methods are useful, particularly when studying existing programs, in taking steps toward building the evidence base for effective programs for parents with serious mental illness and their children.

  6. Background Noise Reduction Using Adaptive Noise Cancellation Determined by the Cross-Correlation

    NASA Technical Reports Server (NTRS)

    Spalt, Taylor B.; Brooks, Thomas F.; Fuller, Christopher R.

    2012-01-01

    Background noise due to flow in wind tunnels contaminates desired data by decreasing the Signal-to-Noise Ratio. The use of Adaptive Noise Cancellation to remove background noise at measurement microphones is compromised when the reference sensor measures both background and desired noise. The proposed technique modifies the classical processing configuration based on the cross-correlation between the reference and primary microphones. Background noise attenuation is achieved using a cross-correlation sample width that encompasses only the background noise and a matched delay for the adaptive processing. A present limitation of the method is that a minimum time delay between the background noise and desired signal must exist in order for the correlated parts of the desired signal to be separated from the background noise in the cross-correlation. A simulation yields primary signal recovery which can be predicted from the coherence of the background noise between the channels. Results are compared with two existing methods.
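    An illustrative Python sketch of the two-step idea under a toy signal model (the paper's wind-tunnel configuration is not reproduced): estimate the reference-to-primary delay from the cross-correlation peak, then run a standard LMS canceller on the delay-matched reference.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
background = rng.normal(size=n)                  # reference-channel noise
delay_true = 25
desired = 0.2 * np.sin(0.05 * np.arange(n))      # signal to recover
primary = np.roll(background, delay_true) + desired

# 1) Delay estimate from the peak of the cross-correlation
xc = np.correlate(primary, background, mode='full')
delay = int(np.argmax(xc) - (n - 1))

# 2) LMS adaptive cancellation on the delay-matched reference
ref = np.roll(background, delay)
L, mu = 16, 0.005
w = np.zeros(L)
recovered = np.zeros(n)
for k in range(L, n):
    x = ref[k - L:k][::-1]
    e = primary[k] - w @ x        # error output = desired-signal estimate
    w += 2.0 * mu * e * x         # LMS weight update
    recovered[k] = e
```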

  7. Horsetail matching: a flexible approach to optimization under uncertainty

    NASA Astrophysics Data System (ADS)

    Cook, L. W.; Jarrett, J. P.

    2018-04-01

    It is important to design engineering systems to be robust with respect to uncertainties in the design process. Often, this is done by considering statistical moments, but over-reliance on statistical moments when formulating a robust optimization can produce designs that are stochastically dominated by other feasible designs. This article instead proposes a formulation for optimization under uncertainty that minimizes the difference between a design's cumulative distribution function and a target. A standard target is proposed that produces stochastically non-dominated designs, but the formulation also offers enough flexibility to recover existing approaches for robust optimization. A numerical implementation is developed that employs kernels to give a differentiable objective function. The method is applied to algebraic test problems and a robust transonic airfoil design problem where it is compared to multi-objective, weighted-sum and density matching approaches to robust optimization; several advantages over these existing methods are demonstrated.
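    A minimal Python sketch of the formulation, with a toy quantity of interest and target: the empirical CDF of the design's output is kernel-smoothed so the objective is differentiable, and its squared distance to the target CDF is minimized over the design variable.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
u_samples = rng.normal(size=200)            # samples of the uncertain input

def q(x, u):
    return (x - 1.0) ** 2 + 0.5 * x * u     # toy quantity of interest

def smooth_cdf(vals, t, h=0.1):
    # Gaussian-kernel CDF estimate: smooth, hence differentiable in x
    return norm.cdf((t[:, None] - vals[None, :]) / h).mean(axis=1)

def objective(x):
    vals = q(x[0], u_samples)
    t = np.linspace(vals.min() - 1.0, vals.max() + 1.0, 200)
    target = (t >= 0.0).astype(float)       # toy target CDF: step at q = 0
    return np.sum((smooth_cdf(vals, t) - target) ** 2) * (t[1] - t[0])

res = minimize(objective, x0=[0.0], method='Nelder-Mead')
```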

  8. Structural Optimization of a Knuckle with Consideration of Stiffness and Durability Requirements

    PubMed Central

    Kim, Geun-Yeon

    2014-01-01

    The automobile's knuckle is connected to parts of the steering and suspension systems and, through its attachment to the wheel, adjusts the direction of rotation. This study replaces the existing GCD450 material with Al6082M and proposes a lightweight design of the knuckle, obtained through optimal design techniques, for installation in small cars. Six shape design variables were selected for the optimization of the knuckle, and criteria relevant to stiffness and durability were considered as the design requirements during the optimization process. A metamodel-based optimization method using the kriging interpolation method was applied. The result shows that all constraints for stiffness and durability are satisfied using Al6082M, while the weight of the knuckle is reduced by 60% compared to that of the existing GCD450. PMID:24995359
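    A minimal sketch of metamodel-based optimization with a kriging surrogate, using scikit-learn's Gaussian process regressor; the one-variable "expensive" analysis is a toy stand-in for the stiffness and durability simulations, and all settings are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_analysis(x):
    # Toy stand-in for the stiffness/durability simulations
    return (x - 0.3) ** 2 + 0.1 * np.sin(8.0 * x)

X = np.linspace(0.0, 1.0, 8).reshape(-1, 1)   # sampled design-variable values
y = expensive_analysis(X).ravel()

# Kriging surrogate (Gaussian process with an RBF correlation model)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
gp.fit(X, y)

# Optimize the cheap surrogate instead of the expensive analysis
grid = np.linspace(0.0, 1.0, 500).reshape(-1, 1)
x_best = grid[np.argmin(gp.predict(grid))][0]
```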

  9. A Nonparametric, Multiple Imputation-Based Method for the Retrospective Integration of Data Sets

    PubMed Central

    Carrig, Madeline M.; Manrique-Vallier, Daniel; Ranby, Krista W.; Reiter, Jerome P.; Hoyle, Rick H.

    2015-01-01

    Complex research questions often cannot be addressed adequately with a single data set. One sensible alternative to the high cost and effort associated with the creation of large new data sets is to combine existing data sets containing variables related to the constructs of interest. The goal of the present research was to develop a flexible, broadly applicable approach to the integration of disparate data sets that is based on nonparametric multiple imputation and the collection of data from a convenient, de novo calibration sample. We demonstrate proof of concept for the approach by integrating three existing data sets containing items related to the extent of problematic alcohol use and associations with deviant peers. We discuss both necessary conditions for the approach to work well and potential strengths and weaknesses of the method compared to other data set integration approaches. PMID:26257437

  10. Control system of water flow and casting speed in continuous steel casting

    NASA Astrophysics Data System (ADS)

    Tirian, G. O.; Gheorghiu, C. A.; Hepuţ, T.; Chioncel, C.

    2017-05-01

    This paper presents the results of research based on real data taken from the installation process at Arcelor Mittal Hunedoara. Using Matlab Simulink, an intelligent system was built that takes in process data and adjusts, in real time, the cooling-water flow rate and the casting speed so as to eliminate fissures in the cast material during the secondary cooling of steel. The Matlab Simulink simulation environment allowed qualitative analysis of various real-world situations. Compared with the old approach to the problem of cracks forming in the steel crust during continuous casting, the proposed method brings safety and precision to this complex process, removing doubt about the existence or non-existence of cracks and taking the necessary steps to prevent and correct them.

  11. A new iterative approach for multi-objective fault detection observer design and its application to a hypersonic vehicle

    NASA Astrophysics Data System (ADS)

    Huang, Di; Duan, Zhisheng

    2018-03-01

    This paper addresses multi-objective fault detection observer design problems for a hypersonic vehicle. Because parameter variations, modelling errors and disturbances are inevitable in practical situations, system uncertainty is considered in this study. By fully utilising the orthogonal space information of the output matrix, some new understandings are proposed for the construction of the Lyapunov matrix. Sufficient conditions for the existence of observers that guarantee fault sensitivity and disturbance robustness in the infinite frequency domain are presented. In order to further reduce conservativeness, slack matrices are introduced to fully decouple the observer gain from the Lyapunov matrices in the finite frequency range. Iterative linear matrix inequality algorithms are proposed to obtain the solutions. The simulation examples, which include a Monte Carlo campaign, illustrate that the new methods can effectively reduce the design conservativeness compared with the existing methods.

  12. Research in disaster settings: a systematic qualitative review of ethical guidelines.

    PubMed

    Mezinska, Signe; Kakuk, Péter; Mijaljica, Goran; Waligóra, Marcin; O'Mathúna, Dónal P

    2016-10-21

    Conducting research during or in the aftermath of disasters poses many specific practical and ethical challenges. This is particularly the case with research involving human subjects. The extraordinary circumstances of research conducted in disaster settings require appropriate regulations to ensure the protection of human participants. The goal of this study is to systematically and qualitatively review the existing ethical guidelines for disaster research. We performed a systematic qualitative review of disaster research ethics guidelines to collect and compare existing regulations. Guidelines were identified by a three-tiered search strategy: 1) searching databases (PubMed and Google Scholar), 2) an Internet search (Google), and 3) a search of the references in the documents included from the first two searches. We used the constant comparative method (CCM) to analyze the included guidelines. Fourteen full-text guidelines, covering the period 2000-2014, were included for analysis. Qualitative analysis of the included guidelines revealed two core themes: vulnerability and research ethics committee review. Within each of the two core themes, various categories and subcategories were identified. Some concepts and terms identified in the analyzed guidelines are used in an inconsistent manner and applied in different contexts. Conceptual clarity is needed in this area, as well as empirical evidence to support the statements and requirements included in the analyzed guidelines.

  13. Exploring the Implications of N Measurement and Model Choice on Using Data for Policy and Land Management Decisions

    NASA Astrophysics Data System (ADS)

    Bell, M. D.; Walker, J. T.

    2017-12-01

    Atmospheric deposition of nitrogen compounds is determined using a variety of measurement and modeling methods. These values are then used to calculate fluxes to the ecosystem, which can then be linked to ecological responses. But for these data to be used outside of the system in which they were developed, it is necessary to understand how the deposition estimates relate to one another. Therefore, we first identified sources of "bulk" deposition data and compared methods, reliability of data, and consistency of results to one another. Then we looked at the variation within photochemical models that are used by Federal Agencies to evaluate national trends. Finally, we identified some best practices for researchers to consider if their assessment is intended for use at broader scales. Empirical measurements used in this assessment include passive collection of atmospheric molecules, throughfall deposition of precipitation, snowpack measurements, and biomonitors such as lichen. The three most common photochemical models used to model deposition within the United States are CMAQ, CAMx, and TDep (which uses empirical data to refine modeled values). These models all use meteorological and emission data to estimate deposition at local, regional, or national scales. We identified the range of uncertainty that exists within the types of deposition measurements and how these vary over space and time. Uncertainty is assessed by comparing deposition estimates from differing collection methods and comparing modeled estimates to empirical deposition data. Each collection method has benefits and drawbacks that need to be taken into account if the results are to be extended outside of the research area. Comparing field-measured values to modeled values highlights the importance of each in the greater goals of understanding current conditions and trends within deposition patterns in the US. While models work well at larger scales, they cannot replicate the local heterogeneity that exists at a site. Often, each researcher has a favorite method of analysis, but if the data cannot be related to other efforts, it becomes harder to apply them to broader policy considerations.

  14. A comparative analysis of spectral exponent estimation techniques for 1/f^β processes with applications to the analysis of stride interval time series

    PubMed Central

    Schaefer, Alexander; Brach, Jennifer S.; Perera, Subashan; Sejdić, Ervin

    2013-01-01

    Background: The time evolution and complex interactions of many nonlinear systems, such as in the human body, result in fractal types of parameter outcomes that exhibit self-similarity over long time scales, described by a power law in the frequency spectrum S(f) = 1/f^β. The scaling exponent β is thus often interpreted as a "biomarker" of relative health and decline. New Method: This paper presents a thorough comparative numerical analysis of fractal characterization techniques with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis complements previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. Results: The results of our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Comparison with Existing Methods: Class-dependent methods proved to be unsuitable for physiological time series. Detrended fluctuation analysis, the most prevalent method in the literature, exhibited large estimation variances. Conclusions: The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series. PMID:24200509
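    A hedged Python sketch of the averaged-wavelet-coefficient idea, using a Haar cascade on a synthetic 1/f^β signal; the estimator details in the paper may differ, and the slope-to-β relation below is one common convention among several.

```python
import numpy as np

# Synthetic 1/f^beta signal: white spectrum shaped in the frequency domain
rng = np.random.default_rng(0)
n, beta = 2 ** 14, 1.0
freqs = np.fft.rfftfreq(n)
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** (-beta / 2.0)             # PSD ~ 1/f^beta
x = np.fft.irfft(amp * np.exp(2j * np.pi * rng.random(freqs.size)), n)

# Haar wavelet cascade: mean |detail coefficient| per octave
octaves, means = [], []
approx = x.copy()
for j in range(1, 11):
    detail = (approx[0::2] - approx[1::2]) / np.sqrt(2.0)
    approx = (approx[0::2] + approx[1::2]) / np.sqrt(2.0)
    octaves.append(j)
    means.append(np.abs(detail).mean())

# log2(mean |d_j|) grows roughly linearly in the octave j, slope ~ beta/2
slope = np.polyfit(octaves, np.log2(means), 1)[0]
beta_hat = 2.0 * slope                           # convention-dependent
```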

  15. Electronic characterization of lithographically patterned microcoils for high sensitivity NMR detection.

    PubMed

    Demas, Vasiliki; Bernhardt, Anthony; Malba, Vince; Adams, Kristl L; Evans, Lee; Harvey, Christopher; Maxwell, Robert S; Herberg, Julie L

    2009-09-01

    Nuclear magnetic resonance (NMR) offers a non-destructive, powerful, structure-specific analytical method for the identification of chemical and biological systems. The use of radio frequency (RF) microcoils has been shown to increase the sensitivity in mass-limited samples. Recent advances in micro-receiver technology have further demonstrated a substantial increase in mass sensitivity [D.L. Olson, T.L. Peck, A.G. Webb, R.L. Magin, J.V. Sweedler, High-resolution microcoil H-1-NMR for mass-limited, nanoliter-volume samples, Science 270 (5244) (1995) 1967-1970]. Lithographic methods for producing solenoid microcoils possess a level of flexibility and reproducibility that exceeds previous production methods, such as hand winding microcoils. This paper presents electrical characterizations of RF microcoils produced by a unique laser lithography system that can pattern three-dimensional surfaces and compares calculated and experimental results to those for wire-wound RF microcoils. We show that existing optimization conditions for RF coil design still hold true for RF microcoils produced by lithography. Current lithographic microcoils show somewhat inferior performance to wire-wound RF microcoils due to limitations in the existing electroplating technique. In principle, however, when the pitch of the RF microcoil is less than 100 μm, lithographic coils should show performance comparable to wire-wound coils. In the case of larger pitch, wire cross sections can be significantly larger and resistances lower than those of microfabricated conductors.

  16. A semiparametric separation curve approach for comparing correlated ROC data from multiple markers

    PubMed Central

    Tang, Liansheng Larry; Zhou, Xiao-Hua

    2012-01-01

    In this article we propose a separation curve method to identify the range of false positive rates for which two ROC curves differ or one ROC curve is superior to the other. Our method is based on a general multivariate ROC curve model, including interaction terms between discrete covariates and false positive rates. It is applicable with most existing ROC curve models. Furthermore, we introduce a semiparametric least squares ROC estimator and apply the estimator to the separation curve method. We derive a sandwich estimator for the covariance matrix of the semiparametric estimator. We illustrate the application of our separation curve method through two real life examples. PMID:23074360
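    A toy Python illustration of the separation-curve idea, not the paper's semiparametric least squares estimator: estimate two empirical ROC curves on simulated marker scores and report the false-positive-rate range over which one curve lies above the other.

```python
import numpy as np

def empirical_roc(neg, pos, fprs):
    # Threshold achieving each target FPR on the non-diseased scores,
    # then the corresponding TPR on the diseased scores
    thresholds = np.quantile(neg, 1.0 - fprs)
    return np.array([(pos > t).mean() for t in thresholds])

rng = np.random.default_rng(0)
neg = rng.normal(0.0, 1.0, 500)                 # non-diseased scores
pos_a = rng.normal(1.2, 1.0, 500)               # marker A, diseased scores
pos_b = rng.normal(0.8, 1.5, 500)               # marker B, diseased scores

fprs = np.linspace(0.01, 0.99, 99)
diff = empirical_roc(neg, pos_a, fprs) - empirical_roc(neg, pos_b, fprs)
superior = fprs[diff > 0]                       # FPR range where A dominates B
```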

  17. Panel cutting method: new approach to generate panels on a hull in Rankine source potential approximation

    NASA Astrophysics Data System (ADS)

    Choi, Hee-Jong; Chun, Ho-Hwan; Park, Il-Ryong; Kim, Jin

    2011-12-01

    In the present study, a new hull panel generation algorithm, namely the panel cutting method, was developed to predict flow phenomena around a ship using the Rankine source potential based panel method, where an iterative method was used to satisfy the nonlinear free surface condition and the trim and sinkage of the ship were taken into account. Numerical computations were performed to investigate the validity of the proposed hull panel generation algorithm for the Series 60 (CB=0.60) hull and the KRISO container ship (KCS), a container ship designed by the Maritime and Ocean Engineering Research Institute (MOERI). The computational results were validated by comparison with existing experimental data.

  18. Gaussian Multiscale Aggregation Applied to Segmentation in Hand Biometrics

    PubMed Central

    de Santos Sierra, Alberto; Ávila, Carmen Sánchez; Casanova, Javier Guerra; del Pozo, Gonzalo Bailador

    2011-01-01

    This paper presents an image segmentation algorithm based on Gaussian multiscale aggregation oriented to hand biometric applications. The method is able to isolate the hand from a wide variety of background textures such as carpets, fabric, glass, grass, soil or stones. The evaluation was carried out by using a publicly available synthetic database with 408,000 hand images in different backgrounds, comparing the performance in terms of accuracy and computational cost to two competitive segmentation methods existing in the literature, namely Lossy Data Compression (LDC) and Normalized Cuts (NCuts). The results highlight that the proposed method outperforms current competitive segmentation methods with regard to computational cost, time performance, accuracy and memory usage. PMID:22247658

  19. Gaussian multiscale aggregation applied to segmentation in hand biometrics.

    PubMed

    de Santos Sierra, Alberto; Avila, Carmen Sánchez; Casanova, Javier Guerra; del Pozo, Gonzalo Bailador

    2011-01-01

    This paper presents an image segmentation algorithm based on Gaussian multiscale aggregation oriented to hand biometric applications. The method is able to isolate the hand from a wide variety of background textures such as carpets, fabric, glass, grass, soil or stones. The evaluation was carried out by using a publicly available synthetic database with 408,000 hand images in different backgrounds, comparing the performance in terms of accuracy and computational cost to two competitive segmentation methods existing in the literature, namely Lossy Data Compression (LDC) and Normalized Cuts (NCuts). The results highlight that the proposed method outperforms current competitive segmentation methods with regard to computational cost, time performance, accuracy and memory usage.

  20. An implicit boundary integral method for computing electric potential of macromolecules in solvent

    NASA Astrophysics Data System (ADS)

    Zhong, Yimin; Ren, Kui; Tsai, Richard

    2018-04-01

    A numerical method using implicit surface representations is proposed to solve the linearized Poisson-Boltzmann equation that arises in mathematical models for the electrostatics of molecules in solvent. The proposed method uses an implicit boundary integral formulation to derive a linear system defined on Cartesian nodes in a narrow band surrounding the closed surface that separates the molecule and the solvent. The needed implicit surface is constructed from the given atomic description of the molecules by a sequence of standard level set algorithms. A fast multipole method is applied to accelerate the solution of the linear system. A few numerical studies involving some standard test cases are presented and compared to other existing results.
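    For context, a standard two-domain statement of the linearized Poisson-Boltzmann model that such solvers target is sketched below; the symbols (interior and solvent permittivities, inverse Debye length, point charges) follow common references and are not notation taken from the paper.

```latex
% Linearized Poisson-Boltzmann model, standard two-domain form.
% Omega_m: molecule interior with point charges q_k at x_k and
% permittivity eps_m; Omega_s: solvent with permittivity eps_s and
% inverse Debye length kappa; Gamma: the molecular surface.
\begin{aligned}
-\epsilon_m \,\Delta \phi(x) &= \sum_k q_k\, \delta(x - x_k),
  && x \in \Omega_m,\\
-\epsilon_s \,\Delta \phi(x) + \epsilon_s \kappa^2 \phi(x) &= 0,
  && x \in \Omega_s,\\
[\phi] = 0, \qquad [\epsilon\, \partial_n \phi] &= 0
  && \text{on } \Gamma.
\end{aligned}
```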
