Sample records for likelihood ML methods

  1. Nonlinear phase noise tolerance for coherent optical systems using soft-decision-aided ML carrier phase estimation enhanced with constellation partitioning

    NASA Astrophysics Data System (ADS)

    Li, Yan; Wu, Mingwei; Du, Xinwei; Xu, Zhuoran; Gurusamy, Mohan; Yu, Changyuan; Kam, Pooi-Yuen

    2018-02-01

A novel soft-decision-aided maximum likelihood (SDA-ML) carrier phase estimation method and its simplified version, the decision-aided, soft-decision-aided maximum likelihood (DA-SDA-ML) method, are tested in a nonlinear phase-noise-dominant channel. The numerical performance results show that both the SDA-ML and DA-SDA-ML methods outperform the conventional DA-ML method in systems with constant-amplitude modulation formats. In addition, modified algorithms based on constellation partitioning are proposed. With partitioning, the modified SDA-ML and DA-SDA-ML methods are shown to be useful for compensating nonlinear phase noise in multi-level modulation systems.

  2. Estimating the variance for heterogeneity in arm-based network meta-analysis.

    PubMed

    Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R

    2018-04-19

Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and two real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended.
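
    A toy numerical sketch of the bias at issue here (my construction, not the authors' network meta-analysis models): full ML divides the residual sum of squares by n, ignoring the degrees of freedom spent on fixed effects, while a REML-style correction divides by n - 1.

```python
# Toy illustration only: ML vs REML-style variance estimation for an
# i.i.d. normal sample with unknown mean (one fixed effect).
import numpy as np

rng = np.random.default_rng(0)
n, sigma2, trials = 10, 4.0, 20000
ml_est, reml_est = [], []
for _ in range(trials):
    y = rng.normal(0.0, np.sqrt(sigma2), size=n)
    resid = y - y.mean()                          # mean estimated from data
    ml_est.append(np.sum(resid**2) / n)           # ML: biased downward
    reml_est.append(np.sum(resid**2) / (n - 1))   # REML-like: unbiased

print(f"true sigma^2       : {sigma2}")
print(f"mean ML estimate   : {np.mean(ml_est):.3f}")    # ~ 3.6 = (n-1)/n * 4
print(f"mean REML estimate : {np.mean(reml_est):.3f}")  # ~ 4.0
```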

  3. New algorithms and methods to estimate maximum-likelihood phylogenies: assessing the performance of PhyML 3.0.

    PubMed

    Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier

    2010-05-01

PhyML is a phylogeny software package based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.
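
    For readers who want the ML principle behind PhyML in its smallest form, here is the closed-form ML branch-length estimate for two aligned sequences under the Jukes-Cantor (JC69) model; this is a textbook special case, not PhyML's search algorithm.

```python
# ML distance between two aligned sequences under JC69: the observed
# proportion of differing sites p has ML-inverted distance
# d = -(3/4) * ln(1 - 4p/3) expected substitutions per site.
import math

def jc69_ml_distance(seq1: str, seq2: str) -> float:
    """ML estimate of substitutions/site between two aligned sequences."""
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != '-' and b != '-']
    p = sum(a != b for a, b in pairs) / len(pairs)   # observed difference
    if p >= 0.75:
        return float('inf')          # saturation: the ML estimate diverges
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

print(jc69_ml_distance("ACGTACGTAC", "ACGTACGTTC"))  # p = 0.1 -> d ~ 0.107
```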

  4. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions

    PubMed Central

    Barrett, Harrison H.; Dainty, Christopher; Lara, David

    2008-01-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255

  5. Maximum Likelihood Shift Estimation Using High Resolution Polarimetric SAR Clutter Model

    NASA Astrophysics Data System (ADS)

    Harant, Olivier; Bombrun, Lionel; Vasile, Gabriel; Ferro-Famil, Laurent; Gay, Michel

    2011-03-01

This paper deals with a Maximum Likelihood (ML) shift estimation method in the context of High Resolution (HR) Polarimetric SAR (PolSAR) clutter. Texture modeling is presented, and the generalized ML texture tracking method is extended to the merging of various sensors. Some results on displacement estimation on the Argentiere glacier in the Mont Blanc massif using dual-pol TerraSAR-X (TSX) and quad-pol RADARSAT-2 (RS2) sensors are finally discussed.

  6. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures

    PubMed Central

    Theobald, Douglas L.; Wuttke, Deborah S.

    2008-01-01

THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. PMID:16777907

  7. Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods

    ERIC Educational Resources Information Center

    Zhong, Xiaoling; Yuan, Ke-Hai

    2011-01-01

In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from the normal distribution. Outlying cases or nonnormally distributed data,…

  8. Five Methods for Estimating Angoff Cut Scores with IRT

    ERIC Educational Resources Information Center

    Wyse, Adam E.

    2017-01-01

    This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a priori (EAP), modal a priori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…

  9. Estimation of Dynamic Discrete Choice Models by Maximum Likelihood and the Simulated Method of Moments

    PubMed Central

    Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano

    2015-01-01

    We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926
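
    The mechanics of the two estimators can be contrasted on a deliberately trivial model (a normal mean; nothing like the paper's dynamic structural model). The SMM step below also shows the common-random-numbers trick of holding simulation draws fixed across objective evaluations.

```python
# Toy ML vs simulated method of moments (SMM) for a normal mean.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(8)
data = rng.normal(1.5, 1.0, size=500)

# ML: for a normal with known variance, the ML estimate is the sample mean.
theta_ml = data.mean()

# SMM: choose theta so that moments of data simulated at theta match the
# observed moments; the shocks are held fixed across evaluations.
shocks = rng.normal(0.0, 1.0, size=5000)          # common random numbers
def smm_objective(theta):
    sim = theta + shocks                          # simulate the model
    return (sim.mean() - data.mean())**2
theta_smm = minimize_scalar(smm_objective, bounds=(-5, 5), method="bounded").x

print(f"ML {theta_ml:.3f}  SMM {theta_smm:.3f}")  # both near 1.5
```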

  10. Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures

    ERIC Educational Resources Information Center

    Jeon, Minjeong; Rabe-Hesketh, Sophia

    2012-01-01

    In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…
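
    The core trick can be shown generically (on a plain normal model, not the GLMM-with-factor-structure class the authors treat): hold one parameter fixed at a known constant, maximize the likelihood over the remaining parameters with standard tools, and scan the fixed parameter over a grid.

```python
# Generic profile-likelihood sketch: profile out sigma^2 for each fixed mu.
import numpy as np

rng = np.random.default_rng(10)
y = rng.normal(2.0, 1.5, size=200)

def profile_loglike(mu, y):
    # for fixed mu, the ML estimate of sigma^2 has a closed form
    s2_hat = np.mean((y - mu)**2)
    n = len(y)
    return -0.5 * n * (np.log(2 * np.pi * s2_hat) + 1.0)

grid = np.linspace(1.5, 2.5, 101)
prof = [profile_loglike(mu, y) for mu in grid]
print("profile-ML estimate of mu:", grid[int(np.argmax(prof))])  # ~ sample mean
```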

  11. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.

    PubMed

    Theobald, Douglas L; Wuttke, Deborah S

    2006-09-01

    THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.

  12. Anatomically-Aided PET Reconstruction Using the Kernel Method

    PubMed Central

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-01-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm. PMID:27541810
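
    For orientation, the conventional ML expectation-maximization (MLEM) baseline that the kernel method is compared against can be sketched in a few lines; the system matrix and counts below are synthetic stand-ins, and the kernel variant would simply reparameterize the image as x = K @ alpha.

```python
# Generic Poisson MLEM sketch (illustrative stand-in, not the paper's code).
import numpy as np

rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, size=(50, 20))      # toy system matrix (hypothetical)
x_true = rng.uniform(1.0, 5.0, size=20)
y = rng.poisson(A @ x_true)                   # Poisson-distributed counts

x = np.ones(20)                               # nonnegative initialization
sens = A.T @ np.ones(50)                      # sensitivity image A^T 1
for _ in range(200):                          # MLEM multiplicative update
    x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens

print(np.round(x[:5], 2), np.round(x_true[:5], 2))  # rough recovery
```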

  13. Anatomically-aided PET reconstruction using the kernel method.

    PubMed

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  14. Anatomically-aided PET reconstruction using the kernel method

    NASA Astrophysics Data System (ADS)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  15. Reliable and More Powerful Methods for Power Analysis in Structural Equation Modeling

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Zhang, Zhiyong; Zhao, Yanyun

    2017-01-01

The normal-distribution-based likelihood ratio statistic T_ml = nF_ml is widely used for power analysis in structural equation modeling (SEM). In such an analysis, power and sample size are computed by assuming that T_ml follows a central chi-square distribution under H_0 and a noncentral chi-square…

  16. Identification of multiple leaks in pipeline: Linearized model, maximum likelihood, and super-resolution localization

    NASA Astrophysics Data System (ADS)

    Wang, Xun; Ghidaoui, Mohamed S.

    2018-07-01

This paper considers the problem of identifying multiple leaks in a water-filled pipeline based on inverse transient wave theory. The analytical solution to this problem involves nonlinear interaction terms between the various leaks. This paper shows analytically and numerically that these nonlinear terms are of the order of the leak sizes to the power two and are thus negligible. As a result of this simplification, a maximum likelihood (ML) scheme that identifies leak locations and leak sizes separately is formulated and tested. It is found that the ML estimation scheme is highly efficient and robust with respect to noise. In addition, the ML method is a super-resolution leak localization scheme because its resolvable leak distance (approximately 0.15λmin, where λmin is the minimum wavelength) is below the Nyquist-Shannon sampling theorem limit (0.5λmin). Moreover, the Cramér-Rao lower bound (CRLB) is derived and used to show the efficiency of the ML scheme estimates. The variance of the ML estimator approximates the CRLB, proving that the ML scheme belongs to the class of best unbiased estimators among leak localization methods.
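
    The efficiency claim, that the ML estimator's variance approaches the Cramér-Rao lower bound, can be checked numerically on the simplest possible case (a Gaussian location parameter, not the pipeline transient model), where the ML estimator is the sample mean and the CRLB is sigma^2/n.

```python
# Monte Carlo check that the ML estimator attains the CRLB for a Gaussian mean.
import numpy as np

rng = np.random.default_rng(2)
theta, sigma, n, trials = 3.0, 2.0, 25, 50000
estimates = rng.normal(theta, sigma, size=(trials, n)).mean(axis=1)

print(f"empirical var(ML) : {estimates.var():.4f}")
print(f"CRLB sigma^2/n    : {sigma**2 / n:.4f}")   # 0.16
```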

  17. Less-Complex Method of Classifying MPSK

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2006-01-01

An alternative to an optimal method of automated classification of signals modulated with M-ary phase-shift-keying (M-ary PSK or MPSK) has been derived. The alternative method is approximate, but it offers nearly optimal performance and entails much less complexity, which translates to much less computation time. Modulation classification is becoming increasingly important in radio-communication systems that utilize multiple data modulation schemes and include software-defined or software-controlled receivers. Such a receiver may "know" little a priori about an incoming signal but may be required to correctly classify its data rate, modulation type, and forward error-correction code before properly configuring itself to acquire and track the symbol timing, carrier frequency, and phase, and ultimately produce decoded bits. Modulation classification has long been an important component of military interception of initially unknown radio signals transmitted by adversaries. Modulation classification may also be useful for enabling cellular telephones to automatically recognize different signal types and configure themselves accordingly. The concept of modulation classification as outlined in the preceding paragraph is quite general. However, at the present early stage of development, and for the purpose of describing the present alternative method, the term "modulation classification" or simply "classification" signifies, more specifically, a distinction between M-ary and M'-ary PSK, where M and M' represent two different integer multiples of 2. Both the prior optimal method and the present alternative method require the acquisition of magnitude and phase values of a number (N) of consecutive baseband samples of the incoming signal + noise. The prior optimal method is based on a maximum-likelihood (ML) classification rule that requires a calculation of likelihood functions for the M and M' hypotheses: Each likelihood function is an integral, over a full cycle of carrier phase, of a complicated sum of functions of the baseband sample values, the carrier phase, the carrier-signal and noise magnitudes, and M or M'. Then the likelihood ratio, defined as the ratio between the likelihood functions, is computed, leading to the choice of whichever hypothesis, M or M', is more likely. In the alternative method, the integral in each likelihood function is approximated by a sum over values of the integrand sampled at a number, L, of equally spaced values of carrier phase. Used in this way, L is a parameter that can be adjusted to trade computational complexity against the probability of misclassification. In the limit as L approaches infinity, one obtains the integral form of the likelihood function and thus recovers the ML classification. The present approximate method has been tested in comparison with the ML method by means of computational simulations. The results of the simulations have shown that the performance (as quantified by probability of misclassification) of the approximate method is nearly indistinguishable from that of the ML method (see figure).
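
    A minimal sketch of the phase-grid approximation described above, under an assumed unit-amplitude PSK signal model with known per-quadrature noise variance (both assumptions of this sketch, not of the NTRS article): the integral over carrier phase is replaced by a sum over L equally spaced phase samples, and the hypothesis with the larger approximate log-likelihood is chosen.

```python
# Phase-averaged ML classification of M-PSK vs M'-PSK on simulated data.
import numpy as np

rng = np.random.default_rng(3)

def log_likelihood(r, M, sigma2, L=16):
    """Approximate log p(r | M-PSK): average over L carrier-phase samples."""
    per_phase = []
    for theta in 2 * np.pi * np.arange(L) / L:
        const = np.exp(1j * (2 * np.pi * np.arange(M) / M + theta))
        d2 = np.abs(r[:, None] - const[None, :])**2      # sample-to-symbol
        # average over the M equally likely symbols, then sum over samples
        per_phase.append(np.log(np.mean(np.exp(-d2 / (2 * sigma2)), axis=1)).sum())
    m = max(per_phase)                                   # log-mean-exp over phases
    return m + np.log(np.mean(np.exp(np.array(per_phase) - m)))

# Simulate N QPSK (M = 4) symbols with a random carrier phase, classify 4 vs 8.
N, sigma2 = 200, 0.05
theta0 = rng.uniform(0, 2 * np.pi)
syms = np.exp(1j * (2 * np.pi * rng.integers(0, 4, N) / 4 + theta0))
noise = np.sqrt(sigma2) * (rng.normal(size=N) + 1j * rng.normal(size=N))
r = syms + noise
print("decided M =", max((4, 8), key=lambda M: log_likelihood(r, M, sigma2)))
```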

  18. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting

    PubMed Central

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.

    2017-01-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. PMID:26915119

  19. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    PubMed

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.

  20. Maximum-likelihood estimation of parameterized wavefronts from multifocal data

    PubMed Central

    Sakamoto, Julia A.; Barrett, Harrison H.

    2012-01-01

    A method for determining the pupil phase distribution of an optical system is demonstrated. Coefficients in a wavefront expansion were estimated using likelihood methods, where the data consisted of multiple irradiance patterns near focus. Proof-of-principle results were obtained in both simulation and experiment. Large-aberration wavefronts were handled in the numerical study. Experimentally, we discuss the handling of nuisance parameters. Fisher information matrices, Cramér-Rao bounds, and likelihood surfaces are examined. ML estimates were obtained by simulated annealing to deal with numerous local extrema in the likelihood function. Rapid processing techniques were employed to reduce the computational time. PMID:22772282
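
    Simulated annealing for a multimodal likelihood can be reduced to a few lines; the one-parameter surface and the cooling schedule below are invented for illustration and have nothing to do with the wavefront model above.

```python
# Metropolis-style simulated annealing to maximize a multimodal log-likelihood.
import numpy as np

rng = np.random.default_rng(4)

def loglike(x):
    # toy surface: global maximum near x = 2, local maximum near x = -2
    return -0.5 * (x - 2.0)**2 + 2.0 * np.cos(3.0 * x)

x = -3.0                                   # start near the wrong mode
best_x, best_ll = x, loglike(x)
for k in range(20000):
    T = 2.0 * 0.9995**k                    # cooling schedule (assumption)
    prop = x + rng.normal(0.0, 0.5)
    if np.log(rng.uniform()) < (loglike(prop) - loglike(x)) / T:
        x = prop                           # accept uphill (and some downhill) moves
    if loglike(x) > best_ll:
        best_x, best_ll = x, loglike(x)

print(f"ML estimate ~ {best_x:.3f}")       # near the global maximum
```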

  21. FPGA Acceleration of the phylogenetic likelihood function for Bayesian MCMC inference methods.

    PubMed

    Zierke, Stephanie; Bakos, Jason D

    2010-04-12

Maximum likelihood (ML)-based phylogenetic inference has become a popular method for estimating the evolutionary relationships among species based on genomic sequence data. This method is used in applications such as RAxML, GARLI, MrBayes, PAML, and PAUP. The Phylogenetic Likelihood Function (PLF) is an important kernel computation for this method. The PLF consists of a loop with no conditional behavior or dependencies between iterations. As such it contains a high potential for exploiting parallelism using micro-architectural techniques. In this paper, we describe a technique for mapping the PLF and supporting logic onto a Field Programmable Gate Array (FPGA)-based co-processor. By leveraging the FPGA's on-chip DSP modules and the high-bandwidth local memory attached to the FPGA, the resultant co-processor can accelerate ML-based methods and outperform state-of-the-art multi-core processors. We use the MrBayes 3 tool as a framework for designing our co-processor. For large datasets, we estimate that our accelerated MrBayes, if run on a current-generation FPGA, achieves a 10x speedup relative to software running on a state-of-the-art server-class microprocessor. The FPGA-based implementation achieves its performance by deeply pipelining the likelihood computations, performing multiple floating-point operations in parallel, and through a natural log approximation that is chosen specifically to leverage a deeply pipelined custom architecture. Heterogeneous computing, which combines general-purpose processors with special-purpose co-processors such as FPGAs and GPUs, is a promising approach for high-performance phylogeny inference as shown by the growing body of literature in this field. FPGAs in particular are well-suited for this task because of their low power consumption as compared to many-core processors and Graphics Processor Units (GPUs).
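
    The loop being accelerated is Felsenstein's pruning recursion. A pure-software, single-site version for a made-up three-taxon tree under JC69 (far simpler than what MrBayes evaluates, but the same kernel structure) looks like this:

```python
# One-site phylogenetic likelihood function via Felsenstein pruning (JC69).
import numpy as np

BASES = "ACGT"

def jc69_P(t):
    """JC69 transition-probability matrix for branch length t."""
    same = 0.25 + 0.75 * np.exp(-4.0 * t / 3.0)
    diff = 0.25 - 0.25 * np.exp(-4.0 * t / 3.0)
    return np.where(np.eye(4, dtype=bool), same, diff)

def tip(base):
    v = np.zeros(4); v[BASES.index(base)] = 1.0
    return v

# Tree ((A:0.1, C:0.2):0.05, G:0.3); prune from the leaves to the root.
L_left = (jc69_P(0.1) @ tip("A")) * (jc69_P(0.2) @ tip("C"))
L_root = (jc69_P(0.05) @ L_left) * (jc69_P(0.3) @ tip("G"))
site_likelihood = 0.25 * L_root.sum()      # uniform root base frequencies
print(site_likelihood)
```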

  22. Using an EM Covariance Matrix to Estimate Structural Equation Models with Missing Data: Choosing an Adjusted Sample Size to Improve the Accuracy of Inferences

    ERIC Educational Resources Information Center

    Enders, Craig K.; Peugh, James L.

    2004-01-01

    Two methods, direct maximum likelihood (ML) and the expectation maximization (EM) algorithm, can be used to obtain ML parameter estimates for structural equation models with missing data (MD). Although the 2 methods frequently produce identical parameter estimates, it may be easier to satisfy missing at random assumptions using EM. However, no…

  23. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance Structure Models to Block-Toeplitz Matrices Representing Single-Subject Multivariate Time-Series.

    ERIC Educational Resources Information Center

    Molenaar, Peter C. M.; Nesselroade, John R.

    1998-01-01

    Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…

  24. Efficient computation of the phylogenetic likelihood function on multi-gene alignments and multi-core architectures.

    PubMed

    Stamatakis, Alexandros; Ott, Michael

    2008-12-27

The continuous accumulation of sequence data, for example, due to novel wet-laboratory techniques such as pyrosequencing, coupled with the increasing popularity of multi-gene phylogenies and emerging multi-core processor architectures that face problems of cache congestion, poses new challenges with respect to the efficient computation of the phylogenetic maximum-likelihood (ML) function. Here, we propose two approaches that can significantly speed up likelihood computations that typically represent over 95 per cent of the computational effort conducted by current ML or Bayesian inference programs. Initially, we present a method and an appropriate data structure to efficiently compute the likelihood score on 'gappy' multi-gene alignments. By 'gappy' we denote sampling-induced gaps owing to missing sequences in individual genes (partitions), i.e. not real alignment gaps. A first proof-of-concept implementation in RAxML indicates that this approach can accelerate inferences on large and gappy alignments by approximately one order of magnitude. Moreover, we present insights and initial performance results on multi-core architectures obtained during the transition from an OpenMP-based to a Pthreads-based fine-grained parallelization of the ML function.

  25. Characterization of Chronic Aortic and Mitral Regurgitation Undergoing Valve Surgery Using Cardiovascular Magnetic Resonance.

    PubMed

    Polte, Christian L; Gao, Sinsia A; Johnsson, Åse A; Lagerstrand, Kerstin M; Bech-Hanssen, Odd

    2017-06-15

Grading of chronic aortic regurgitation (AR) and mitral regurgitation (MR) by cardiovascular magnetic resonance (CMR) is currently based on thresholds, which are neither modality nor quantification method specific. Accordingly, this study sought to identify CMR-specific and quantification method-specific thresholds for regurgitant volumes (RVols), RVol indexes, and regurgitant fractions (RFs), which denote severe chronic AR or MR with an indication for surgery. The study comprised patients with moderate and severe chronic AR (n = 38) and MR (n = 40). Echocardiography and CMR was performed at baseline and in all operated AR/MR patients (n = 23/25) 10 ± 1 months after surgery. CMR quantification of AR: direct (aortic flow) and indirect method (left ventricular stroke volume [LVSV] - pulmonary stroke volume [PuSV]); MR: 2 indirect methods (LVSV - aortic forward flow [AoFF]; mitral inflow [MiIF] - AoFF). All operated patients had severe regurgitation and benefited from surgery, indicated by a significant postsurgical reduction in end-diastolic volume index and improvement or relief of symptoms. The discriminatory ability between moderate and severe AR was strong for RVol >40 ml, RVol index >20 ml/m², and RF >30% (direct method) and RVol >62 ml, RVol index >31 ml/m², and RF >36% (LVSV-PuSV) with a negative likelihood ratio ≤ 0.2. In MR, the discriminatory ability was very strong for RVol >64 ml, RVol index >32 ml/m², and RF >41% (LVSV-AoFF) and RVol >40 ml, RVol index >20 ml/m², and RF >30% (MiIF-AoFF) with a negative likelihood ratio < 0.1. In conclusion, CMR grading of chronic AR and MR should be based on modality-specific and quantification method-specific thresholds, as they differ largely from recognized guideline criteria, to assure appropriate clinical decision-making and timing of surgery.

  26. An Iterative Maximum a Posteriori Estimation of Proficiency Level to Detect Multiple Local Likelihood Maxima

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2010-01-01

    In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…

  27. The Performance of ML, GLS, and WLS Estimation in Structural Equation Modeling under Conditions of Misspecification and Nonnormality.

    ERIC Educational Resources Information Center

    Olsson, Ulf Henning; Foss, Tron; Troye, Sigurd V.; Howell, Roy D.

    2000-01-01

    Used simulation to demonstrate how the choice of estimation method affects indexes of fit and parameter bias for different sample sizes when nested models vary in terms of specification error and the data demonstrate different levels of kurtosis. Discusses results for maximum likelihood (ML), generalized least squares (GLS), and weighted least…

  28. Addressing Data Analysis Challenges in Gravitational Wave Searches Using the Particle Swarm Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Weerathunga, Thilina Shihan

    2017-08-01

Gravitational waves are a fundamental prediction of Einstein's General Theory of Relativity. The first experimental proof of their existence was provided by the Nobel Prize winning discovery by Taylor and Hulse of orbital decay in a binary pulsar system. The first detection of gravitational waves incident on earth from an astrophysical source was announced in 2016 by the LIGO Scientific Collaboration, launching the new era of gravitational wave (GW) astronomy. The signal detected was from the merger of two black holes, which is an example of sources called Compact Binary Coalescences (CBCs). Data analysis strategies used in the search for CBC signals are derivatives of the Maximum-Likelihood (ML) method. The ML method applied to data from a network of geographically distributed GW detectors--called fully coherent network analysis--is currently the best approach for estimating source location and GW polarization waveforms. However, in the case of CBCs, especially for lower-mass systems (of order 1 solar mass) such as double neutron star binaries, fully coherent network analysis is computationally expensive. The ML method requires locating the global maximum of the likelihood function over a nine-dimensional parameter space, where the computation of the likelihood at each point requires correlations involving O(10^4) to O(10^6) samples between the data and the corresponding candidate signal waveform template. Approximations, such as semi-coherent coincidence searches, are currently used to circumvent the computational barrier but incur a concomitant loss in sensitivity. We explored the effectiveness of Particle Swarm Optimization (PSO), a well-known algorithm in the field of swarm intelligence, in addressing the fully coherent network analysis problem. As an example, we used a four-detector network consisting of the two LIGO detectors at Hanford and Livingston, Virgo and Kagra, all having initial LIGO noise power spectral densities, and show that PSO can locate the global maximum with less than 240,000 likelihood evaluations for a component mass range of 1.0 to 10.0 solar masses at a realistic coherent network signal-to-noise ratio of 9.0. Our results show that PSO can successfully deliver a fully-coherent all-sky search with < 1/10 the number of likelihood evaluations needed for a grid-based search. Used as a follow-up step, the savings in the number of likelihood evaluations may also reduce latency in obtaining ML estimates of source parameters in semi-coherent searches.
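
    The particle/velocity updates that PSO substitutes for a grid scan fit in a short sketch; the 2-D toy likelihood surface and the inertia/acceleration constants below are assumptions for illustration, not the 9-dimensional CBC search.

```python
# Schematic particle swarm optimization maximizing a toy log-likelihood.
import numpy as np

rng = np.random.default_rng(5)

def loglike(p):                                   # toy surface, peak at (1, -2)
    return -((p[..., 0] - 1.0)**2 + (p[..., 1] + 2.0)**2)

n, iters, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5
x = rng.uniform(-10, 10, size=(n, 2))             # particle positions
v = np.zeros_like(x)                              # particle velocities
pbest, pbest_val = x.copy(), loglike(x)           # personal bests
gbest = pbest[np.argmax(pbest_val)]               # global best

for _ in range(iters):
    r1, r2 = rng.uniform(size=(2, n, 1))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    val = loglike(x)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[np.argmax(pbest_val)]

print(gbest)      # converges near (1, -2) using ~n*iters likelihood evaluations
```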

  29. Is the ML Chi-Square Ever Robust to Nonnormality? A Cautionary Note with Missing Data

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2008-01-01

    Normal theory maximum likelihood (ML) is by far the most popular estimation and testing method used in structural equation modeling (SEM), and it is the default in most SEM programs. Even though this approach assumes multivariate normality of the data, its use can be justified on the grounds that it is fairly robust to the violations of the…

  30. Evaluating Fast Maximum Likelihood-Based Phylogenetic Programs Using Empirical Phylogenomic Data Sets

    PubMed Central

    Zhou, Xiaofan; Shen, Xing-Xing; Hittinger, Chris Todd

    2018-01-01

The sizes of the data matrices assembled to resolve branches of the tree of life have increased dramatically, motivating the development of programs for fast, yet accurate, inference. For example, several different fast programs have been developed in the very popular maximum likelihood framework, including RAxML/ExaML, PhyML, IQ-TREE, and FastTree. Although these programs are widely used, a systematic evaluation and comparison of their performance using empirical genome-scale data matrices has so far been lacking. To address this question, we evaluated these four programs on 19 empirical phylogenomic data sets with hundreds to thousands of genes and up to 200 taxa with respect to likelihood maximization, tree topology, and computational speed. For single-gene tree inference, we found that the more exhaustive and slower strategies (ten searches per alignment) outperformed faster strategies (one tree search per alignment) using RAxML, PhyML, or IQ-TREE. Interestingly, single-gene trees inferred by the three programs yielded comparable coalescent-based species tree estimations. For concatenation-based species tree inference, IQ-TREE consistently achieved the best-observed likelihoods for all data sets, and RAxML/ExaML was a close second. In contrast, PhyML often failed to complete concatenation-based analyses, whereas FastTree was the fastest but generated lower likelihood values and more dissimilar tree topologies in both types of analyses. Finally, data matrix properties, such as the number of taxa and the strength of phylogenetic signal, sometimes substantially influenced the programs’ relative performance. Our results provide real-world gene and species tree phylogenetic inference benchmarks to inform the design and execution of large-scale phylogenomic data analyses. PMID:29177474

  31. Inverse problems-based maximum likelihood estimation of ground reflectivity for selected regions of interest from stripmap SAR data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    West, R. Derek; Gunther, Jacob H.; Moon, Todd K.

In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts to a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramér-Rao lower bound (CRLB) for the ML ground reflectivity estimates. We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid. We demonstrate the ML image formation and the CRLB for synthetically generated data.

  32. Inverse problems-based maximum likelihood estimation of ground reflectivity for selected regions of interest from stripmap SAR data

    DOE PAGES

    West, R. Derek; Gunther, Jacob H.; Moon, Todd K.

    2016-12-01

In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts to a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramér-Rao lower bound (CRLB) for the ML ground reflectivity estimates. We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid. We demonstrate the ML image formation and the CRLB for synthetically generated data.

  33. Estimation of channel parameters and background irradiance for free-space optical link.

    PubMed

    Khatoon, Afsana; Cowley, William G; Letzepis, Nick; Giggenbach, Dirk

    2013-05-10

Free-space optical communication can experience severe fading due to optical scintillation in long-range links. Channel estimation is also corrupted by background and electrical noise. Accurate estimation of channel parameters and scintillation index (SI) depends on perfect removal of background irradiance. In this paper, we propose three different methods, the minimum-value (MV), mean-power (MP), and maximum-likelihood (ML) based methods, to remove the background irradiance from channel samples. The MV and MP methods do not require knowledge of the scintillation distribution. While the ML-based method assumes gamma-gamma scintillation, it can be easily modified to accommodate other distributions. Each estimator's performance is compared using simulation data as well as experimental measurements, and the estimators are evaluated from low- to high-SI regimes. The MV and MP methods have much lower complexity than the ML-based method. However, the ML-based method shows better SI and background-irradiance estimation performance.
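
    One plausible reading of the minimum-value (MV) idea, on invented data (the paper's exact estimators and noise model are not reproduced here): with received samples r = I + b and a fading signal I that occasionally drops near zero, min(r) approximates the constant background b, after which the scintillation index follows from the corrected samples.

```python
# Illustrative MV-style background removal and scintillation-index estimate.
import numpy as np

rng = np.random.default_rng(9)
I = rng.gamma(shape=4.0, scale=0.25, size=20000)   # toy fading, mean ~ 1
b = 0.3                                            # background irradiance
r = I + b                                          # received samples

b_hat = r.min()                                    # MV background estimate
I_hat = r - b_hat                                  # corrected samples
si = I_hat.var() / I_hat.mean()**2                 # SI = var(I) / mean(I)^2
print(f"b_hat {b_hat:.3f}  SI {si:.3f}  true SI {I.var() / I.mean()**2:.3f}")
```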

  34. Maximum likelihood orientation estimation of 1-D patterns in Laguerre-Gauss subspaces.

    PubMed

    Di Claudio, Elio D; Jacovitti, Giovanni; Laurenti, Alberto

    2010-05-01

A method for measuring the orientation of linear (1-D) patterns, based on a local expansion with Laguerre-Gauss circular harmonic (LG-CH) functions, is presented. It relies on the property that the polar separable LG-CH functions span the same space as the 2-D Cartesian separable Hermite-Gauss (2-D HG) functions. Exploiting the simple steerability of the LG-CH functions and the peculiar block-linear relationship among the two expansion coefficient sets, maximum likelihood (ML) estimates of orientation and cross section parameters of 1-D patterns are obtained by projecting them into a proper subspace of the 2-D HG family. It is shown in this paper that the conditional ML solution, derived by elimination of the cross section parameters, surprisingly yields the same asymptotic accuracy as the ML solution for known cross section parameters. The accuracy of the conditional ML estimator is compared to that of state-of-the-art solutions on a theoretical basis and via simulation trials. A thorough proof of the key relationship between the LG-CH and the 2-D HG expansions is also provided.

  35. Ancestral sequence reconstruction in primate mitochondrial DNA: compositional bias and effect on functional inference.

    PubMed

    Krishnan, Neeraja M; Seligmann, Hervé; Stewart, Caro-Beth; De Koning, A P Jason; Pollock, David D

    2004-10-01

    Reconstruction of ancestral DNA and amino acid sequences is an important means of inferring information about past evolutionary events. Such reconstructions suggest changes in molecular function and evolutionary processes over the course of evolution and are used to infer adaptation and convergence. Maximum likelihood (ML) is generally thought to provide relatively accurate reconstructed sequences compared to parsimony, but both methods lead to the inference of multiple directional changes in nucleotide frequencies in primate mitochondrial DNA (mtDNA). To better understand this surprising result, as well as to better understand how parsimony and ML differ, we constructed a series of computationally simple "conditional pathway" methods that differed in the number of substitutions allowed per site along each branch, and we also evaluated the entire Bayesian posterior frequency distribution of reconstructed ancestral states. We analyzed primate mitochondrial cytochrome b (Cyt-b) and cytochrome oxidase subunit I (COI) genes and found that ML reconstructs ancestral frequencies that are often more different from tip sequences than are parsimony reconstructions. In contrast, frequency reconstructions based on the posterior ensemble more closely resemble extant nucleotide frequencies. Simulations indicate that these differences in ancestral sequence inference are probably due to deterministic bias caused by high uncertainty in the optimization-based ancestral reconstruction methods (parsimony, ML, Bayesian maximum a posteriori). In contrast, ancestral nucleotide frequencies based on an average of the Bayesian set of credible ancestral sequences are much less biased. The methods involving simpler conditional pathway calculations have slightly reduced likelihood values compared to full likelihood calculations, but they can provide fairly unbiased nucleotide reconstructions and may be useful in more complex phylogenetic analyses than considered here due to their speed and flexibility. To determine whether biased reconstructions using optimization methods might affect inferences of functional properties, ancestral primate mitochondrial tRNA sequences were inferred and helix-forming propensities for conserved pairs were evaluated in silico. For ambiguously reconstructed nucleotides at sites with high base composition variability, ancestral tRNA sequences from Bayesian analyses were more compatible with canonical base pairing than were those inferred by other methods. Thus, nucleotide bias in reconstructed sequences apparently can lead to serious bias and inaccuracies in functional predictions.

  36. Variational Bayesian Parameter Estimation Techniques for the General Linear Model

    PubMed Central

    Starke, Ludger; Ostwald, Dirk

    2017-01-01

    Variational Bayes (VB), variational maximum likelihood (VML), restricted maximum likelihood (ReML), and maximum likelihood (ML) are cornerstone parametric statistical estimation techniques in the analysis of functional neuroimaging data. However, the theoretical underpinnings of these model parameter estimation techniques are rarely covered in introductory statistical texts. Because of the widespread practical use of VB, VML, ReML, and ML in the neuroimaging community, we reasoned that a theoretical treatment of their relationships and their application in a basic modeling scenario may be helpful for both neuroimaging novices and practitioners alike. In this technical study, we thus revisit the conceptual and formal underpinnings of VB, VML, ReML, and ML and provide a detailed account of their mathematical relationships and implementational details. We further apply VB, VML, ReML, and ML to the general linear model (GLM) with non-spherical error covariance as commonly encountered in the first-level analysis of fMRI data. To this end, we explicitly derive the corresponding free energy objective functions and ensuing iterative algorithms. Finally, in the applied part of our study, we evaluate the parameter and model recovery properties of VB, VML, ReML, and ML, first in an exemplary setting and then in the analysis of experimental fMRI data acquired from a single participant under visual stimulation. PMID:28966572

  37. Improving Depth, Energy and Timing Estimation in PET Detectors with Deconvolution and Maximum Likelihood Pulse Shape Discrimination

    PubMed Central

    Berg, Eric; Roncali, Emilie; Hutchcroft, Will; Qi, Jinyi; Cherry, Simon R.

    2016-01-01

In a scintillation detector, the light generated in the scintillator by a gamma interaction is converted to photoelectrons by a photodetector and produces a time-dependent waveform, the shape of which depends on the scintillator properties and the photodetector response. Several depth-of-interaction (DOI) encoding strategies have been developed that manipulate the scintillator’s temporal response along the crystal length and therefore require pulse shape discrimination techniques to differentiate waveform shapes. In this work, we demonstrate how maximum likelihood (ML) estimation methods can be applied to pulse shape discrimination to better estimate deposited energy, DOI and interaction time (for time-of-flight (TOF) PET) of a gamma ray in a scintillation detector. We developed likelihood models based on either the estimated detection times of individual photoelectrons or the number of photoelectrons in discrete time bins, and applied them to two phosphor-coated crystals (LFS and LYSO) used in a previously developed TOF-DOI detector concept. Compared with conventional analytical methods, ML pulse shape discrimination improved DOI encoding by 27% for both crystals. Using the ML DOI estimate, we were able to counter depth-dependent changes in light collection inherent to long scintillator crystals and recover the energy resolution measured with fixed depth irradiation (~11.5% for both crystals). Lastly, we demonstrated how the Richardson-Lucy algorithm, an iterative, ML-based deconvolution technique, can be applied to the digitized waveforms to deconvolve the photodetector’s single photoelectron response and produce waveforms with a faster rising edge. After deconvolution and applying DOI and time-walk corrections, we demonstrated a 13% improvement in coincidence timing resolution (from 290 to 254 ps) with the LFS crystal and an 8% improvement (323 to 297 ps) with the LYSO crystal. PMID:27295658

  38. Improving Depth, Energy and Timing Estimation in PET Detectors with Deconvolution and Maximum Likelihood Pulse Shape Discrimination.

    PubMed

    Berg, Eric; Roncali, Emilie; Hutchcroft, Will; Qi, Jinyi; Cherry, Simon R

    2016-11-01

In a scintillation detector, the light generated in the scintillator by a gamma interaction is converted to photoelectrons by a photodetector and produces a time-dependent waveform, the shape of which depends on the scintillator properties and the photodetector response. Several depth-of-interaction (DOI) encoding strategies have been developed that manipulate the scintillator's temporal response along the crystal length and therefore require pulse shape discrimination techniques to differentiate waveform shapes. In this work, we demonstrate how maximum likelihood (ML) estimation methods can be applied to pulse shape discrimination to better estimate deposited energy, DOI and interaction time (for time-of-flight (TOF) PET) of a gamma ray in a scintillation detector. We developed likelihood models based on either the estimated detection times of individual photoelectrons or the number of photoelectrons in discrete time bins, and applied them to two phosphor-coated crystals (LFS and LYSO) used in a previously developed TOF-DOI detector concept. Compared with conventional analytical methods, ML pulse shape discrimination improved DOI encoding by 27% for both crystals. Using the ML DOI estimate, we were able to counter depth-dependent changes in light collection inherent to long scintillator crystals and recover the energy resolution measured with fixed depth irradiation (~11.5% for both crystals). Lastly, we demonstrated how the Richardson-Lucy algorithm, an iterative, ML-based deconvolution technique, can be applied to the digitized waveforms to deconvolve the photodetector's single photoelectron response and produce waveforms with a faster rising edge. After deconvolution and applying DOI and time-walk corrections, we demonstrated a 13% improvement in coincidence timing resolution (from 290 to 254 ps) with the LFS crystal and an 8% improvement (323 to 297 ps) with the LYSO crystal.

  39. SATe-II: very fast and accurate simultaneous estimation of multiple sequence alignments and phylogenetic trees.

    PubMed

    Liu, Kevin; Warnow, Tandy J; Holder, Mark T; Nelesen, Serita M; Yu, Jiaye; Stamatakis, Alexandros P; Linder, C Randal

    2012-01-01

    Highly accurate estimation of phylogenetic trees for large data sets is difficult, in part because multiple sequence alignments must be accurate for phylogeny estimation methods to be accurate. Coestimation of alignments and trees has been attempted but currently only SATé estimates reasonably accurate trees and alignments for large data sets in practical time frames (Liu K., Raghavan S., Nelesen S., Linder C.R., Warnow T. 2009b. Rapid and accurate large-scale coestimation of sequence alignments and phylogenetic trees. Science. 324:1561-1564). Here, we present a modification to the original SATé algorithm that improves upon SATé (which we now call SATé-I) in terms of speed and of phylogenetic and alignment accuracy. SATé-II uses a different divide-and-conquer strategy than SATé-I and so produces smaller more closely related subsets than SATé-I; as a result, SATé-II produces more accurate alignments and trees, can analyze larger data sets, and runs more efficiently than SATé-I. Generally, SATé is a metamethod that takes an existing multiple sequence alignment method as an input parameter and boosts the quality of that alignment method. SATé-II-boosted alignment methods are significantly more accurate than their unboosted versions, and trees based upon these improved alignments are more accurate than trees based upon the original alignments. Because SATé-I used maximum likelihood (ML) methods that treat gaps as missing data to estimate trees and because we found a correlation between the quality of tree/alignment pairs and ML scores, we explored the degree to which SATé's performance depends on using ML with gaps treated as missing data to determine the best tree/alignment pair. We present two lines of evidence that using ML with gaps treated as missing data to optimize the alignment and tree produces very poor results. First, we show that the optimization problem where a set of unaligned DNA sequences is given and the output is the tree and alignment of those sequences that maximize likelihood under the Jukes-Cantor model is uninformative in the worst possible sense. For all inputs, all trees optimize the likelihood score. Second, we show that a greedy heuristic that uses GTR+Gamma ML to optimize the alignment and the tree can produce very poor alignments and trees. Therefore, the excellent performance of SATé-II and SATé-I is not because ML is used as an optimization criterion for choosing the best tree/alignment pair but rather due to the particular divide-and-conquer realignment techniques employed.

  40. Fitting power-laws in empirical data with estimators that work for all exponents

    PubMed Central

    Hanel, Rudolf; Corominas-Murtra, Bernat; Liu, Bo; Thurner, Stefan

    2017-01-01

Most standard methods based on maximum likelihood (ML) estimates of power-law exponents can only be reliably used to identify exponents smaller than minus one. The argument that power laws are otherwise not normalizable depends on the underlying sample space the data is drawn from, and is true only for sample spaces that are unbounded from above. Power-laws obtained from bounded sample spaces (as is the case for practically all data related problems) are always free of such limitations and maximum likelihood estimates can be obtained for arbitrary powers without restrictions. Here we first derive the appropriate ML estimator for arbitrary exponents of power-law distributions on bounded discrete sample spaces. We then show that an almost identical estimator also works perfectly for continuous data. We implemented this ML estimator and compare its performance with previous approaches. We present a general recipe for how to use these estimators and provide the associated computer codes. PMID:28245249
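
    A minimal version of the bounded-support estimator idea (my own sketch, not the authors' published code): on a finite support {1, ..., N} the normalizing constant exists for any exponent, so alpha can be maximized without the usual alpha > 1 restriction.

```python
# ML fit of p(x) ~ x^(-alpha) on the bounded discrete support {1, ..., N}.
import numpy as np
from scipy.optimize import minimize_scalar

def fit_powerlaw_exponent(data, N):
    """ML exponent for p(x) ~ x^(-alpha) on support {1, ..., N}."""
    ks = np.arange(1, N + 1)
    def negloglike(alpha):
        logZ = np.log(np.sum(ks**(-alpha)))        # finite for any alpha
        return alpha * np.log(data).sum() + len(data) * logZ
    return minimize_scalar(negloglike, bounds=(-5, 5), method="bounded").x

rng = np.random.default_rng(6)
N, alpha_true = 100, 0.7     # alpha < 1: not normalizable on unbounded support
p = np.arange(1, N + 1)**(-alpha_true); p /= p.sum()
sample = rng.choice(np.arange(1, N + 1), size=5000, p=p)
print(fit_powerlaw_exponent(sample, N))            # close to 0.7
```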

  41. Maximum likelihood estimation in calibrating a stereo camera setup.

    PubMed

    Muijtjens, A M; Roos, J M; Arts, T; Hasman, A

    1999-02-01

Motion and deformation of the cardiac wall may be measured by following the positions of implanted radiopaque markers in three dimensions, using two x-ray cameras simultaneously. Routinely, calibration of the position measurement system is obtained by registration of the images of a calibration object containing 10-20 radiopaque markers at known positions. Unfortunately, an accidental change of the position of a camera after calibration requires complete recalibration. Alternatively, redundant information in the measured image positions of stereo pairs can be used for calibration, so that a separate calibration procedure can be avoided. In the current study a model is developed that describes the geometry of the camera setup by five dimensionless parameters. Maximum likelihood (ML) estimates of these parameters were obtained in an error analysis. It is shown that the ML estimates can be found by application of a nonlinear least squares procedure. Compared to the standard unweighted least squares procedure, the ML method resulted in more accurate estimates without noticeable bias. The accuracy of the ML method was investigated in relation to the object aperture. The reconstruction problem appeared well conditioned as long as the object aperture is larger than 0.1 rad. The angle between the two viewing directions appeared to be the parameter most likely to cause major inaccuracies in the reconstruction of the 3-D positions of the markers. Hence, attempts to improve the robustness of the method should primarily focus on reduction of the error in this parameter.

  2. Fuzzy multinomial logistic regression analysis: A multi-objective programming approach

    NASA Astrophysics Data System (ADS)

    Abdalla, Hesham A.; El-Sayed, Amany A.; Hamed, Ramadan

    2017-05-01

    Parameter estimation for multinomial logistic regression is usually based on maximizing the likelihood function. For large, well-balanced datasets, maximum likelihood (ML) estimation is a satisfactory approach. Unfortunately, ML can fail completely, or at least produce poor results, in terms of estimated probabilities and confidence intervals of parameters, especially for small datasets. In this study, a new approach based on fuzzy concepts is proposed to estimate the parameters of multinomial logistic regression. The study assumes that the parameters of multinomial logistic regression are fuzzy. Based on the extension principle stated by Zadeh and on Bárdossy's proposition, a multi-objective programming approach is suggested to estimate these fuzzy parameters. A simulation study is used to evaluate the performance of the new approach versus the ML approach. Results show that the proposed model outperforms ML for small datasets.

  3. RAxML-VI-HPC: maximum likelihood-based phylogenetic analyses with thousands of taxa and mixed models.

    PubMed

    Stamatakis, Alexandros

    2006-11-01

    RAxML-VI-HPC (randomized axelerated maximum likelihood for high performance computing) is a sequential and parallel program for inference of large phylogenies with maximum likelihood (ML). Low-level technical optimizations, a modification of the search algorithm, and the use of the GTR+CAT approximation as replacement for GTR+Gamma yield a program that is between 2.7 and 52 times faster than the previous version of RAxML. A large-scale performance comparison with GARLI, PHYML, IQPNNI and MrBayes on real data containing 1000 up to 6722 taxa shows that RAxML requires at least 5.6 times less main memory and yields better trees in similar times compared with the best competing program (GARLI) on datasets up to 2500 taxa. On datasets of ≥4000 taxa it also runs 2-3 times faster than GARLI. RAxML has been parallelized with MPI to conduct parallel multiple bootstraps and inferences on distinct starting trees. The program has been used to compute ML trees on two of the largest alignments to date containing 25,057 (1463 bp) and 2182 (51,089 bp) taxa, respectively. icwww.epfl.ch/~stamatak

  4. Comparing methods of analysing datasets with small clusters: case studies using four paediatric datasets.

    PubMed

    Marston, Louise; Peacock, Janet L; Yu, Keming; Brocklehurst, Peter; Calvert, Sandra A; Greenough, Anne; Marlow, Neil

    2009-07-01

    Studies of prematurely born infants contain a relatively large percentage of multiple births, so the resulting data have a hierarchical structure with small clusters of size 1, 2 or 3. Ignoring the clustering may lead to incorrect inferences. The aim of this study was to compare statistical methods which can be used to analyse such data: generalised estimating equations, multilevel models, multiple linear regression and logistic regression. Four datasets which differed in total size and in percentage of multiple births (n = 254, multiple 18%; n = 176, multiple 9%; n = 10,098, multiple 3%; n = 1585, multiple 8%) were analysed. With the continuous outcome, two-level models produced similar results in the larger dataset, while generalised least squares multilevel modelling (ML GLS, 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE, 'xtmixed' in Stata) produced divergent estimates using the smaller dataset. For the dichotomous outcome, most methods, except maximum likelihood multilevel modelling with Gauss-Hermite quadrature (ML GH, 'xtlogit' in Stata), gave similar odds ratios and 95% confidence intervals within datasets. For the continuous outcome, our results suggest using multilevel modelling. We conclude that generalised least squares multilevel modelling (ML GLS, 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE, 'xtmixed' in Stata) should be used with caution when the dataset is small. Where the outcome is dichotomous and there is a relatively large percentage of non-independent data, it is recommended that these are accounted for in analyses using logistic regression with adjusted standard errors or multilevel modelling. If, however, the dataset has a small percentage of clusters greater than size 1 (e.g. a population dataset of children where there are few multiples), there appears to be less need to adjust for clustering.
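
    For readers who want to run this kind of comparison outside Stata, here is a hedged Python sketch using statsmodels (my choice of library, not the paper's): an ML random-intercept model versus GEE with exchangeable correlation on simulated small-cluster data; all data-generating numbers are invented:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Simulated birth-cluster data: clusters of size 1-3 share a random effect.
    rng = np.random.default_rng(2)
    rows = []
    for cid in range(300):
        size = rng.choice([1, 2, 3], p=[0.85, 0.13, 0.02])
        u = rng.normal(0.0, 1.0)                    # shared cluster effect
        for _ in range(size):
            x = rng.normal()
            rows.append({"cluster": cid, "x": x,
                         "y": 1.0 + 0.5 * x + u + rng.normal()})
    df = pd.DataFrame(rows)

    # ML multilevel model with a random intercept per cluster (cf. 'xtmixed').
    ml_fit = smf.mixedlm("y ~ x", df, groups=df["cluster"]).fit(reml=False)

    # GEE with exchangeable within-cluster correlation.
    gee_fit = smf.gee("y ~ x", "cluster", df,
                      cov_struct=sm.cov_struct.Exchangeable()).fit()

    print(ml_fit.params["x"], gee_fit.params["x"])  # both close to 0.5
    ```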

  5. Application and performance of an ML-EM algorithm in NEXT

    NASA Astrophysics Data System (ADS)

    Simón, A.; Lerche, C.; Monrabal, F.; Gómez-Cadenas, J. J.; Álvarez, V.; Azevedo, C. D. R.; Benlloch-Rodríguez, J. M.; Borges, F. I. G. M.; Botas, A.; Cárcel, S.; Carrión, J. V.; Cebrián, S.; Conde, C. A. N.; Díaz, J.; Diesburg, M.; Escada, J.; Esteve, R.; Felkai, R.; Fernandes, L. M. P.; Ferrario, P.; Ferreira, A. L.; Freitas, E. D. C.; Goldschmidt, A.; González-Díaz, D.; Gutiérrez, R. M.; Hauptman, J.; Henriques, C. A. O.; Hernandez, A. I.; Hernando Morata, J. A.; Herrero, V.; Jones, B. J. P.; Labarga, L.; Laing, A.; Lebrun, P.; Liubarsky, I.; López-March, N.; Losada, M.; Martín-Albo, J.; Martínez-Lema, G.; Martínez, A.; McDonald, A. D.; Monteiro, C. M. B.; Mora, F. J.; Moutinho, L. M.; Muñoz Vidal, J.; Musti, M.; Nebot-Guinot, M.; Novella, P.; Nygren, D. R.; Palmeiro, B.; Para, A.; Pérez, J.; Querol, M.; Renner, J.; Ripoll, L.; Rodríguez, J.; Rogers, L.; Santos, F. P.; dos Santos, J. M. F.; Sofka, C.; Sorel, M.; Stiegler, T.; Toledo, J. F.; Torrent, J.; Tsamalaidze, Z.; Veloso, J. F. C. A.; Webb, R.; White, J. T.; Yahlali, N.

    2017-08-01

    The goal of the NEXT experiment is the observation of neutrinoless double beta decay in 136Xe using a gaseous xenon TPC with electroluminescent amplification and specialized photodetector arrays for calorimetry and tracking. The NEXT Collaboration is exploring a number of reconstruction algorithms to exploit the full potential of the detector. This paper describes one of them: the Maximum Likelihood Expectation Maximization (ML-EM) method, a generic iterative algorithm to find maximum-likelihood estimates of parameters that has been applied to solve many different types of complex inverse problems. In particular, we discuss a bi-dimensional version of the method in which the photosensor signals integrated over time are used to reconstruct a transverse projection of the event. First results show that, when applied to detector simulation data, the algorithm achieves nearly optimal energy resolution (better than 0.5% FWHM at the Q value of 136Xe) for events distributed over the full active volume of the TPC.
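
    The ML-EM update itself is compact. Below is a generic numpy sketch of the textbook iteration for a Poisson model y ~ Poisson(Ax) (not the NEXT Collaboration's implementation; the toy system matrix is invented):

    ```python
    import numpy as np

    def mlem(A, y, n_iter=50):
        # Textbook ML-EM for y ~ Poisson(A @ x); A[i, j] is the probability
        # that an emission in voxel j produces a count in sensor i.
        x = np.ones(A.shape[1])                     # flat non-negative start
        sens = np.maximum(A.sum(axis=0), 1e-12)     # sensitivity image A^T 1
        for _ in range(n_iter):
            proj = np.maximum(A @ x, 1e-12)         # forward projection
            x *= (A.T @ (y / proj)) / sens          # multiplicative EM update
        return x

    # Tiny usage: recover a 2-voxel source from a 3-sensor system.
    A = np.array([[0.7, 0.1],
                  [0.2, 0.3],
                  [0.1, 0.6]])
    x_true = np.array([100.0, 50.0])
    y = np.random.default_rng(3).poisson(A @ x_true).astype(float)
    print(mlem(A, y))  # close to [100, 50]
    ```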

  6. Maximum likelihood: Extracting unbiased information from complex networks

    NASA Astrophysics Data System (ADS)

    Garlaschelli, Diego; Loffredo, Maria I.

    2008-07-01

    The choice of free parameters in network models is subjective, since it depends on what topological properties are being monitored. However, we show that the maximum likelihood (ML) principle indicates a unique, statistically rigorous parameter choice, associated with a well-defined topological feature. We then find that, if the ML condition is incompatible with the built-in parameter choice, network models turn out to be intrinsically ill defined or biased. To overcome this problem, we construct a class of safely unbiased models. We also propose an extension of these results that leads to the fascinating possibility to extract, only from topological data, the “hidden variables” underlying network organization, making them “no longer hidden.” We test our method on World Trade Web data, where we recover the empirical gross domestic product using only topological information.
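
    As a concrete instance of the ML parameter choice for a hidden-variable model, the sketch below fits the connection probability p_ij = x_i x_j / (1 + x_i x_j) by iterating the ML conditions that equate each node's expected and observed degree; the damped fixed-point form and the toy network are illustrative assumptions, not the paper's code:

    ```python
    import numpy as np

    def fit_fitness_model(adj, n_iter=2000):
        # ML fit of p_ij = x_i * x_j / (1 + x_i * x_j): the likelihood is
        # maximized exactly when each node's expected degree equals its
        # observed degree, which the damped fixed-point below enforces.
        k = adj.sum(axis=1).astype(float)
        x = k / max(k.max(), 1.0) + 1e-6
        for _ in range(n_iter):
            P = np.outer(x, x)
            term = x[None, :] / (1.0 + P)           # x_j / (1 + x_i x_j)
            np.fill_diagonal(term, 0.0)
            x_new = k / np.maximum(term.sum(axis=1), 1e-12)
            x = 0.5 * x + 0.5 * x_new               # damping for stability
        return x

    # Generate a graph from known hidden variables, then recover them.
    rng = np.random.default_rng(4)
    x_true = rng.uniform(0.2, 2.0, 40)
    P = np.outer(x_true, x_true)
    P = P / (1.0 + P)
    adj = (rng.uniform(size=P.shape) < np.triu(P, 1)).astype(int)
    adj = adj + adj.T
    print(np.corrcoef(x_true, fit_fitness_model(adj))[0, 1])  # high corr.
    ```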

  7. MRL and SuperFine+MRL: new supertree methods

    PubMed Central

    2012-01-01

    Background Supertree methods combine trees on subsets of the full taxon set to produce a tree on the entire set of taxa. Of the many supertree methods, the most popular is MRP (Matrix Representation with Parsimony), a method that operates by first encoding the input set of source trees by a large matrix (the "MRP matrix") over {0, 1, ?}, and then running maximum parsimony heuristics on the MRP matrix. Experimental studies evaluating MRP in comparison to other supertree methods have established that for large datasets, MRP generally produces trees of equal or greater accuracy than other methods, and can run on larger datasets. A recent development in supertree methods is SuperFine+MRP, a method that combines MRP with a divide-and-conquer approach, and produces more accurate trees in less time than MRP. In this paper we consider a new approach for supertree estimation, called MRL (Matrix Representation with Likelihood). MRL begins with the same MRP matrix, but then analyzes the MRP matrix using heuristics (such as RAxML) for 2-state Maximum Likelihood. Results We compared MRP and SuperFine+MRP with MRL and SuperFine+MRL on simulated and biological datasets. We examined the MRP and MRL scores of each method on a wide range of datasets, as well as the resulting topological accuracy of the trees. Our experimental results show that MRL, coupled with a very good ML heuristic such as RAxML, produced more accurate trees than MRP, and MRL scores were more strongly correlated with topological accuracy than MRP scores. Conclusions SuperFine+MRP, when based upon a good MP heuristic, such as TNT, produces among the best scores for both MRP and MRL, and is generally faster and more topologically accurate than other supertree methods we tested. PMID:22280525

  8. Maximum likelihood positioning and energy correction for scintillation detectors

    NASA Astrophysics Data System (ADS)

    Lerche, Christoph W.; Salomon, André; Goldschmidt, Benjamin; Lodomez, Sarah; Weissler, Björn; Solf, Torsten

    2016-02-01

    An algorithm for determining the crystal pixel and the gamma ray energy with scintillation detectors for PET is presented. The algorithm uses maximum likelihood (ML) estimation and therefore is inherently robust to missing data caused by defect or paralysed photo detector pixels. We tested the algorithm on a highly integrated MRI compatible small animal PET insert. The scintillation detector blocks of the PET gantry were built with the newly developed digital Silicon Photomultiplier (SiPM) technology from Philips Digital Photon Counting and LYSO pixel arrays with a pitch of 1 mm and length of 12 mm. Light sharing was used to readout the scintillation light from the 30 × 30 scintillator pixel array with an 8 × 8 SiPM array. For the performance evaluation of the proposed algorithm, we measured the scanner's spatial resolution, energy resolution, singles and prompt count rate performance, and image noise. These values were compared to corresponding values obtained with Center of Gravity (CoG) based positioning methods for different scintillation light trigger thresholds and also for different energy windows. While all positioning algorithms showed similar spatial resolution, a clear advantage for the ML method was observed when comparing the PET scanner's overall single and prompt detection efficiency, image noise, and energy resolution to the CoG based methods. Further, ML positioning reduces the dependence of image quality on scanner configuration parameters and was the only method that allowed achieving the highest energy resolution, count rate performance and spatial resolution at the same time.
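
    A minimal sketch of this style of ML crystal lookup, under a multinomial light-statistics assumption of my own (the detector PDFs here are random stand-ins); robustness to dead pixels falls out of simply masking those channels from the likelihood sum:

    ```python
    import numpy as np

    def ml_position(signal, pdfs, alive):
        # Pick the crystal whose expected light distribution best explains
        # the measured channel signals, using only working channels.
        # pdfs[c, k]: fraction of crystal c's light expected in channel k.
        s = signal[alive]
        p = pdfs[:, alive]
        p = p / p.sum(axis=1, keepdims=True)   # renormalize over live channels
        loglik = (s * np.log(np.maximum(p, 1e-30))).sum(axis=1)
        return int(np.argmax(loglik))          # multinomial ML crystal index

    # Usage: 4 crystals, 6 readout channels, one dead channel.
    rng = np.random.default_rng(5)
    pdfs = rng.dirichlet(np.ones(6), size=4)
    alive = np.array([True, True, False, True, True, True])
    signal = rng.multinomial(500, pdfs[2]).astype(float)   # event in crystal 2
    print(ml_position(signal, pdfs, alive))                # prints 2
    ```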

  9. Local Influence and Robust Procedures for Mediation Analysis

    ERIC Educational Resources Information Center

    Zu, Jiyun; Yuan, Ke-Hai

    2010-01-01

    Existing studies of mediation models have been limited to normal-theory maximum likelihood (ML). Because real data in the social and behavioral sciences are seldom normally distributed and often contain outliers, classical methods generally lead to inefficient or biased parameter estimates. Consequently, the conclusions from a mediation analysis…

  10. Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter.

    PubMed

    Choi, Jihoon; Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il

    2017-09-13

    This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.
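
    The generalised cross-correlation machinery can be sketched briefly. The weighting below is a coherence-based ML-style prefilter with an additive regularisation constant standing in for the paper's optimised factor (the exact prefilter is not reproduced); note that in practice the spectral densities would be averaged over segments, since single periodograms make the coherence trivially equal to one and reduce the weight to a regularised PHAT:

    ```python
    import numpy as np

    def gcc_delay(x1, x2, fs, reg=1e-2):
        # Generalised cross-correlation with a coherence-based ML-style
        # weight; `reg` stands in for the paper's regularisation factor.
        n = len(x1)
        X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
        cross = X2 * np.conj(X1)
        coh2 = np.abs(cross) ** 2 / (np.abs(X1) ** 2 * np.abs(X2) ** 2 + 1e-30)
        w = coh2 / (np.abs(cross) * (1.0 - coh2 + reg) + 1e-30)
        cc = np.fft.irfft(w * cross, n)
        lag = np.argmax(np.fft.fftshift(cc)) - n // 2
        return lag / fs

    # Usage: x2 lags x1 by 25 samples.
    rng = np.random.default_rng(6)
    s = rng.normal(size=4096)
    x1 = s + 0.3 * rng.normal(size=4096)
    x2 = np.roll(s, 25) + 0.3 * rng.normal(size=4096)
    print(gcc_delay(x1, x2, fs=1.0))  # 25.0
    ```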

  11. Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter

    PubMed Central

    Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il

    2017-01-01

    This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected. PMID:28902154

  12. Evaluation of selective control information detection scheme in orthogonal frequency division multiplexing-based radio-over-fiber and visible light communication links

    NASA Astrophysics Data System (ADS)

    Dalarmelina, Carlos A.; Adegbite, Saheed A.; Pereira, Esequiel da V.; Nunes, Reginaldo B.; Rocha, Helder R. O.; Segatto, Marcelo E. V.; Silva, Jair A. L.

    2017-05-01

    Block-level detection is required to decode what may be classified as selective control information (SCI), such as the control format indicator in 4G long-term evolution systems. Using optical orthogonal frequency division multiplexing over radio-over-fiber (RoF) links, we report the experimental evaluation of an SCI detection scheme based on a time-domain correlation (TDC) technique in comparison with the conventional maximum likelihood (ML) approach. When compared with the ML method, it is shown that the TDC method improves detection performance over both 20 and 40 km of standard single mode fiber (SSMF) links. We also report a performance analysis of the TDC scheme in noisy visible light communication channel models after propagation through 40 km of SSMF. Experimental and simulation results confirm that the TDC method is attractive for practical orthogonal frequency division multiplexing-based RoF and fiber-wireless systems. Unlike the ML method, another key benefit of the TDC is that it requires no channel estimation.

  13. Maximum likelihood positioning algorithm for high-resolution PET scanners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gross-Weege, Nicolas, E-mail: nicolas.gross-weege@pmi.rwth-aachen.de, E-mail: schulz@pmi.rwth-aachen.de; Schug, David; Hallen, Patrick

    2016-06-15

    Purpose: In high-resolution positron emission tomography (PET), lightsharing elements are incorporated into typical detector stacks to read out scintillator arrays in which one scintillator element (crystal) is smaller than the size of the readout channel. In order to identify the hit crystal by means of the measured light distribution, a positioning algorithm is required. One commonly applied positioning algorithm uses the center of gravity (COG) of the measured light distribution. The COG algorithm is limited in spatial resolution by noise and intercrystal Compton scatter. The purpose of this work is to develop a positioning algorithm which overcomes this limitation. Methods: The authors present a maximum likelihood (ML) algorithm which compares a set of expected light distributions given by probability density functions (PDFs) with the measured light distribution. Instead of modeling the PDFs by using an analytical model, the PDFs of the proposed ML algorithm are generated assuming a single-gamma-interaction model from measured data. The algorithm was evaluated with a hot-rod phantom measurement acquired with the preclinical HYPERION II D PET scanner. In order to assess the performance with respect to sensitivity, energy resolution, and image quality, the ML algorithm was compared to a COG algorithm which calculates the COG from a restricted set of channels. The authors studied the energy resolution of the ML and the COG algorithm regarding incomplete light distributions (missing channel information caused by detector dead time). Furthermore, the authors investigated the effects of using a filter based on the likelihood values on sensitivity, energy resolution, and image quality. Results: A sensitivity gain of up to 19% was demonstrated in comparison to the COG algorithm for the selected operation parameters. Energy resolution and image quality were on a similar level for both algorithms. Additionally, the authors demonstrated that the performance of the ML algorithm is less prone to missing channel information. A likelihood filter visually improved the image quality, i.e., the peak-to-valley ratio increased up to a factor of 3 for 2-mm-diameter phantom rods by rejecting 87% of the coincidences. A relative improvement of the energy resolution of up to 12.8% was also measured, rejecting 91% of the coincidences. Conclusions: The developed ML algorithm increases the sensitivity by correctly handling missing channel information without influencing energy resolution or image quality. Furthermore, the authors showed that energy resolution and image quality can be improved substantially by rejecting events that do not comply well with the single-gamma-interaction model, such as Compton-scattered events.

  14. The performance of monotonic and new non-monotonic gradient ascent reconstruction algorithms for high-resolution neuroreceptor PET imaging.

    PubMed

    Angelis, G I; Reader, A J; Kotasidis, F A; Lionheart, W R; Matthews, J C

    2011-07-07

    Iterative expectation maximization (EM) techniques have been extensively used to solve maximum likelihood (ML) problems in positron emission tomography (PET) image reconstruction. Although EM methods offer a robust approach to solving ML problems, they usually suffer from slow convergence rates. The ordered subsets EM (OSEM) algorithm provides significant improvements in the convergence rate, but it can cycle between estimates converging towards the ML solution of each subset. In contrast, gradient-based methods, such as the recently proposed non-monotonic maximum likelihood (NMML) and the more established preconditioned conjugate gradient (PCG), offer a globally convergent, yet equally fast, alternative to OSEM. Reported results showed that NMML provides faster convergence compared to OSEM; however, it has never been compared to other fast gradient-based methods, like PCG. Therefore, in this work we evaluate the performance of two gradient-based methods (NMML and PCG) and investigate their potential as an alternative to the fast and widely used OSEM. All algorithms were evaluated using 2D simulations, as well as a single [(11)C]DASB clinical brain dataset. Results on simulated 2D data show that both PCG and NMML achieve orders of magnitude faster convergence to the ML solution compared to MLEM and exhibit comparable performance to OSEM. Equally fast performance is observed between OSEM and PCG for clinical 3D data, but NMML seems to perform poorly. However, with the addition of a preconditioner term to the gradient direction, the convergence behaviour of NMML can be substantially improved. Although PCG is a fast convergent algorithm, the use of a (bent) line search increases the complexity of the implementation, as well as the computational time involved per iteration. Contrary to previous reports, NMML offers no clear advantage over OSEM or PCG, for noisy PET data. Therefore, we conclude that there is little evidence to replace OSEM as the algorithm of choice for many applications, especially given that in practice convergence is often not desired for algorithms seeking ML estimates.
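
    For reference, the OSEM baseline that the gradient methods are compared against amounts to cycling the EM update over subsets of the projections. A generic numpy sketch with an invented toy system (not the authors' implementation):

    ```python
    import numpy as np

    def osem(A, y, n_subsets=4, n_iter=5):
        # One EM-style update per projection subset; a full pass costs about
        # one MLEM iteration but advances the image n_subsets times (hence
        # the speed-up, and the subset cycling noted in the abstract).
        n_rows, n_vox = A.shape
        x = np.ones(n_vox)
        for _ in range(n_iter):
            for s in range(n_subsets):
                idx = np.arange(s, n_rows, n_subsets)
                As, ys = A[idx], y[idx]
                proj = np.maximum(As @ x, 1e-12)
                x *= (As.T @ (ys / proj)) / np.maximum(As.sum(axis=0), 1e-12)
        return x

    # Usage with a random well-conditioned toy system.
    rng = np.random.default_rng(7)
    A = rng.uniform(0.0, 1.0, (64, 16))
    x_true = rng.uniform(1.0, 10.0, 16)
    y = rng.poisson(A @ x_true).astype(float)
    print(osem(A, y)[:4], x_true[:4])  # similar values
    ```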

  15. Testing Multivariate Adaptive Regression Splines (MARS) as a Method of Land Cover Classification of TERRA-ASTER Satellite Images.

    PubMed

    Quirós, Elia; Felicísimo, Angel M; Cuartero, Aurora

    2009-01-01

    This work proposes a new method to classify multi-spectral satellite images based on multivariate adaptive regression splines (MARS) and compares this classification system with the more common parallelepiped and maximum likelihood (ML) methods. We apply the classification methods to the land cover classification of a test zone located in southwestern Spain. The basis of the MARS method and its associated procedures are explained in detail, and the area under the ROC curve (AUC) is compared for the three methods. The results show that the MARS method provides better results than the parallelepiped method in all cases, and it provides better results than the maximum likelihood method in 13 cases out of 17. These results demonstrate that the MARS method can be used in isolation or in combination with other methods to improve the accuracy of soil cover classification. The improvement is statistically significant according to the Wilcoxon signed rank test.
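
    The ML benchmark used in such comparisons is the classical per-class Gaussian classifier. A compact numpy sketch (class means and covariances would normally be estimated from training pixels; here they are invented):

    ```python
    import numpy as np

    def gaussian_ml_classify(X, means, covs, priors=None):
        # Assign each pixel (row of X, one column per band) to the class
        # whose multivariate Gaussian gives the highest log-density.
        scores = np.empty((X.shape[0], len(means)))
        for c, (m, S) in enumerate(zip(means, covs)):
            d = X - m
            inv = np.linalg.inv(S)
            _, logdet = np.linalg.slogdet(S)
            maha = np.einsum("ij,jk,ik->i", d, inv, d)   # Mahalanobis terms
            scores[:, c] = -0.5 * (maha + logdet)
            if priors is not None:
                scores[:, c] += np.log(priors[c])
        return scores.argmax(axis=1)

    # Usage: two spectral classes in 3 bands.
    rng = np.random.default_rng(8)
    m = [np.array([0.2, 0.4, 0.1]), np.array([0.5, 0.3, 0.6])]
    S = [0.01 * np.eye(3), 0.02 * np.eye(3)]
    X = np.vstack([rng.multivariate_normal(m[0], S[0], 50),
                   rng.multivariate_normal(m[1], S[1], 50)])
    print(gaussian_ml_classify(X, m, S)[:5])  # mostly class 0 at the start
    ```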

  16. Improvement of range spatial resolution of medical ultrasound imaging by element-domain signal processing

    NASA Astrophysics Data System (ADS)

    Hasegawa, Hideyuki

    2017-07-01

    The range spatial resolution is an important factor determining the image quality in ultrasonic imaging. The range spatial resolution in ultrasonic imaging depends on the ultrasonic pulse length, which is determined by the mechanical response of the piezoelectric element in an ultrasonic probe. To improve the range spatial resolution without replacing the transducer element, in the present study, methods based on maximum likelihood (ML) estimation and multiple signal classification (MUSIC) were proposed. The proposed methods were applied to echo signals received by individual transducer elements in an ultrasonic probe. The basic experimental results showed that the axial width at half maximum of the echo from a string phantom was improved from 0.21 mm (conventional method) to 0.086 mm (ML) and 0.094 mm (MUSIC).

  17. The evolution of autodigestion in the mushroom family Psathyrellaceae (Agaricales) inferred from Maximum Likelihood and Bayesian methods.

    PubMed

    Nagy, László G; Urban, Alexander; Orstadius, Leif; Papp, Tamás; Larsson, Ellen; Vágvölgyi, Csaba

    2010-12-01

    Recently developed comparative phylogenetic methods offer a wide spectrum of applications in evolutionary biology, although it is generally accepted that their statistical properties are incompletely known. Here, we examine and compare the statistical power of the ML and Bayesian methods with regard to selection of best-fit models of fruiting-body evolution and hypothesis testing of ancestral states on a real-life data set of a physiological trait (autodigestion) in the family Psathyrellaceae. Our phylogenies are based on the first multigene data set generated for the family. Two different coding regimes (binary and multistate) and two data sets differing in taxon sampling density are examined. The Bayesian method outperformed Maximum Likelihood with regard to statistical power in all analyses. This is particularly evident if the signal in the data is weak, i.e. in cases when the ML approach does not provide support to choose among competing hypotheses. Results based on binary and multistate coding differed only modestly, although it was evident that multistate analyses were less conclusive in all cases. It seems that increased taxon sampling density has favourable effects on inference of ancestral states, while model parameters are influenced to a smaller extent. The model best fitting our data implies that the rate of losses of deliquescence equals zero, although model selection in ML does not provide proper support to reject three of the four candidate models. The results also support the hypothesis that non-deliquescence (lack of autodigestion) has been ancestral in Psathyrellaceae, and that deliquescent fruiting bodies represent the preferred state, having evolved independently several times during evolution. Copyright © 2010 Elsevier Inc. All rights reserved.

  18. MIXED MODEL AND ESTIMATING EQUATION APPROACHES FOR ZERO INFLATION IN CLUSTERED BINARY RESPONSE DATA WITH APPLICATION TO A DATING VIOLENCE STUDY

    PubMed Central

    Fulton, Kara A.; Liu, Danping; Haynie, Denise L.; Albert, Paul S.

    2016-01-01

    The NEXT Generation Health study investigates the dating violence of adolescents using a survey questionnaire. Each student is asked to affirm or deny multiple instances of violence in his/her dating relationship. There is, however, evidence suggesting that students not in a relationship responded to the survey, resulting in excessive zeros in the responses. This paper proposes likelihood-based and estimating equation approaches to analyze the zero-inflated clustered binary response data. We adopt a mixed model method to account for the cluster effect, and the model parameters are estimated using a maximum likelihood (ML) approach that requires a Gauss-Hermite quadrature (GHQ) approximation for implementation. Since an incorrect assumption on the random effects distribution may bias the results, we construct generalized estimating equations (GEE) that do not require the correct specification of within-cluster correlation. In a series of simulation studies, we examine the performance of ML and GEE methods in terms of their bias, efficiency and robustness. We illustrate the importance of properly accounting for this zero inflation by reanalyzing the NEXT data where this issue has previously been ignored. PMID:26937263
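
    The GHQ approximation at the heart of the ML approach can be shown in a few lines. This sketch computes the marginal log-likelihood of one cluster in a random-intercept logistic model (a simplified stand-in for the paper's zero-inflated model; all parameter values are invented):

    ```python
    import numpy as np
    from scipy.special import expit

    def ghq_cluster_loglik(y, x, beta, sigma, n_quad=15):
        # Marginal log-likelihood of one cluster in a random-intercept
        # logistic model, approximated with Gauss-Hermite quadrature.
        t, w = np.polynomial.hermite.hermgauss(n_quad)  # weight exp(-t^2)
        b = np.sqrt(2.0) * sigma * t                    # change of variables
        eta = beta[0] + beta[1] * x[:, None] + b[None, :]
        p = expit(eta)                                  # members x quad nodes
        lik_given_b = np.prod(np.where(y[:, None] == 1, p, 1.0 - p), axis=0)
        return np.log((w * lik_given_b).sum() / np.sqrt(np.pi))

    # Usage: one cluster of three students.
    y = np.array([1, 0, 1])
    x = np.array([0.5, -1.0, 0.2])
    print(ghq_cluster_loglik(y, x, beta=np.array([-0.3, 0.8]), sigma=1.2))
    ```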

  19. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pražnikar, Jure; University of Primorska,; Turk, Dušan, E-mail: dusan.turk@ijs.si

    2014-12-01

    The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of R_free or may leave it out completely.

  20. Empirical best linear unbiased prediction method for small areas with restricted maximum likelihood and bootstrap procedure to estimate the average of household expenditure per capita in Banjar Regency

    NASA Astrophysics Data System (ADS)

    Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho

    2017-03-01

    So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are still limited to the district level. Sample sizes at smaller area levels are insufficient, so direct estimation of poverty indicators there produces high standard errors, and analysis based on them is unreliable. To solve this problem, an estimation method which can provide better accuracy by combining survey data with other auxiliary data is required. One method often used for this is Small Area Estimation (SAE). Among the many SAE methods is Empirical Best Linear Unbiased Prediction (EBLUP). The EBLUP method with the maximum likelihood (ML) procedure does not account for the loss of degrees of freedom due to estimating β with β̂. This drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling the average household expenditure per capita, and implements a bootstrap procedure to calculate the MSE (mean square error) in order to compare the accuracy of the EBLUP method with that of the direct estimation method. Results show that the EBLUP method reduced the MSE in small area estimation.

  1. Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images.

    PubMed

    Elad, M; Feuer, A

    1997-01-01

    The three main tools in the single image restoration theory are the maximum likelihood (ML) estimator, the maximum a posteriori probability (MAP) estimator, and the set theoretic approach using projection onto convex sets (POCS). This paper utilizes the above known tools to propose a unified methodology toward the more complicated problem of superresolution restoration. In the superresolution restoration problem, an improved resolution image is restored from several geometrically warped, blurred, noisy and downsampled measured images. The superresolution restoration problem is modeled and analyzed from the ML, the MAP, and POCS points of view, yielding a generalization of the known superresolution restoration methods. The proposed restoration approach is general but assumes explicit knowledge of the linear space- and time-variant blur, the (additive Gaussian) noise, the different measured resolutions, and the (smooth) motion characteristics. A hybrid method combining the simplicity of the ML and the incorporation of nonellipsoid constraints is presented, giving improved restoration performance, compared with the ML and the POCS approaches. The hybrid method is shown to converge to the unique optimal solution of a new definition of the optimization problem. Superresolution restoration from motionless measurements is also discussed. Simulations demonstrate the power of the proposed methodology.

  2. Robust multiperson detection and tracking for mobile service and social robots.

    PubMed

    Li, Liyuan; Yan, Shuicheng; Yu, Xinguo; Tan, Yeow Kee; Li, Haizhou

    2012-10-01

    This paper proposes an efficient system which integrates multiple vision models for robust multiperson detection and tracking for mobile service and social robots in public environments. The core technique is a novel maximum likelihood (ML)-based algorithm which combines the multimodel detections in mean-shift tracking. First, a likelihood probability which integrates detections and similarity to local appearance is defined. Then, an expectation-maximization (EM)-like mean-shift algorithm is derived under the ML framework. In each iteration, the E-step estimates the associations to the detections, and the M-step locates the new position according to the ML criterion. To be robust to the complex crowded scenarios for multiperson tracking, an improved sequential strategy to perform the mean-shift tracking is proposed. Under this strategy, human objects are tracked sequentially according to their priority order. To balance the efficiency and robustness for real-time performance, at each stage, the first two objects from the list of the priority order are tested, and the one with the higher score is selected. The proposed method has been successfully implemented on real-world service and social robots. The vision system integrates stereo-based and histograms-of-oriented-gradients-based human detections, occlusion reasoning, and sequential mean-shift tracking. Various examples to show the advantages and robustness of the proposed system for multiperson tracking from mobile robots are presented. Quantitative evaluations on the performance of multiperson tracking are also performed. Experimental results indicate that significant improvements have been achieved by using the proposed method.

  3. Safety modeling of urban arterials in Shanghai, China.

    PubMed

    Wang, Xuesong; Fan, Tianxiang; Chen, Ming; Deng, Bing; Wu, Bing; Tremont, Paul

    2015-10-01

    Traffic safety on urban arterials is influenced by several key variables including geometric design features, land use, traffic volume, and travel speeds. This paper is an exploratory study of the relationship of these variables to safety. It uses a comparatively new method of measuring speeds by extracting GPS data from taxis operating on Shanghai's urban network. This GPS derived speed data, hereafter called Floating Car Data (FCD) was used to calculate average speeds during peak and off-peak hours, and was acquired from samples of 15,000+ taxis traveling on 176 segments over 18 major arterials in central Shanghai. Geometric design features of these arterials and surrounding land use characteristics were obtained by field investigation, and crash data was obtained from police reports. Bayesian inference using four different models, Poisson-lognormal (PLN), PLN with Maximum Likelihood priors (PLN-ML), hierarchical PLN (HPLN), and HPLN with Maximum Likelihood priors (HPLN-ML), was used to estimate crash frequencies. Results showed the HPLN-ML models had the best goodness-of-fit and efficiency, and models with ML priors yielded estimates with the lowest standard errors. Crash frequencies increased with increases in traffic volume. Higher average speeds were associated with higher crash frequencies during peak periods, but not during off-peak periods. Several geometric design features including average segment length of arterial, number of lanes, presence of non-motorized lanes, number of access points, and commercial land use, were positively related to crash frequencies. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Maximum likelihood decoding analysis of accumulate-repeat-accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, A.; Divsalar, D.; Yao, K.

    2004-01-01

    In this paper, the performance of accumulate-repeat-accumulate codes with maximum likelihood (ML) decoding is analyzed and compared to random codes by very tight bounds. Some simple codes are shown to perform very close to the Shannon limit with maximum likelihood decoding.

  5. When Can Categorical Variables Be Treated as Continuous? A Comparison of Robust Continuous and Categorical SEM Estimation Methods under Suboptimal Conditions

    ERIC Educational Resources Information Center

    Rhemtulla, Mijke; Brosseau-Liard, Patricia E.; Savalei, Victoria

    2012-01-01

    A simulation study compared the performance of robust normal theory maximum likelihood (ML) and robust categorical least squares (cat-LS) methodology for estimating confirmatory factor analysis models with ordinal variables. Data were generated from 2 models with 2-7 categories, 4 sample sizes, 2 latent distributions, and 5 patterns of category…

  6. Ensemble Learning Method for Hidden Markov Models

    DTIC Science & Technology

    2014-12-01

    Ensemble HMM landmine detector: mine signatures vary according to the mine type, mine size, and burial depth. Similarly, clutter signatures vary with soil ... We propose using and optimizing various training approaches for the different K groups depending on their size and homogeneity. In particular, we investigate the maximum likelihood (ML), the minimum ...

  7. An Investigation of the Sample Performance of Two Nonnormality Corrections for RMSEA

    ERIC Educational Resources Information Center

    Brosseau-Liard, Patricia E.; Savalei, Victoria; Li, Libo

    2012-01-01

    The root mean square error of approximation (RMSEA) is a popular fit index in structural equation modeling (SEM). Typically, RMSEA is computed using the normal theory maximum likelihood (ML) fit function. Under nonnormality, the uncorrected sample estimate of the ML RMSEA tends to be inflated. Two robust corrections to the sample ML RMSEA have…

  8. An Algorithm for Efficient Maximum Likelihood Estimation and Confidence Interval Determination in Nonlinear Estimation Problems

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick Charles

    1985-01-01

    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.

  9. Quantifying the Strength of General Factors in Psychopathology: A Comparison of CFA with Maximum Likelihood Estimation, BSEM, and ESEM/EFA Bifactor Approaches.

    PubMed

    Murray, Aja Louise; Booth, Tom; Eisner, Manuel; Obsuth, Ingrid; Ribeaud, Denis

    2018-05-22

    Whether or not importance should be placed on an all-encompassing general factor of psychopathology (or p factor) in classifying, researching, diagnosing, and treating psychiatric disorders depends (among other issues) on the extent to which comorbidity is symptom-general rather than staying largely within the confines of narrower transdiagnostic factors such as internalizing and externalizing. In this study, we compared three methods of estimating p factor strength. We compared omega hierarchical and explained common variance calculated from confirmatory factor analysis (CFA) bifactor models with maximum likelihood (ML) estimation, from exploratory structural equation modeling/exploratory factor analysis models with a bifactor rotation, and from Bayesian structural equation modeling (BSEM) bifactor models. Our simulation results suggested that BSEM with small variance priors on secondary loadings might be the preferred option. However, CFA with ML also performed well provided secondary loadings were modeled. We provide two empirical examples of applying the three methodologies using a normative sample of youth (z-proso, n = 1,286) and a university counseling sample (n = 359).
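
    Both strength indices have closed forms once bifactor loadings are in hand. A small sketch under the standard assumptions of orthogonal, unit-variance factors (the example loadings are invented):

    ```python
    import numpy as np

    def omega_h_and_ecv(general, specifics):
        # Omega hierarchical and explained common variance from bifactor
        # loadings, assuming orthogonal factors with unit variance.
        g = np.asarray(general, dtype=float)
        spec = [np.asarray(s, dtype=float) for s in specifics]
        uniq = 1.0 - g ** 2 - sum(s ** 2 for s in spec)      # uniquenesses
        total_var = g.sum() ** 2 + sum(s.sum() ** 2 for s in spec) + uniq.sum()
        omega_h = g.sum() ** 2 / total_var
        common = (g ** 2).sum() + sum((s ** 2).sum() for s in spec)
        ecv = (g ** 2).sum() / common
        return omega_h, ecv

    # Usage: 6 items, one general factor, two group factors of 3 items each.
    g = [0.6, 0.5, 0.7, 0.6, 0.5, 0.6]
    s1 = [0.4, 0.3, 0.4, 0.0, 0.0, 0.0]
    s2 = [0.0, 0.0, 0.0, 0.3, 0.4, 0.3]
    print(omega_h_and_ecv(g, [s1, s2]))
    ```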

  10. MEGA5: Molecular Evolutionary Genetics Analysis Using Maximum Likelihood, Evolutionary Distance, and Maximum Parsimony Methods

    PubMed Central

    Tamura, Koichiro; Peterson, Daniel; Peterson, Nicholas; Stecher, Glen; Nei, Masatoshi; Kumar, Sudhir

    2011-01-01

    Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), which is a user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven to make it easier for the use of both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net. PMID:21546353

  11. Evaluation of dynamic row-action maximum likelihood algorithm reconstruction for quantitative 15O brain PET.

    PubMed

    Ibaraki, Masanobu; Sato, Kaoru; Mizuta, Tetsuro; Kitamura, Keishi; Miura, Shuichi; Sugawara, Shigeki; Shinohara, Yuki; Kinoshita, Toshibumi

    2009-09-01

    A modified version of the row-action maximum likelihood algorithm (RAMLA) using a 'subset-dependent' relaxation parameter for noise suppression, or dynamic RAMLA (DRAMA), has been proposed. The aim of this study was to assess the capability of DRAMA reconstruction for quantitative (15)O brain positron emission tomography (PET). Seventeen healthy volunteers were studied using a 3D PET scanner. The PET study included 3 sequential PET scans for C(15)O, (15)O(2) and H(2)(15)O. First, the number of main iterations (N(it)) in DRAMA was optimized in relation to image convergence and statistical image noise. To estimate the statistical variance of reconstructed images on a pixel-by-pixel basis, a sinogram bootstrap method was applied using list-mode PET data. Once the optimal N(it) was determined, statistical image noise and quantitative parameters, i.e., cerebral blood flow (CBF), cerebral blood volume (CBV), cerebral metabolic rate of oxygen (CMRO(2)) and oxygen extraction fraction (OEF), were compared between DRAMA and conventional FBP. DRAMA images were post-filtered so that their spatial resolutions were matched with FBP images with a 6-mm FWHM Gaussian filter. Based on the count recovery data, N(it) = 3 was determined as an optimal parameter for (15)O PET data. The sinogram bootstrap analysis revealed that DRAMA reconstruction resulted in less statistical noise, especially in low-activity regions, compared to FBP. Agreement of quantitative values between FBP and DRAMA was excellent. For DRAMA images, average gray matter values of CBF, CBV, CMRO(2) and OEF were 46.1 +/- 4.5 (mL/100 mL/min), 3.35 +/- 0.40 (mL/100 mL), 3.42 +/- 0.35 (mL/100 mL/min) and 42.1 +/- 3.8 (%), respectively. These values were comparable to the corresponding values from FBP images: 46.6 +/- 4.6 (mL/100 mL/min), 3.34 +/- 0.39 (mL/100 mL), 3.48 +/- 0.34 (mL/100 mL/min) and 42.4 +/- 3.8 (%), respectively. DRAMA reconstruction is applicable to quantitative (15)O PET studies and is superior to conventional FBP in terms of image quality.

  12. An artifact caused by undersampling optimal trees in supermatrix analyses of locally sampled characters.

    PubMed

    Simmons, Mark P; Goloboff, Pablo A

    2013-10-01

    Empirical and simulated examples are used to demonstrate an artifact caused by undersampling optimal trees in data matrices that consist mostly or entirely of locally sampled (as opposed to globally, for most or all terminals) characters. The artifact is that unsupported clades consisting entirely of terminals scored for the same locally sampled partition may be resolved and assigned high resampling support-despite their being properly unsupported (i.e., not resolved in the strict consensus of all optimal trees). This artifact occurs despite application of random-addition sequences for stepwise terminal addition. The artifact is not necessarily obviated with thorough conventional branch swapping methods (even tree-bisection-reconnection) when just a single tree is held, as is sometimes implemented in parsimony bootstrap pseudoreplicates, and in every GARLI, PhyML, and RAxML pseudoreplicate and search for the most likely tree for the matrix as a whole. Hence GARLI, RAxML, and PhyML-based likelihood results require extra scrutiny, particularly when they provide high resolution and support for clades that are entirely unsupported by methods that perform more thorough searches, as in most parsimony analyses. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. A comparison of minimum distance and maximum likelihood techniques for proportion estimation

    NASA Technical Reports Server (NTRS)

    Woodward, W. A.; Schucany, W. R.; Lindsey, H.; Gray, H. L.

    1982-01-01

    The estimation of mixing proportions p_1, p_2, ..., p_m in the mixture density f(x) = Σ_{i=1}^{m} p_i f_i(x) is often encountered in agricultural remote sensing problems, in which case the p_i's usually represent crop proportions. In these remote sensing applications, the component densities f_i(x) have typically been assumed to be normally distributed, and parameter estimation has been accomplished using maximum likelihood (ML) techniques. Minimum distance (MD) estimation is examined as an alternative to ML where, in this investigation, both procedures are based upon normal components. Results indicate that ML techniques are superior to MD when component distributions actually are normal, while MD estimation provides better estimates than ML under symmetric departures from normality. When component distributions are not symmetric, however, it is seen that neither of these normal-based techniques provides satisfactory results.
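
    When the component densities are known, the ML solution for the proportions is conveniently found with EM. A minimal sketch for normal components (a textbook version, not the paper's code; the toy mixture is invented):

    ```python
    import numpy as np
    from scipy.stats import norm

    def em_mixing_proportions(x, means, sds, n_iter=200):
        # EM for the p_i of f(x) = sum_i p_i f_i(x) with *known* normal
        # components, as in the crop-proportion setting of the abstract.
        dens = np.column_stack([norm.pdf(x, m, s) for m, s in zip(means, sds)])
        p = np.full(dens.shape[1], 1.0 / dens.shape[1])
        for _ in range(n_iter):
            resp = dens * p                          # E-step: posterior weights
            resp /= resp.sum(axis=1, keepdims=True)
            p = resp.mean(axis=0)                    # M-step: new proportions
        return p

    # Usage: a 70/30 mixture of N(0, 1) and N(3, 1).
    rng = np.random.default_rng(9)
    x = np.concatenate([rng.normal(0, 1, 700), rng.normal(3, 1, 300)])
    print(em_mixing_proportions(x, means=[0, 3], sds=[1, 1]))  # near [0.7, 0.3]
    ```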

  14. On the uncertainty in single molecule fluorescent lifetime and energy emission measurements

    NASA Technical Reports Server (NTRS)

    Brown, Emery N.; Zhang, Zhenhua; Mccollom, Alex D.

    1995-01-01

    Time-correlated single photon counting has recently been combined with mode-locked picosecond pulsed excitation to measure the fluorescent lifetimes and energy emissions of single molecules in a flow stream. Maximum likelihood (ML) and least squares methods agree and are optimal when the number of detected photons is large; however, in single molecule fluorescence experiments the number of detected photons can be less than 20, 67% of those can be noise, and the detection time is restricted to 10 nanoseconds. Under the assumption that the photon signal and background noise are two independent inhomogeneous Poisson processes, we derive the exact joint arrival time probability density of the photons collected in a single counting experiment performed in the presence of background noise. The model obviates the need to bin experimental data for analysis, and makes it possible to analyze formally the effect of background noise on the photon detection experiment using both ML or Bayesian methods. For both methods we derive the joint and marginal probability densities of the fluorescent lifetime and fluorescent emission. The ML and Bayesian methods are compared in an analysis of simulated single molecule fluorescence experiments of Rhodamine 110 using different combinations of expected background noise and expected fluorescence emission. While both the ML or Bayesian procedures perform well for analyzing fluorescence emissions, the Bayesian methods provide more realistic measures of uncertainty in the fluorescent lifetimes. The Bayesian methods would be especially useful for measuring uncertainty in fluorescent lifetime estimates in current single molecule flow stream experiments where the expected fluorescence emission is low. Both the ML and Bayesian algorithms can be automated for applications in molecular biology.

  15. On the Uncertainty in Single Molecule Fluorescent Lifetime and Energy Emission Measurements

    NASA Technical Reports Server (NTRS)

    Brown, Emery N.; Zhang, Zhenhua; McCollom, Alex D.

    1996-01-01

    Time-correlated single photon counting has recently been combined with mode-locked picosecond pulsed excitation to measure the fluorescent lifetimes and energy emissions of single molecules in a flow stream. Maximum likelihood (ML) and least squares methods agree and are optimal when the number of detected photons is large, however, in single molecule fluorescence experiments the number of detected photons can be less than 20, 67 percent of those can be noise, and the detection time is restricted to 10 nanoseconds. Under the assumption that the photon signal and background noise are two independent inhomogeneous Poisson processes, we derive the exact joint arrival time probability density of the photons collected in a single counting experiment performed in the presence of background noise. The model obviates the need to bin experimental data for analysis, and makes it possible to analyze formally the effect of background noise on the photon detection experiment using both ML or Bayesian methods. For both methods we derive the joint and marginal probability densities of the fluorescent lifetime and fluorescent emission. The ML and Bayesian methods are compared in an analysis of simulated single molecule fluorescence experiments of Rhodamine 110 using different combinations of expected background noise and expected fluorescence emission. While both the ML or Bayesian procedures perform well for analyzing fluorescence emissions, the Bayesian methods provide more realistic measures of uncertainty in the fluorescent lifetimes. The Bayesian methods would be especially useful for measuring uncertainty in fluorescent lifetime estimates in current single molecule flow stream experiments where the expected fluorescence emission is low. Both the ML and Bayesian algorithms can be automated for applications in molecular biology.
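
    To illustrate the small-count timing problem these two records describe, here is a toy ML lifetime fit under simplifying assumptions of my own (a constant decay rate, a known uniform-background fraction, and a hard 10 ns detection window):

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def lifetime_mle(t, T, bg_frac):
        # ML lifetime from arrival times t in a window [0, T], mixing a
        # truncated exponential (signal) with a uniform background.
        def negloglik(tau):
            sig = np.exp(-t / tau) / (tau * (1.0 - np.exp(-T / tau)))
            f = bg_frac / T + (1.0 - bg_frac) * sig
            return -np.log(f).sum()
        return minimize_scalar(negloglik, bounds=(0.05, 20.0),
                               method="bounded").x

    # Usage: 20 photons, 30% background, 10 ns window, true lifetime 4 ns.
    rng = np.random.default_rng(10)
    T, tau = 10.0, 4.0
    u = rng.uniform(size=14)
    sig = -tau * np.log(1.0 - u * (1.0 - np.exp(-T / tau)))  # truncated exp.
    t = np.concatenate([sig, rng.uniform(0.0, T, 6)])
    print(lifetime_mle(t, T, bg_frac=6 / 20))  # rough, given only 20 photons
    ```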

  16. A PET reconstruction formulation that enforces non-negativity in projection space for bias reduction in Y-90 imaging

    NASA Astrophysics Data System (ADS)

    Lim, Hongki; Dewaraja, Yuni K.; Fessler, Jeffrey A.

    2018-02-01

    Most existing PET image reconstruction methods impose a nonnegativity constraint in the image domain that is natural physically, but can lead to biased reconstructions. This bias is particularly problematic for Y-90 PET because of the low probability of positron production and the high random coincidence fraction. This paper investigates a new PET reconstruction formulation that enforces nonnegativity of the projections instead of the voxel values. This formulation allows some negative voxel values, thereby potentially reducing bias. Unlike the previously reported NEG-ML approach that modifies the Poisson log-likelihood to allow negative values, the new formulation retains the classical Poisson statistical model. To relax the non-negativity constraint embedded in the standard methods for PET reconstruction, we used an alternating direction method of multipliers (ADMM). Because the choice of ADMM parameters can greatly influence the convergence rate, we applied an automatic parameter selection method to improve the convergence speed. We investigated the methods using lung-to-liver slices of the XCAT phantom. We simulated low true coincidence count rates with high random fractions, corresponding to typical values from patient imaging in Y-90 microsphere radioembolization. We compared our new method with standard reconstruction algorithms, NEG-ML, and a regularized version thereof. Both our new method and NEG-ML allow more accurate quantification in all volumes of interest while yielding lower noise than the standard method. The performance of NEG-ML can degrade when its user-defined parameter is tuned poorly, while the proposed algorithm is robust to any count level without requiring parameter tuning.

  17. A New Online Calibration Method Based on Lord's Bias-Correction.

    PubMed

    He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei

    2017-09-01

    Online calibration technique has been widely employed to calibrate new items due to its advantages. Method A is the simplest online calibration method and has attracted much attention from researchers recently. However, a key assumption of Method A is that it treats person-parameter estimates θ̂ (obtained by maximum likelihood estimation [MLE]) as their true values θ; thus, the deviation of the estimated θ̂ from the true values might yield inaccurate item calibration when the deviation is nonignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (named maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂ which may adversely affect the item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates, and MLE-LBCI-Method A did outperform Method A in almost all experimental conditions.

  18. Assessment of phylogenetic sensitivity for reconstructing HIV-1 epidemiological relationships.

    PubMed

    Beloukas, Apostolos; Magiorkinis, Emmanouil; Magiorkinis, Gkikas; Zavitsanou, Asimina; Karamitros, Timokratis; Hatzakis, Angelos; Paraskevis, Dimitrios

    2012-06-01

    Phylogenetic analysis has been extensively used as a tool for the reconstruction of epidemiological relations, for research or for forensic purposes. Our objective was to assess the sensitivity of different phylogenetic methods and various phylogenetic programs for reconstructing epidemiological links among HIV-1-infected patients, that is, the probability of revealing a true transmission relationship. Multiple datasets (90) were prepared consisting of HIV-1 sequences in protease (PR) and partial reverse transcriptase (RT) sampled from patients with a documented epidemiological relationship (target population) and from unrelated individuals (control population) belonging to the same HIV-1 subtype as the target population. Each dataset varied regarding the number, the geographic origin and the transmission risk groups of the sequences among the control population. Phylogenetic trees were inferred by neighbor-joining (NJ), maximum likelihood heuristics (hML) and Bayesian methods. All clusters of sequences belonging to the target population were correctly reconstructed by NJ and Bayesian methods, receiving high bootstrap and posterior probability (PP) support, respectively. On the other hand, TreePuzzle failed to reconstruct or provide significant support for several clusters; high puzzling step support was associated with the inclusion of control sequences from the same geographic area as the target population. In contrast, all clusters were correctly reconstructed by hML as implemented in PhyML 3.0, receiving high bootstrap support. We report that, under the conditions of our study, hML using PhyML, NJ and Bayesian methods were the most sensitive for the reconstruction of epidemiological links, mostly from sexually infected individuals. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. Measurement and Structural Model Class Separation in Mixture CFA: ML/EM versus MCMC

    ERIC Educational Resources Information Center

    Depaoli, Sarah

    2012-01-01

    Parameter recovery was assessed within mixture confirmatory factor analysis across multiple estimator conditions under different simulated levels of mixture class separation. Mixture class separation was defined in the measurement model (through factor loadings) and the structural model (through factor variances). Maximum likelihood (ML) via the…

  20. Empirical Correction to the Likelihood Ratio Statistic for Structural Equation Modeling with Many Variables.

    PubMed

    Yuan, Ke-Hai; Tian, Yubin; Yanagihara, Hirokazu

    2015-06-01

    Survey data typically contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. The most widely used statistic for evaluating the adequacy of an SEM model is T_ML, a slight modification of the likelihood ratio statistic. Under the normality assumption, T_ML approximately follows a chi-square distribution when the number of observations (N) is large and the number of items or variables (p) is small. However, in practice, p can be rather large while N is often limited by the number of available participants. Even with a relatively large N, empirical results show that T_ML rejects the correct model too often when p is not small. Various corrections to T_ML have been proposed, but they are mostly heuristic. Following the principle of the Bartlett correction, this paper proposes an empirical approach to correcting T_ML so that the mean of the resulting statistic approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that the empirically corrected statistics follow the nominal chi-square distribution much more closely than previously proposed corrections to T_ML, and they control type I errors reasonably well whenever N ≥ max(50, 2p). The formulations of the empirically corrected statistics are further used to predict the type I errors of T_ML reported in the literature, and they perform well.
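
    The Bartlett-correction principle rescales a likelihood ratio statistic so its mean matches the nominal degrees of freedom. The sketch below applies the empirical version of that idea to a simple one-parameter exponential-rate LR test, which is only an illustrative stand-in for the SEM statistic T_ML; the sample size, replication count, and the test itself are toy assumptions.

        import numpy as np

        rng = np.random.default_rng(2)

        def lr_stat(x, lam0):
            """LR statistic for H0: rate = lam0 in an exponential sample.
            T = 2n(lam0*xbar - 1 - log(lam0*xbar)), asymptotically chi2(1)."""
            n, xbar = len(x), x.mean()
            return 2 * n * (lam0 * xbar - 1 - np.log(lam0 * xbar))

        n, lam0, df, reps = 10, 1.0, 1, 20000
        T = np.array([lr_stat(rng.exponential(1 / lam0, n), lam0)
                      for _ in range(reps)])

        c = df / T.mean()       # empirical Bartlett-type scaling factor
        crit = 3.841            # chi2(1) critical value at the 5% level
        print("raw rejection rate:      ", np.mean(T > crit))
        print("corrected rejection rate:", np.mean(c * T > crit))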

  1. Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth

    ERIC Educational Resources Information Center

    Jeon, Minjeong

    2012-01-01

    Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…

  2. Cramér-Rao Bound, MUSIC, and Maximum Likelihood: Effects of Temporal Phase Difference

    DTIC Science & Technology

    1990-11-01

    Technical Report 1373, November 1990. This report compares the Cramér-Rao bound with the asymptotic variances of MUSIC and maximum likelihood (ML) estimators for two-source direction-of-arrival estimation, examining the effects of the temporal phase difference between the sources. Illustrative cases include MUSIC applied to two equipowered signals impinging on a 5-element uniform linear array at |p| = 0.50 and |p| = 1.00 with SNR = 20 dB.

  3. Estimating the arrival times of photon-limited laser pulses in the presence of shot and speckle noise

    NASA Technical Reports Server (NTRS)

    Abshire, James B.; Mcgarry, Jan F.

    1987-01-01

    Maximum-likelihood (ML) receivers are frequently used to optimize the timing performance of laser-ranging and laser-altimetry systems in the presence of shot and speckle noise. A Monte Carlo method was used to examine ML-receiver performance with return signals in the 10-5000-photoelectron (pe) range. The simulations were performed for shot noise only and for combined shot and speckle noise. The results agree with previous theory for signal strengths greater than about 100 pe but show that the theory can significantly underestimate timing errors for weaker received signals. Sharp high-bandwidth features in the detected signals are shown to improve timing performance only if their signal levels are greater than 4-5 pe.

  4. Maximum Likelihood Analysis of a Two-Level Nonlinear Structural Equation Model with Fixed Covariates

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Song, Xin-Yuan

    2005-01-01

    In this article, a maximum likelihood (ML) approach for analyzing a rather general two-level structural equation model is developed for hierarchically structured data that are very common in educational and/or behavioral research. The proposed two-level model can accommodate nonlinear causal relations among latent variables as well as effects…

  5. Scaled test statistics and robust standard errors for non-normal data in covariance structure analysis: a Monte Carlo study.

    PubMed

    Chou, C P; Bentler, P M; Satorra, A

    1991-11-01

    Research on the robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF), which makes no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions, and also compares these with the results from the ML and ADF methods. Both the ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic behaved better than the ML test statistic, and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to violation of the normality assumption when the data had either symmetric platykurtic distributions or non-symmetric zero-kurtosis distributions.

  6. Streptomyces kronopolitis sp. nov., an actinomycete that produces phoslactomycins isolated from a millipede (Kronopolites svenhedind Verhoeff).

    PubMed

    Liu, Chongxi; Ye, Lan; Li, Yao; Jiang, Shanwen; Liu, Hui; Yan, Kai; Xiang, Wensheng; Wang, Xiangjing

    2016-12-01

    A phoslactomycin-producing actinomycete, designated strain NEAU-ML8T, was isolated from a millipede (Kronopolites svenhedind Verhoeff) and characterized using a polyphasic approach. 16S rRNA gene sequence analysis showed that strain NEAU-ML8T belongs to the genus Streptomyces, with the highest sequence similarities to Streptomyces lydicus NBRC 13058T (99.39 %) and Streptomyces chattanoogensis DSM 40002T (99.25 %). The maximum-likelihood phylogenetic tree based on 16S rRNA gene sequences showed that the isolate formed a distinct phyletic line with S. lydicus NBRC 13058T and S. chattanoogensis DSM 40002T. This branching pattern was also supported by the tree reconstructed with the neighbour-joining method. A combination of DNA-DNA hybridization experiments and phenotypic tests was carried out between strain NEAU-ML8T and its phylogenetically closely related strains, which further clarified their relatedness and demonstrated that NEAU-ML8T could be distinguished from S. lydicus NBRC 13058T and S. chattanoogensis DSM 40002T. Therefore, it is concluded that strain NEAU-ML8T represents a novel species of the genus Streptomyces, for which the name Streptomyces kronopolitis sp. nov. is proposed. The type strain is NEAU-ML8T (=DSM 101986T=CGMCC 4.7323T).

  7. Impact of D-Dimer for Prediction of Incident Occult Cancer in Patients with Unprovoked Venous Thromboembolism

    PubMed Central

    Han, Donghee; ó Hartaigh, Bríain; Lee, Ji Hyun; Cho, In-Jeong; Shim, Chi Young; Chang, Hyuk-Jae; Hong, Geu-Ru; Ha, Jong-Won; Chung, Namsik

    2016-01-01

    Background Unprovoked venous thromboembolism (VTE) is related to a higher incidence of occult cancer. D-dimer is used clinically for screening for VTE and is often elevated in patients with malignancy. We explored the predictive value of D-dimer for detecting occult cancer in patients with unprovoked VTE. Methods We retrospectively examined data from 824 patients diagnosed with deep vein thrombosis or pulmonary thromboembolism. Of these, 169 (20.5%) patients diagnosed with unprovoked VTE were selected for this study. D-dimer was categorized into three groups: <2,000, 2,000–4,000, and >4,000 ng/ml. Cox regression analysis was employed to estimate the risk of occult cancer and metastatic state of cancer according to the D-dimer categories. Results During a median 5.3 (interquartile range: 3.4–6.7) years of follow-up, 24 (14%) patients with unprovoked VTE were diagnosed with cancer. Of these patients, 16 (67%) were diagnosed with metastatic cancer. Log-transformed D-dimer levels were significantly higher in those with occult cancer than in patients without a diagnosis of occult cancer (3.5±0.5 vs. 3.2±0.5, P-value = 0.009). D-dimer levels >4,000 ng/ml were independently associated with occult cancer (HR: 4.12, 95% CI: 1.54–11.04, P-value = 0.005) when compared with D-dimer levels <2,000 ng/ml, even after adjusting for age, gender, and type of VTE (e.g., deep vein thrombosis or pulmonary thromboembolism). D-dimer levels >4,000 ng/ml were also associated with a higher likelihood of metastatic cancer (HR: 9.55, 95% CI: 2.46–37.17, P-value <0.001). Conclusion Elevated D-dimer concentrations >4,000 ng/ml are independently associated with the likelihood of occult cancer among patients with unprovoked VTE. PMID:27073982

  8. An at-site flood estimation method in the context of nonstationarity I. A simulation study

    NASA Astrophysics Data System (ADS)

    Gado, Tamer A.; Nguyen, Van-Thanh-Van

    2016-04-01

    The stationarity of annual flood peak records is the traditional assumption of flood frequency analysis. In some cases, however, as a result of land-use and/or climate change, this assumption is no longer valid. Therefore, new statistical models are needed to capture dynamically the change of probability density functions over time, in order to obtain reliable flood estimates. In this study, an innovative method for nonstationary flood frequency analysis is presented. The new method is based on detrending the flood series and applying the L-moments approach along with the GEV distribution to the transformed "stationary" series (hereafter called LM-NS). The LM-NS method was assessed through a comparative study with the maximum likelihood (ML) method for the nonstationary GEV model, as well as with the stationary (S) GEV model. The comparative study, based on Monte Carlo simulations, was carried out for three nonstationary GEV models: a linear dependence of the mean on time (GEV1), a quadratic dependence of the mean on time (GEV2), and a linear dependence of both the mean and the log standard deviation on time (GEV11). The simulation results indicated that the LM-NS method performs better than the ML method for most of the cases studied, whereas the stationary method provides the least accurate results. An additional advantage of the LM-NS method is that it avoids the numerical problems (e.g., convergence problems) that may occur with the ML method when estimating parameters from small data samples.
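
    A minimal sketch of the detrend-then-fit idea: remove a least-squares linear trend from the annual peaks, then fit a GEV to the detrended series with the textbook L-moment (Hosking) approximations. The synthetic trending Gumbel data and the simple slope-only detrending are illustrative assumptions, not the paper's procedure.

        import numpy as np
        from math import gamma, log

        def sample_lmoments(x):
            """First three sample L-moments via probability-weighted moments."""
            x = np.sort(x)
            n = len(x)
            i = np.arange(1, n + 1)
            b0 = x.mean()
            b1 = np.sum((i - 1) / (n - 1) * x) / n
            b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
            l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
            return l1, l2, l3 / l2            # lambda1, lambda2, tau3

        def gev_lmom_fit(x):
            """Hosking's approximate L-moment estimators for the GEV."""
            l1, l2, t3 = sample_lmoments(x)
            c = 2 / (3 + t3) - log(2) / log(3)
            k = 7.8590 * c + 2.9554 * c**2                    # shape
            sigma = l2 * k / ((1 - 2**(-k)) * gamma(1 + k))   # scale
            mu = l1 - sigma * (1 - gamma(1 + k)) / k          # location
            return mu, sigma, k

        rng = np.random.default_rng(3)
        t = np.arange(60)
        peaks = 100 + 0.8 * t + rng.gumbel(0, 15, 60)   # trending toy floods

        slope = np.polyfit(t, peaks, 1)[0]
        detrended = peaks - slope * t                   # remove linear trend
        print(gev_lmom_fit(detrended))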

  9. MultiPhyl: a high-throughput phylogenomics webserver using distributed computing

    PubMed Central

    Keane, Thomas M.; Naughton, Thomas J.; McInerney, James O.

    2007-01-01

    With the number of fully sequenced genomes increasing steadily, there is greater interest in performing large-scale phylogenomic analyses from large numbers of individual gene families. Maximum likelihood (ML) has repeatedly been shown to be one of the most accurate methods for phylogenetic reconstruction. Recently, there have been a number of algorithmic improvements in maximum-likelihood-based tree search methods. However, it can still take a long time to analyse the evolutionary history of many gene families using a single computer. Distributed computing refers to a method of combining the computing power of multiple computers in order to perform some larger overall calculation. In this article, we present the first high-throughput implementation of a distributed phylogenetics platform, MultiPhyl, capable of using the idle computational resources of many heterogeneous non-dedicated machines to form a phylogenetics supercomputer. MultiPhyl allows a user to upload hundreds or thousands of amino acid or nucleotide alignments simultaneously and perform computationally intensive tasks such as model selection, tree searching and bootstrapping of each of the alignments using many desktop machines. The program implements a set of 88 amino acid and 56 nucleotide maximum likelihood models, together with a variety of statistical methods for choosing between alternative models. A MultiPhyl webserver is available for public use at: http://www.cs.nuim.ie/distributed/multiphyl.php. PMID:17553837

  10. A tree island approach to inferring phylogeny in the ant subfamily Formicinae, with especial reference to the evolution of weaving.

    PubMed

    Johnson, Rebecca N; Agapow, Paul-Michael; Crozier, Ross H

    2003-11-01

    The ant subfamily Formicinae is a large assemblage (2458 species (J. Nat. Hist. 29 (1995) 1037)), including species that weave leaf nests together with larval silk and species in which the metapleural gland, the ancestrally defining ant character, has been secondarily lost. We used sequences from two mitochondrial genes (cytochrome b and cytochrome oxidase 2) from 18 formicine and 4 outgroup taxa to derive a robust phylogeny, employing a search for tree islands using 10,000 randomly constructed trees as starting points and deriving a maximum likelihood consensus tree from the ML tree and those not significantly different from it. Non-parametric bootstrapping showed that the ML consensus tree fit the data significantly better than three scenarios based on morphology, with that of Bolton (Identification Guide to the Ant Genera of the World, Harvard University Press, Cambridge, MA) being the best among these alternative trees. Trait mapping showed that weaving had arisen at least four times and possibly been lost once. A maximum likelihood analysis showed that loss of the metapleural gland is significantly associated with the weaver life-pattern. The graph of the frequencies with which trees were discovered versus their likelihood indicates that trees with high likelihoods have much larger basins of attraction than those with lower likelihoods. While this result indicates that single searches are more likely to find high- than low-likelihood tree islands, it also indicates that searching only for the single best tree may lose important information.

  11. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  12. Parameter Estimation of Multiple Frequency-Hopping Signals with Two Sensors

    PubMed Central

    Pan, Jin; Ma, Boyuan

    2018-01-01

    This paper focuses on parameter estimation for multiple wideband emitting sources with time-varying frequencies, such as two-dimensional (2-D) direction of arrival (DOA) estimation and signal sorting, with a low-cost circular synthetic array (CSA) consisting of only two rotating sensors. Our basic idea is to decompose the received data, which is a superposition of phase measurements from multiple sources, into separate groups and estimate the DOA associated with each source separately. Motivated by joint parameter estimation, we adopt the expectation maximization (EM) algorithm; our method involves two steps, namely the expectation step (E-step) and the maximization step (M-step). In the E-step, the correspondence of each signal with its emitting source is found. Then, in the M-step, the maximum-likelihood (ML) estimates of the DOA parameters are obtained. These two steps are iteratively and alternately executed to jointly determine the DOAs and sort the multiple signals. Closed-form DOA estimation formulae are developed by ML estimation based on phase data, realizing optimal estimation. Directional ambiguity is also addressed by another ML estimation method based on received complex responses. The Cramér-Rao lower bound is derived for understanding the estimation accuracy and for performance comparison. The proposed method is verified with simulations. PMID:29617323
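
    As a schematic analogue of the E-step/M-step alternation described above (not the authors' phase-based DOA estimator), the sketch below sorts one-dimensional measurements from two sources with a Gaussian-mixture EM: the E-step assigns each measurement a responsibility for each source, and the M-step computes ML updates of the source parameters. All data and initial values are toy assumptions.

        import numpy as np

        rng = np.random.default_rng(4)

        # Toy data: measurements from two sources with unknown means.
        x = np.concatenate([rng.normal(-2.0, 0.5, 300),
                            rng.normal(1.5, 0.5, 200)])

        mu = np.array([-1.0, 1.0])      # initial source parameters
        w = np.array([0.5, 0.5])        # mixing weights
        sigma = 0.5                     # assumed known, for simplicity

        for _ in range(50):
            # E-step: responsibility of each source for each measurement
            dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
            r = dens / dens.sum(axis=1, keepdims=True)
            # M-step: ML updates of the source parameters
            nk = r.sum(axis=0)
            mu = (r * x[:, None]).sum(axis=0) / nk
            w = nk / len(x)

        print("estimated means:", mu, "weights:", w)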

  13. Direct Position Determination of Unknown Signals in the Presence of Multipath Propagation

    PubMed Central

    Yu, Hongyi

    2018-01-01

    A novel geolocation architecture, termed “Multiple Transponders and Multiple Receivers for Multiple Emitters Positioning System (MTRE)”, is proposed in this paper. Existing Direct Position Determination (DPD) methods take advantage of a rather simple channel assumption (line-of-sight channels with complex path attenuations) and a simplified MUltiple SIgnal Classification (MUSIC) algorithm cost function to avoid high-dimensional searching. We point out that the simplified assumption and cost function reduce the positioning accuracy because of the singularity of the array manifold in a multi-path environment. We present a DPD model for unknown signals in the presence of Multi-path Propagation (MP-DPD) in this paper. MP-DPD adds non-negative real path attenuation constraints to avoid the errors caused by the singularity of the array manifold. The Multi-path Propagation MUSIC (MP-MUSIC) method and the Active Set Algorithm (ASA) are designed to reduce the dimension of the search. A Multi-path Propagation Maximum Likelihood (MP-ML) method is proposed in addition, to overcome the limitation of MP-MUSIC in time-sensitive applications. An iterative algorithm and an initial-value-setting approach are given to make the MP-ML computation time acceptable. Numerical results validate the performance improvements of MP-MUSIC and MP-ML. A closed form of the Cramér–Rao Lower Bound (CRLB) is derived as a benchmark to evaluate the performance of MP-MUSIC and MP-ML. PMID:29562601

  14. Direct Position Determination of Unknown Signals in the Presence of Multipath Propagation.

    PubMed

    Du, Jianping; Wang, Ding; Yu, Wanting; Yu, Hongyi

    2018-03-17

    A novel geolocation architecture, termed "Multiple Transponders and Multiple Receivers for Multiple Emitters Positioning System (MTRE)", is proposed in this paper. Existing Direct Position Determination (DPD) methods take advantage of a rather simple channel assumption (line-of-sight channels with complex path attenuations) and a simplified MUltiple SIgnal Classification (MUSIC) algorithm cost function to avoid high-dimensional searching. We point out that the simplified assumption and cost function reduce the positioning accuracy because of the singularity of the array manifold in a multi-path environment. We present a DPD model for unknown signals in the presence of Multi-path Propagation (MP-DPD) in this paper. MP-DPD adds non-negative real path attenuation constraints to avoid the errors caused by the singularity of the array manifold. The Multi-path Propagation MUSIC (MP-MUSIC) method and the Active Set Algorithm (ASA) are designed to reduce the dimension of the search. A Multi-path Propagation Maximum Likelihood (MP-ML) method is proposed in addition, to overcome the limitation of MP-MUSIC in time-sensitive applications. An iterative algorithm and an initial-value-setting approach are given to make the MP-ML computation time acceptable. Numerical results validate the performance improvements of MP-MUSIC and MP-ML. A closed form of the Cramér-Rao Lower Bound (CRLB) is derived as a benchmark to evaluate the performance of MP-MUSIC and MP-ML.

  15. Phylogeny of the cycads based on multiple single-copy nuclear genes: congruence of concatenated parsimony, likelihood and species tree inference methods.

    PubMed

    Salas-Leiva, Dayana E; Meerow, Alan W; Calonje, Michael; Griffith, M Patrick; Francisco-Ortega, Javier; Nakamura, Kyoko; Stevenson, Dennis W; Lewis, Carl E; Namoff, Sandra

    2013-11-01

    Despite a recent new classification, a stable phylogeny for the cycads has been elusive, particularly regarding resolution of Bowenia, Stangeria and Dioon. In this study, five single-copy nuclear genes (SCNGs) are applied to the phylogeny of the order Cycadales. The specific aim is to evaluate several gene tree-species tree reconciliation approaches for developing an accurate phylogeny of the order, to contrast them with concatenated parsimony analysis and to resolve the erstwhile problematic phylogenetic position of these three genera. DNA sequences of five SCNGs were obtained for 20 cycad species representing all ten genera of Cycadales. These were analysed with parsimony, maximum likelihood (ML) and three Bayesian methods of gene tree-species tree reconciliation, using Cycas as the outgroup. A calibrated date estimation was developed with Bayesian methods, and biogeographic analysis was also conducted. Concatenated parsimony, ML and three species tree inference methods resolve exactly the same tree topology with high support at most nodes. Dioon and Bowenia are the first and second branches of Cycadales after Cycas, respectively, followed by an encephalartoid clade (Macrozamia-Lepidozamia-Encephalartos), which is sister to a zamioid clade, of which Ceratozamia is the first branch, and in which Stangeria is sister to Microcycas and Zamia. A single, well-supported phylogenetic hypothesis of the generic relationships of the Cycadales is presented. However, massive extinction events inferred from the fossil record that eliminated broader ancestral distributions within Zamiaceae compromise accurate optimization of ancestral biogeographical areas for that hypothesis. While major lineages of Cycadales are ancient, crown ages of all modern genera are no older than 12 million years, supporting a recent hypothesis of mostly Miocene radiations. This phylogeny can contribute to an accurate infrafamilial classification of Zamiaceae.

  16. A Two-Stage Approach to Missing Data: Theory and Application to Auxiliary Variables

    ERIC Educational Resources Information Center

    Savalei, Victoria; Bentler, Peter M.

    2009-01-01

    A well-known ad-hoc approach to conducting structural equation modeling with missing data is to obtain a saturated maximum likelihood (ML) estimate of the population covariance matrix and then to use this estimate in the complete data ML fitting function to obtain parameter estimates. This 2-stage (TS) approach is appealing because it minimizes a…

  17. SEM with Missing Data and Unknown Population Distributions Using Two-Stage ML: Theory and Its Application

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Lu, Laura

    2008-01-01

    This article provides the theory and application of the 2-stage maximum likelihood (ML) procedure for structural equation modeling (SEM) with missing data. The validity of this procedure does not require the assumption of a normally distributed population. When the population is normally distributed and all missing data are missing at random…

  18. Training in cortical control of neuroprosthetic devices improves signal extraction from small neuronal ensembles.

    PubMed

    Helms Tillery, S I; Taylor, D M; Schwartz, A B

    2003-01-01

    We have recently developed a closed-loop environment in which we can test the ability of primates to control the motion of a virtual device using ensembles of simultaneously recorded neurons [29]. Here we use a maximum likelihood method to assess the information about task performance contained in the neuronal ensemble. We trained two animals to control the motion of a computer cursor in three dimensions. Initially the animals controlled cursor motion using arm movements, but eventually they learned to drive the cursor directly from cortical activity. Using a population vector (PV) based upon the relation between cortical activity and arm motion, the animals were able to control the cursor directly from the brain in a closed-loop environment, but with difficulty. We added a supervised learning method that modified the parameters of the PV according to task performance (adaptive PV) and found that the animals were then able to exert much finer control over the cursor motion from brain signals. Here we describe a maximum likelihood (ML) method to assess the information about the target contained in neuronal ensemble activity. Using this method, we compared the information about the target contained in the ensemble during arm control, during brain control early in the adaptive PV, and during brain control after the adaptive PV had settled and the animal could drive the cursor reliably and with fine gradations. During the arm-control task, the ML method was able to determine the target of the movement in as few as 10% of the trials and as many as 75% of the trials, with an average of 65%. This average dropped when the animals used a population vector to control motion of the cursor: on average we could determine the target in around 35% of the trials. This low percentage was also reflected in poor control of the cursor, so that the animal was unable to reach the target in a large percentage of trials. Supervised adjustment of the population vector parameters produced new weighting coefficients and directional tuning parameters for many neurons. This produced much better performance of the brain-controlled cursor motion. It was also reflected in the maximum likelihood measure of cell activity, which produced the correct target based only on neuronal activity in over 80% of the trials on average. The changes in maximum likelihood estimates of target location based on ensemble firing show that an animal's ability to regulate the motion of a cortically controlled device is not crucially dependent on the experimenter's ability to estimate intention from neuronal activity.
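
    A minimal sketch of ML target classification from ensemble spike counts under an assumed independent-Poisson firing model: for each candidate target t, sum n_i log(lambda_it) - lambda_it over cells and pick the argmax. The cosine-like tuning curves, cell counts, and targets are toy assumptions, not the study's recorded data.

        import numpy as np

        rng = np.random.default_rng(5)

        n_cells, n_targets = 40, 8
        # Toy cosine-like tuning: expected spike count of each cell per target.
        angles = 2 * np.pi * np.arange(n_targets) / n_targets
        pref = rng.uniform(0, 2 * np.pi, n_cells)
        rates = 5 + 4 * np.cos(pref[:, None] - angles)   # (cells, targets)

        def ml_target(counts, rates):
            """Poisson ML classification over candidate targets."""
            loglik = counts @ np.log(rates) - rates.sum(axis=0)
            return np.argmax(loglik)

        # Simulate trials to one target and decode.
        true_t = 3
        hits = 0
        for _ in range(1000):
            counts = rng.poisson(rates[:, true_t])
            hits += ml_target(counts, rates) == true_t
        print("fraction of trials decoded correctly:", hits / 1000)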

  19. Maximum likelihood of phylogenetic networks.

    PubMed

    Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir

    2006-11-01

    Horizontal gene transfer (HGT) is believed to be ubiquitous among bacteria and plays a major role in their genome diversification as well as their ability to develop resistance to antibiotics. In light of its evolutionary significance and implications for human health, developing accurate and efficient methods for detecting and reconstructing HGT is imperative. In this article we provide a new HGT-oriented likelihood framework for many problems that involve phylogeny-based HGT detection and reconstruction. Besides the formulation of various likelihood criteria, we show that most of these problems are NP-hard, and offer heuristics for efficient and accurate reconstruction of HGT under these criteria. We implemented our heuristics and used them to analyze biological as well as synthetic data. In both cases, our criteria and heuristics exhibited very good performance with respect to identifying the correct number of HGT events as well as inferring their correct location on the species tree. Implementations of the criteria and heuristics, as well as hardness proofs, are available from the authors upon request. Hardness proofs can also be downloaded at http://www.cs.tau.ac.il/~tamirtul/MLNET/Supp-ML.pdf

  20. Salivary flow rate and periodontal infection - a study among subjects aged 75 years or older.

    PubMed

    Syrjälä, A-M H; Raatikainen, L; Komulainen, K; Knuuttila, M; Ruoppi, P; Hartikainen, S; Sulkava, R; Ylöstalo, P

    2011-05-01

    To analyse the relation of stimulated and unstimulated salivary flow rates to periodontal infection in home-dwelling elderly people aged 75 years or older. This study was based on a subpopulation of 157 (111 women, 46 men) home-dwelling, dentate, non-smoking elderly people (mean age 79.8, SD 3.6 years) from the Geriatric Multidisciplinary Strategy for the Good Care of the Elderly Study. The data were collected by interview and oral clinical examination. Persons with very low (<0.7 ml min⁻¹) and low stimulated salivary flow rates (0.7 to <1.0 ml min⁻¹) had a decreased likelihood of having teeth with deepened (≥4 mm) periodontal pockets, RR: 0.7, CI: 0.5-0.9 and RR: 0.7, CI: 0.5-0.9, respectively, when compared with those with normal stimulated salivary flow. Persons with a very low unstimulated salivary flow rate (<0.1 ml min⁻¹) had a decreased likelihood of having teeth with deepened (≥4 mm) periodontal pockets, RR: 0.8, CI: 0.6-1.0, when compared with subjects with low/normal unstimulated salivary flow. In a population of dentate, home-dwelling non-smokers aged 75 years or older, low stimulated and unstimulated salivary flow rates were weakly associated with a decreased likelihood of having teeth with deep periodontal pockets. © 2010 John Wiley & Sons A/S.

  1. Optimum quantum receiver for detecting weak signals in PAM communication systems

    NASA Astrophysics Data System (ADS)

    Sharma, Navneet; Rawat, Tarun Kumar; Parthasarathy, Harish; Gautam, Kumar

    2017-09-01

    This paper deals with the modeling of an optimum quantum receiver for pulse amplitude modulation (PAM) communication systems. The information-bearing sequence {I_k}_{k=0}^{N-1} is estimated using the maximum likelihood (ML) method. The ML method is based on quantum mechanical measurements of an observable X in the Hilbert space of the quantum system at discrete times, when the Hamiltonian of the system is perturbed by an operator obtained by modulating a potential V with a PAM signal derived from the information-bearing sequence {I_k}_{k=0}^{N-1}. The measurement process at each time instant causes collapse of the system state to an eigenstate of the observable. The probabilities of the different outcomes of the observable are calculated using the perturbed evolution operator combined with the collapse postulate. For the given probability densities, the performance of the receiver is evaluated by calculating the mean square error. Finally, we present an example involving estimation of an information-bearing sequence that modulates a quantum electromagnetic field incident on a quantum harmonic oscillator.

  2. DiscML: an R package for estimating evolutionary rates of discrete characters using maximum likelihood.

    PubMed

    Kim, Tane; Hao, Weilong

    2014-09-27

    The study of discrete characters is crucial for the understanding of evolutionary processes. Even though great advances have been made in the analysis of nucleotide sequences, computer programs for non-DNA discrete characters are often dedicated to specific analyses and lack flexibility. Discrete characters often have different transition rate matrices, variable rates among sites, and sometimes contain unobservable states. To accurately estimate a variety of discrete characters, programs with sophisticated methodologies and flexible settings are desired. DiscML performs maximum likelihood estimation of the evolutionary rates of discrete characters on a provided phylogeny, with options that correct for unobservable data, rate variation, and unknown prior root probabilities from the empirical data. It gives users options to customize the instantaneous transition rate matrices, or to choose pre-determined matrices from models such as birth-and-death (BD), birth-death-and-innovation (BDI), equal rates (ER), symmetric (SYM), general time-reversible (GTR) and all rates different (ARD). Moreover, we show application examples of DiscML on gene family data and on intron presence/absence data. DiscML was developed as a unified R program for estimating the evolutionary rates of discrete characters with no restriction on the number of character states, and with the flexibility to use different transition models. DiscML is ideal for the analysis of binary (1s/0s) patterns, multi-gene families, and multistate discrete morphological characteristics.

  3. On the Performance of Maximum Likelihood versus Means and Variance Adjusted Weighted Least Squares Estimation in CFA

    ERIC Educational Resources Information Center

    Beauducel, Andre; Herzberg, Philipp Yorck

    2006-01-01

    This simulation study compared maximum likelihood (ML) estimation with weighted least squares means and variance adjusted (WLSMV) estimation. The study was based on confirmatory factor analyses with 1, 2, 4, and 8 factors, based on 250, 500, 750, and 1,000 cases, and on 5, 10, 20, and 40 variables with 2, 3, 4, 5, and 6 categories. There was no…

  4. Approximated mutual information training for speech recognition using myoelectric signals.

    PubMed

    Guo, Hua J; Chan, A D C

    2006-01-01

    A new training algorithm called approximated maximum mutual information (AMMI) is proposed to improve the accuracy of myoelectric speech recognition using hidden Markov models (HMMs). Previous studies have demonstrated that automatic speech recognition can be performed using myoelectric signals from articulatory muscles of the face. Classification of facial myoelectric signals can be performed using HMMs that are trained using the maximum likelihood (ML) algorithm; however, this algorithm maximizes the likelihood of the observations in the training sequence, which is not directly associated with optimal classification accuracy. The AMMI training algorithm attempts to maximize the mutual information, thereby training the HMMs to optimize their parameters for discrimination. Our results show that AMMI training consistently reduces the error rates compared to those obtained with ML training, increasing the accuracy by approximately 3% on average.

  5. A method for classification of multisource data using interval-valued probabilities and its application to HIRIS data

    NASA Technical Reports Server (NTRS)

    Kim, H.; Swain, P. H.

    1991-01-01

    A method of classifying multisource data in remote sensing is presented. The proposed method considers each data source as an information source providing a body of evidence, represents statistical evidence by interval-valued probabilities, and uses Dempster's rule to integrate information from multiple data sources. The method is applied to the problem of ground-cover classification of multispectral data combined with digital terrain data such as elevation, slope, and aspect. The method is then applied to simulated 201-band High Resolution Imaging Spectrometer (HIRIS) data by dividing the high-dimensional data source into smaller and more manageable pieces based on global statistical correlation information. It produces higher classification accuracy than the maximum likelihood (ML) classification method when the Hughes phenomenon is apparent.
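
    The evidence-combination step relies on Dempster's rule. The sketch below implements the standard (point-valued) rule over toy mass functions from two hypothetical sources, renormalizing by the conflicting mass; the class names and masses are illustrative, and the paper's interval-valued extension is not shown.

        from itertools import product

        def dempster_combine(m1, m2):
            """Combine two mass functions (dicts: frozenset -> mass) by
            Dempster's rule: normalize products of intersecting focal sets."""
            combined, conflict = {}, 0.0
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb      # mass landing on the empty set
            return {k: v / (1.0 - conflict) for k, v in combined.items()}

        # Two toy sources of evidence over classes {crop, forest, water}.
        frame = {"crop", "forest", "water"}
        m_spectral = {frozenset({"crop"}): 0.6,
                      frozenset({"crop", "forest"}): 0.3,
                      frozenset(frame): 0.1}
        m_terrain = {frozenset({"forest"}): 0.5,
                     frozenset({"crop", "forest"}): 0.4,
                     frozenset(frame): 0.1}
        print(dempster_combine(m_spectral, m_terrain))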

  6. A machine learning approach using EEG data to predict response to SSRI treatment for major depressive disorder.

    PubMed

    Khodayari-Rostamabad, Ahmad; Reilly, James P; Hasey, Gary M; de Bruin, Hubert; Maccrimmon, Duncan J

    2013-10-01

    The problem of identifying, in advance, the most effective treatment agent for various psychiatric conditions remains an elusive goal. To address this challenge, we investigate the performance of the proposed machine learning (ML) methodology (based on the pre-treatment electroencephalogram (EEG)) for prediction of response to treatment with a selective serotonin reuptake inhibitor (SSRI) medication in subjects suffering from major depressive disorder (MDD). A relatively small number of most discriminating features are selected from a large group of candidate features extracted from the subject's pre-treatment EEG, using a machine learning procedure for feature selection. The selected features are fed into a classifier, which was realized as a mixture of factor analysis (MFA) model, whose output is the predicted response in the form of a likelihood value. This likelihood indicates the extent to which the subject belongs to the responder vs. non-responder classes. The overall method was evaluated using a "leave-n-out" randomized permutation cross-validation procedure. A list of discriminating EEG biomarkers (features) was found. The specificity of the proposed method is 80.9% while sensitivity is 94.9%, for an overall prediction accuracy of 87.9%. There is a 98.76% confidence that the estimated prediction rate is within the interval [75%, 100%]. These results indicate that the proposed ML method holds considerable promise in predicting the efficacy of SSRI antidepressant therapy for MDD, based on a simple and cost-effective pre-treatment EEG. The proposed approach offers the potential to improve the treatment of major depression and to reduce health care costs. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  7. STAR-GALAXY CLASSIFICATION IN MULTI-BAND OPTICAL IMAGING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fadely, Ross; Willman, Beth; Hogg, David W.

    2012-11-20

    Ground-based optical surveys such as PanSTARRS, DES, and LSST will produce large catalogs to limiting magnitudes of r ≳ 24. Star-galaxy separation poses a major challenge to such surveys because galaxies, even very compact galaxies, outnumber halo stars at these depths. We investigate photometric classification techniques on stars and galaxies with intrinsic FWHM <0.2 arcsec. We consider unsupervised spectral energy distribution template fitting and supervised, data-driven support vector machines (SVMs). For template fitting, we use a maximum likelihood (ML) method and a new hierarchical Bayesian (HB) method, which learns the prior distribution of template probabilities from the data. SVM requires training data to classify unknown sources; ML and HB do not. We consider (1) a best-case scenario (SVM_best) where the training data are (unrealistically) a random sampling of the data in both signal-to-noise and demographics and (2) a more realistic scenario where training is done on higher signal-to-noise data (SVM_real) at brighter apparent magnitudes. Testing with COSMOS ugriz data, we find that HB outperforms ML, delivering ≈80% completeness, with purity of ≈60%-90% for both stars and galaxies. We find that no algorithm delivers perfect performance and that studies of metal-poor main-sequence turnoff stars may be challenged by poor star-galaxy separation. Using the Receiver Operating Characteristic curve, we find a best-to-worst ranking of SVM_best, HB, ML, and SVM_real. We conclude, therefore, that a well-trained SVM will outperform template-fitting methods, but that a normally trained SVM performs worse. Thus, HB template fitting may prove to be the optimal classification method in future surveys.

  8. A methodology for airplane parameter estimation and confidence interval determination in nonlinear estimation problems. Ph.D. Thesis - George Washington Univ., Apr. 1985

    NASA Technical Reports Server (NTRS)

    Murphy, P. C.

    1986-01-01

    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. With the fitted surface, sensitivity information can be updated at each iteration with less computational effort than that required by either a finite-difference method or integration of the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, and thus provides flexibility to use model equations in any convenient format. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. The degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels and to predict the degree of agreement between CR bounds and search estimates.

  9. Multilevel modeling of single-case data: A comparison of maximum likelihood and Bayesian estimation.

    PubMed

    Moeyaert, Mariola; Rindskopf, David; Onghena, Patrick; Van den Noortgate, Wim

    2017-12-01

    The focus of this article is to describe Bayesian estimation, including construction of prior distributions, and to compare parameter recovery under the Bayesian framework (using weakly informative priors) and the maximum likelihood (ML) framework in the context of multilevel modeling of single-case experimental data. Bayesian estimation results were found to be similar to ML estimation results in terms of the treatment effect estimates, regardless of the functional form and degree of information included in the prior specification in the Bayesian framework. In terms of the variance component estimates, both the ML and Bayesian estimation procedures result in biased and less precise variance estimates when the number of participants is small (i.e., 3). By increasing the number of participants to 5 or 7, the relative bias is brought close to 5% and more precise estimates are obtained for all approaches, except for the inverse-Wishart prior using the identity matrix. When a more informative prior was added, more precise estimates for the fixed effects and random effects were obtained, even when only 3 participants were included. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. Spatial design and strength of spatial signal: Effects on covariance estimation

    USGS Publications Warehouse

    Irvine, Kathryn M.; Gitelman, Alix I.; Hoeting, Jennifer A.

    2007-01-01

    In a spatial regression context, scientists are often interested in a physical interpretation of components of the parametric covariance function. For example, spatial covariance parameter estimates in ecological settings have been interpreted to describe spatial heterogeneity or “patchiness” in a landscape that cannot be explained by measured covariates. In this article, we investigate the influence of the strength of spatial dependence on maximum likelihood (ML) and restricted maximum likelihood (REML) estimates of covariance parameters in an exponential-with-nugget model, and we also examine these influences under different sampling designs (lattice designs and more realistic random and cluster designs) at differing sampling intensities (n = 144 and 361). We find that neither ML nor REML estimates perform well when the range parameter and/or the nugget-to-sill ratio is large: ML tends to underestimate the autocorrelation function, and REML produces highly variable estimates of the autocorrelation function. The best estimates of both the covariance parameters and the autocorrelation function come from the cluster sampling design with large sample sizes. As a motivating example, we consider a spatial model for stream sulfate concentration.
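
    A minimal sketch of ML covariance-parameter estimation for the exponential-with-nugget model on a zero-mean Gaussian field: build C(h) = sill * exp(-h/range) + nugget * I and minimize the negative log-likelihood via a Cholesky factorization. The simulated coordinates, true parameters, and the covariate-free zero-mean assumption are illustrative simplifications.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.spatial.distance import cdist

        rng = np.random.default_rng(6)

        # Toy data: zero-mean Gaussian field, exponential + nugget covariance.
        coords = rng.uniform(0, 10, (120, 2))
        h = cdist(coords, coords)
        sill, rng_par, nugget = 2.0, 3.0, 0.5
        cov_true = sill * np.exp(-h / rng_par) + nugget * np.eye(len(coords))
        y = np.linalg.cholesky(cov_true) @ rng.normal(size=len(coords))

        def neg_loglik(log_params):
            s2, phi, tau2 = np.exp(log_params)    # enforce positivity
            C = s2 * np.exp(-h / phi) + tau2 * np.eye(len(y))
            L = np.linalg.cholesky(C)
            alpha = np.linalg.solve(L, y)
            # 0.5*log|C| + 0.5*y'C^{-1}y, dropping the constant term
            return np.sum(np.log(np.diag(L))) + 0.5 * alpha @ alpha

        fit = minimize(neg_loglik, np.log([1.0, 1.0, 1.0]),
                       method="Nelder-Mead")
        print("ML estimates (sill, range, nugget):", np.exp(fit.x))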

  11. High creatinine clearance in critically ill patients with community-acquired acute infectious meningitis.

    PubMed

    Lautrette, Alexandre; Phan, Thuy-Nga; Ouchchane, Lemlih; Aithssain, Ali; Tixier, Vincent; Heng, Anne-Elisabeth; Souweine, Bertrand

    2012-09-27

    A high dose of anti-infective agents is recommended when treating infectious meningitis. High creatinine clearance (CrCl) may affect the pharmacokinetic/pharmacodynamic relationships of anti-infective drugs eliminated by the kidneys. We recorded the incidence of high CrCl in intensive care unit (ICU) patients admitted with meningitis and assessed the diagnostic accuracy of two common methods used to identify high CrCl. This was an observational study performed in consecutive patients admitted with community-acquired acute infectious meningitis (defined by >7 white blood cells/mm3 in cerebrospinal fluid) between January 2006 and December 2009 to one medical ICU. During the first 7 days following ICU admission, CrCl was measured from 24-hr urine samples (24-hr-UV/P creatinine) and estimated according to the Cockcroft-Gault formula and the simplified Modification of Diet in Renal Disease (MDRD) equation. High CrCl was defined as CrCl >140 ml/min/1.73 m2 by 24-hr-UV/P creatinine. Diagnostic accuracy was assessed with ROC curve analysis. Thirty-two patients were included. High CrCl was present in 8 patients (25%) on ICU admission and in 15 patients (47%) during the first 7 ICU days, for a median duration of 3 (1-4) days. For the Cockcroft-Gault formula, the best threshold to predict high CrCl was 101 ml/min/1.73 m2 (sensitivity: 0.96, specificity: 0.75, AUC = 0.90 ± 0.03) with a negative likelihood ratio of 0.06. For the simplified MDRD equation, the best threshold to predict high CrCl was 108 ml/min/1.73 m2 (sensitivity: 0.91, specificity: 0.80, AUC = 0.88 ± 0.03) with a negative likelihood ratio of 0.11. There was no difference between the estimation methods in the diagnostic accuracy of identifying high CrCl (p = 0.30). High CrCl is frequently observed in ICU patients admitted with community-acquired acute infectious meningitis. The estimation methods for CrCl could be used as a screening tool to identify high CrCl.
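
    The two estimation equations used in the study have standard textbook forms, sketched below. The example patient values are illustrative; note also that the plain Cockcroft-Gault formula returns ml/min, whereas the study's thresholds are normalized to 1.73 m2 of body surface area, a step omitted here.

        def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
            """Cockcroft-Gault creatinine clearance (ml/min, not BSA-normalized)."""
            crcl = (140 - age) * weight_kg / (72.0 * scr_mg_dl)
            return crcl * 0.85 if female else crcl

        def mdrd_simplified(age, scr_mg_dl, female, black=False):
            """Simplified 4-variable MDRD eGFR (ml/min/1.73 m2), 186 coefficient."""
            egfr = 186.0 * scr_mg_dl**-1.154 * age**-0.203
            if female:
                egfr *= 0.742
            if black:
                egfr *= 1.212
            return egfr

        # Example: a 45-year-old, 80 kg male with serum creatinine 0.6 mg/dl.
        print(cockcroft_gault(45, 80, 0.6, female=False))   # ~176 ml/min
        print(mdrd_simplified(45, 0.6, female=False))       # ~155 ml/min/1.73 m2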

  12. Maximum parsimony, substitution model, and probability phylogenetic trees.

    PubMed

    Weng, J F; Thomas, D A; Mareels, I

    2011-01-01

    The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM) and Maximum Likelihood (ML), of which the MP method is the most widely studied and popular. In the MP method the optimization criterion is the number of substitutions of the nucleotides, computed from the differences in the investigated nucleotide sequences. However, the MP method is often criticized because it counts only the substitutions observable at the current time; all the unobservable substitutions that actually occurred in the evolutionary history are omitted. In order to take the unobservable substitutions into account, substitution models have been established and are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees, and the trees reconstructed in this model are called probability phylogenetic trees. One of the advantages of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.

  13. The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution

    NASA Astrophysics Data System (ADS)

    Shin, H.; Heo, J.; Kim, T.; Jung, Y.

    2007-12-01

    The generalized logistic (GL) distribution has been widely used for frequency analysis. However, little research has addressed the confidence intervals of quantiles, which indicate the prediction accuracy, for the GL distribution. In this paper, the estimation of the confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM) methods, and the asymptotic variances of each quantile estimator are derived as functions of the sample size, return period, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are nearly symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show little difference in the estimated quantiles between ML and PWM, but distinct differences for MOM.

  14. Simultaneous maximum a posteriori longitudinal PET image reconstruction

    NASA Astrophysics Data System (ADS)

    Ellis, Sam; Reader, Andrew J.

    2017-09-01

    Positron emission tomography (PET) is frequently used to monitor functional changes that occur over extended time scales, for example in longitudinal oncology PET protocols that include routine clinical follow-up scans to assess the efficacy of a course of treatment. In these contexts PET datasets are currently reconstructed into images using single-dataset reconstruction methods. Inspired by recently proposed joint PET-MR reconstruction methods, we propose to reconstruct longitudinal datasets simultaneously by using a joint penalty term in order to exploit the high degree of similarity between longitudinal images. We achieved this by penalising voxel-wise differences between pairs of longitudinal PET images in a one-step-late maximum a posteriori (MAP) fashion, resulting in the MAP simultaneous longitudinal reconstruction (SLR) method. The proposed method reduced reconstruction errors and visually improved images relative to standard maximum likelihood expectation-maximisation (ML-EM) in simulated 2D longitudinal brain tumour scans. In reconstructions of split real 3D data with inserted simulated tumours, noise across images reconstructed with MAP-SLR was reduced to levels equivalent to doubling the number of detected counts when using ML-EM. Furthermore, quantification of tumour activities was largely preserved over a variety of longitudinal tumour changes, including changes in size and activity, with larger changes inducing larger biases relative to standard ML-EM reconstructions. Similar improvements were observed for a range of counts levels, demonstrating the robustness of the method when used with a single penalty strength. The results suggest that longitudinal regularisation is a simple but effective method of improving reconstructed PET images without using resolution degrading priors.
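
    A minimal sketch of the one-step-late (Green-type) MAP-EM update with a joint quadratic penalty coupling two longitudinal reconstructions: the penalty gradient, evaluated at the current estimates, is added to the sensitivity term in the ML-EM denominator. The 1-D system matrix, phantom, and penalty strength are toy assumptions, not the paper's implementation.

        import numpy as np

        rng = np.random.default_rng(7)

        # Toy 1-D "PET" problem: two longitudinal scans of similar objects.
        n_vox, n_bins = 64, 96
        A = rng.random((n_bins, n_vox)) * 0.1             # toy system matrix
        x1_true = np.ones(n_vox); x1_true[20:30] = 4.0    # baseline "tumour"
        x2_true = x1_true.copy(); x2_true[20:30] = 3.0    # follow-up scan
        y1 = rng.poisson(A @ x1_true)
        y2 = rng.poisson(A @ x2_true)

        beta = 0.05                                       # joint penalty strength
        sens = A.sum(axis=0)                              # sensitivity A^T 1
        x1 = np.ones(n_vox)
        x2 = np.ones(n_vox)

        for _ in range(200):
            # One-step-late MAP-EM: the gradient of the joint quadratic
            # penalty U = beta/2 * sum_j (x1_j - x2_j)^2, evaluated at the
            # current estimates, enters the ML-EM denominator.
            grad1 = beta * (x1 - x2)
            grad2 = beta * (x2 - x1)
            x1 *= (A.T @ (y1 / (A @ x1))) / (sens + grad1)
            x2 *= (A.T @ (y2 / (A @ x2))) / (sens + grad2)

        print("tumour means:", x1[20:30].mean(), x2[20:30].mean())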

  15. Depth of interaction decoding of a continuous crystal detector module.

    PubMed

    Ling, T; Lewellen, T K; Miyaoka, R S

    2007-04-21

    We present a clustering method to extract depth of interaction (DOI) information from an 8 mm thick crystal version of our continuous miniature crystal element (cMiCE) small-animal PET detector. This clustering method, based on the maximum-likelihood (ML) method, can effectively build look-up tables (LUTs) for different DOI regions. Combined with our statistics-based positioning (SBP) method, which uses an LUT search algorithm based on the ML method and two-dimensional mean-variance LUTs of the light responses of each photomultiplier channel with respect to different gamma-ray interaction positions, the position of interaction and the DOI can be estimated simultaneously. Data simulated using DETECT2000 were used to help validate our approach. An experiment using our cMiCE detector was designed to evaluate the performance. Two- and four-DOI-region clustering was applied to the simulated data; two DOI regions were used for the experimental data. The misclassification rate for the simulated data is about 3.5% for two DOI regions and 10.2% for four DOI regions. For the experimental data, the rate is estimated to be approximately 25%. By using multi-DOI LUTs, we also observed improvement in the detector spatial resolution, especially for the corner regions of the crystal. These results show that our ML clustering method is a consistent and reliable way to characterize DOI in a continuous crystal detector without requiring any modifications to the crystal or detector front-end electronics. The ability to characterize the depth-dependent light response function from measured data is a major step forward in developing practical detectors with DOI positioning capability.

  16. Towards a brief definition of burnout syndrome by subtypes: Development of the "Burnout Clinical Subtypes Questionnaire" (BCSQ-12)

    PubMed Central

    2011-01-01

    Background Burnout has traditionally been described by means of the dimensions of exhaustion, cynicism and lack of efficacy from the "Maslach Burnout Inventory-General Survey" (MBI-GS). The "Burnout Clinical Subtype Questionnaire" (BCSQ-12), comprising the dimensions of overload, lack of development and neglect, is proposed as a brief means of identifying the different ways this disorder is manifested. The aim of the study is to test the construct and criterion validity of the BCSQ-12. Method A cross-sectional design was used on a multi-occupational sample of randomly selected university employees (n = 826). An exploratory factor analysis (EFA) was performed on half of the sample using the maximum likelihood (ML) method with varimax orthogonal rotation, while confirmatory factor analysis (CFA) was performed on the other half by means of the ML method. ROC curve analysis was performed in order to assess the discriminatory capacity of the BCSQ-12 when compared to the MBI-GS. Cut-off points were proposed for the BCSQ-12 that optimized sensitivity and specificity. Multivariate binary logistic regression models were used to estimate effect size as an odds ratio (OR) adjusted for sociodemographic and occupational variables. Contrasts for sex and occupation were made using Mann-Whitney U and Kruskal-Wallis tests on the dimensions of both models. Results EFA offered a solution containing 3 factors with eigenvalues > 1, explaining 73.22% of variance. CFA presented the following indices: χ2 = 112.04 (p < 0.001), χ2/df = 2.44, GFI = 0.958, AGFI = 0.929, RMSEA = 0.059, SRMR = 0.057, NFI = 0.958, NNFI = 0.963, IFI = 0.975, CFI = 0.974. The area under the ROC curve for 'overload' with respect to 'exhaustion' was 0.75 (95% CI = 0.71-0.79); it was 0.80 (95% CI = 0.76-0.86) for 'lack of development' with respect to 'cynicism' and 0.74 (95% CI = 0.70-0.78) for 'neglect' with respect to 'inefficacy'. The presence of 'overload' increased the likelihood of suffering from 'exhaustion' (OR = 5.25; 95% CI = 3.62-7.60); 'lack of development' increased the likelihood of 'cynicism' (OR = 6.77; 95% CI = 4.79-9.57); 'neglect' increased the likelihood of 'inefficacy' (OR = 5.21; 95% CI = 3.57-7.60). No differences were found with regard to sex, but there were differences depending on occupation. Conclusions Our results support the validity of the definition of burnout proposed in the BCSQ-12 through the brief differentiation of clinical subtypes. PMID:21933381

  17. Phylogeny of the cycads based on multiple single-copy nuclear genes: congruence of concatenated parsimony, likelihood and species tree inference methods

    PubMed Central

    Salas-Leiva, Dayana E.; Meerow, Alan W.; Calonje, Michael; Griffith, M. Patrick; Francisco-Ortega, Javier; Nakamura, Kyoko; Stevenson, Dennis W.; Lewis, Carl E.; Namoff, Sandra

    2013-01-01

    Background and aims Despite a recent new classification, a stable phylogeny for the cycads has been elusive, particularly regarding resolution of Bowenia, Stangeria and Dioon. In this study, five single-copy nuclear genes (SCNGs) are applied to the phylogeny of the order Cycadales. The specific aim is to evaluate several gene tree–species tree reconciliation approaches for developing an accurate phylogeny of the order, to contrast them with concatenated parsimony analysis and to resolve the erstwhile problematic phylogenetic position of these three genera. Methods DNA sequences of five SCNGs were obtained for 20 cycad species representing all ten genera of Cycadales. These were analysed with parsimony, maximum likelihood (ML) and three Bayesian methods of gene tree–species tree reconciliation, using Cycas as the outgroup. A calibrated date estimation was developed with Bayesian methods, and biogeographic analysis was also conducted. Key Results Concatenated parsimony, ML and three species tree inference methods resolve exactly the same tree topology with high support at most nodes. Dioon and Bowenia are the first and second branches of Cycadales after Cycas, respectively, followed by an encephalartoid clade (Macrozamia–Lepidozamia–Encephalartos), which is sister to a zamioid clade, of which Ceratozamia is the first branch, and in which Stangeria is sister to Microcycas and Zamia. Conclusions A single, well-supported phylogenetic hypothesis of the generic relationships of the Cycadales is presented. However, massive extinction events inferred from the fossil record that eliminated broader ancestral distributions within Zamiaceae compromise accurate optimization of ancestral biogeographical areas for that hypothesis. While major lineages of Cycadales are ancient, crown ages of all modern genera are no older than 12 million years, supporting a recent hypothesis of mostly Miocene radiations. This phylogeny can contribute to an accurate infrafamilial classification of Zamiaceae. PMID:23997230

  18. Comparing the Performance of Improved Classify-Analyze Approaches For Distal Outcomes in Latent Profile Analysis

    PubMed Central

    Dziak, John J.; Bray, Bethany C.; Zhang, Jieting; Zhang, Minqiang; Lanza, Stephanie T.

    2016-01-01

    Several approaches are available for estimating the relationship of latent class membership to distal outcomes in latent profile analysis (LPA). A three-step approach is commonly used, but has problems with estimation bias and confidence interval coverage. Proposed improvements include the correction method of Bolck, Croon, and Hagenaars (BCH; 2004), Vermunt’s (2010) maximum likelihood (ML) approach, and the inclusive three-step approach of Bray, Lanza, & Tan (2015). These methods have been studied in the related case of latent class analysis (LCA) with categorical indicators, but not as well studied for LPA with continuous indicators. We investigated the performance of these approaches in LPA with normally distributed indicators, under different conditions of distal outcome distribution, class measurement quality, relative latent class size, and strength of association between latent class and the distal outcome. The modified BCH implemented in Latent GOLD had excellent performance. The maximum likelihood and inclusive approaches were not robust to violations of distributional assumptions. These findings broadly agree with and extend the results presented by Bakk and Vermunt (2016) in the context of LCA with categorical indicators. PMID:28630602
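
    For context, here is a sketch of the uncorrected classify-analyze baseline that the BCH, ML, and inclusive approaches improve on: fit a latent profile model, assign each case to its modal class, then relate assigned class to the distal outcome. GaussianMixture stands in for LPA software and all data are simulated; the corrections themselves are not implemented here.

    ```python
    # Uncorrected three-step approach (the baseline the corrections improve on):
    # (1) fit the mixture, (2) modal class assignment, (3) relate assigned class
    # to the distal outcome. Simulated data; class labels may be permuted.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    true_class = rng.integers(0, 2, 500)
    indicators = rng.normal(true_class[:, None] * 2.0, 1.0, (500, 4))
    distal = 0.5 * true_class + rng.normal(0.0, 1.0, 500)

    model = GaussianMixture(n_components=2, random_state=1).fit(indicators)  # step 1
    assigned = model.predict(indicators)                                     # step 2
    print([distal[assigned == k].mean() for k in (0, 1)])                    # step 3 (naive)
    ```

    Because modal assignment ignores classification error, the step-3 class difference is attenuated; that attenuation is exactly what the corrections studied above address.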

  19. Channel Training for Analog FDD Repeaters: Optimal Estimators and Cramér-Rao Bounds

    NASA Astrophysics Data System (ADS)

    Wesemann, Stefan; Marzetta, Thomas L.

    2017-12-01

    For frequency division duplex channels, a simple pilot loop-back procedure has been proposed that allows the estimation of the UL & DL channels at an antenna array without relying on any digital signal processing at the terminal side. For this scheme, we derive the maximum likelihood (ML) estimators for the UL & DL channel subspaces, formulate the corresponding Cramér-Rao bounds and show the asymptotic efficiency of both (SVD-based) estimators by means of Monte Carlo simulations. In addition, we illustrate how to compute the underlying (rank-1) SVD with quadratic time complexity by employing the power iteration method. To enable power control for the data transmission, knowledge of the channel gains is needed. Assuming that the UL & DL channels have on average the same gain, we formulate the ML estimator for the channel norm, and illustrate its robustness against strong noise by means of simulations.
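
    The rank-1 SVD via power iteration mentioned above is compact enough to sketch. A minimal version follows, assuming a generic complex measurement matrix Y; this is illustrative only and not the paper's signal model.

    ```python
    # Rank-1 SVD by power iteration: alternate u = Yv and v = Y^H u,
    # normalizing each time; at convergence s is the top singular value.
    import numpy as np

    def rank1_svd(Y, iters=100):
        """Return the dominant singular triplet (u, s, v) of a complex matrix Y."""
        rng = np.random.default_rng(0)
        v = rng.normal(size=Y.shape[1]) + 1j * rng.normal(size=Y.shape[1])
        v /= np.linalg.norm(v)
        for _ in range(iters):
            u = Y @ v
            u /= np.linalg.norm(u)
            v = Y.conj().T @ u
            s = np.linalg.norm(v)
            v /= s
        return u, s, v

    rng = np.random.default_rng(1)
    Y = rng.normal(size=(8, 4)) + 1j * rng.normal(size=(8, 4))
    _, s, _ = rank1_svd(Y)
    print(abs(s - np.linalg.svd(Y, compute_uv=False)[0]))   # ~0
    ```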

  20. A family of chaotic pure analog coding schemes based on baker's map function

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Li, Jing; Lu, Xuanxuan; Yuen, Chau; Wu, Jun

    2015-12-01

    This paper considers a family of pure analog coding schemes constructed from dynamic systems governed by chaotic functions: baker's map function and its variants. Various decoding methods, including maximum likelihood (ML), minimum mean square error (MMSE), and mixed ML-MMSE decoding algorithms, have been developed for these novel encoding schemes. The proposed mirrored baker's and single-input baker's analog codes provide balanced protection against fold errors (large distortion) and weak distortion, and outperform the classical chaotic analog coding and analog joint source-channel coding schemes in the literature. Compared to a conventional digital communication system, where quantization and digital error correction codes are used, the proposed analog coding system has graceful performance evolution, low decoding latency, and no quantization noise. Numerical results show that under the same bandwidth expansion, the proposed analog system outperforms the digital ones over a wide signal-to-noise ratio (SNR) range.
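
    To illustrate the chaotic-encoding idea only, here is a toy expansion of one analog source sample into several code symbols by iterating the dyadic (baker's-type) map. The paper's mirrored and single-input baker's constructions are specific variants that are not reproduced here.

    ```python
    # Toy chaotic analog encoder: iterate the dyadic map x -> 2x mod 1 to expand
    # one source sample into a length-n analog codeword (illustrative only).
    import numpy as np

    def bakers_encode(x0, n_symbols):
        symbols, x = [], x0
        for _ in range(n_symbols):
            symbols.append(x)
            x = (2.0 * x) % 1.0          # dyadic (baker's-type) map
        return np.array(symbols)

    codeword = bakers_encode(0.3141592, n_symbols=5)
    print(codeword)   # transmitted analog symbols; ML/MMSE decoding inverts the map
    ```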

  1. Receiver design for SPAD-based VLC systems under Poisson-Gaussian mixed noise model.

    PubMed

    Mao, Tianqi; Wang, Zhaocheng; Wang, Qi

    2017-01-23

    Single-photon avalanche diode (SPAD) is a promising photosensor because of its high sensitivity to optical signals in weak-illuminance environments. Recently, it has drawn much attention from researchers in visible light communications (VLC). However, the existing literature only deals with a simplified channel model, which considers the effects of the Poisson noise introduced by the SPAD but neglects other noise sources. Specifically, when an analog SPAD detector is applied, there exists Gaussian thermal noise generated by the transimpedance amplifier (TIA) and the digital-to-analog converter (D/A). Therefore, in this paper, we propose an SPAD-based VLC system with pulse-amplitude modulation (PAM) under a Poisson-Gaussian mixed noise model, where Gaussian-distributed thermal noise at the receiver is also investigated. The closed-form conditional likelihood of the received signals is derived using the Laplace transform and the saddle-point approximation method, and the corresponding quasi-maximum-likelihood (quasi-ML) detector is proposed. Furthermore, the Poisson-Gaussian-distributed signals are converted to Gaussian variables with the aid of the generalized Anscombe transform (GAT), leading to an equivalent additive white Gaussian noise (AWGN) channel, and a hard-decision-based detector is invoked. Simulation results demonstrate that the proposed GAT-based detector can reduce the computational complexity with marginal performance loss compared with the proposed quasi-ML detector, and both detectors are capable of accurately demodulating the SPAD-based PAM signals.
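
    The GAT step above is easy to illustrate. The sketch below assumes unit detector gain and zero-mean thermal noise of known standard deviation sigma; the paper's full receiver chain is more elaborate.

    ```python
    # Generalized Anscombe transform (GAT): approximately gaussianizes
    # Poisson + Gaussian mixed observations so an AWGN-style detector applies.
    import numpy as np

    def gat(x, sigma):
        """Variance-stabilize x ~ Poisson(lam) + N(0, sigma^2) (unit gain)."""
        return 2.0 * np.sqrt(np.maximum(x + 3.0 / 8.0 + sigma**2, 0.0))

    rng = np.random.default_rng(0)
    x = rng.poisson(20.0, 100_000) + rng.normal(0.0, 2.0, 100_000)
    print(np.var(gat(x, 2.0)))   # close to 1 after stabilization
    ```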

  2. Measurement of admixture proportions and description of admixture structure in different US populations

    PubMed Central

    Halder, Indrani; Yang, Bao-Zhu; Kranzler, Henry R.; Stein, Murray B.; Shriver, Mark D.; Gelernter, Joel

    2010-01-01

    Variation in individual admixture proportions leads to heterogeneity within populations. Though novel methods and marker panels have been developed to quantify individual admixture, empirical data describing individual admixture distributions are limited. We investigated variation in individual admixture in four US populations [European American (EA), African American (AA) and Hispanics from Connecticut (EC) and California (WC)] assuming three-way intermixture among Europeans, Africans and Indigenous Americans. Admixture estimates were inferred using a panel of 36 microsatellites and 1 SNP, which have significant allele frequency differences between ancestral populations, and by using both a maximum likelihood (ML) based method and a Bayesian method implemented in the program STRUCTURE. Simulation studies showed that estimates obtained with this marker panel are within 96% of expected values. EAs had the lowest non-European admixture with both methods, but showed greater homogeneity with STRUCTURE than with ML. All other samples showed a high degree of variation in admixture estimates with both methods, were highly concordant and showed evidence of admixture stratification. With both methods, AA subjects had 16% European and <10% Indigenous American admixture on average. EC Hispanics had higher mean African admixture and the WC Hispanics higher mean Indigenous American admixture, possibly reflecting their different continental origins. PMID:19572378

  3. Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level

    PubMed Central

    Savalei, Victoria; Rhemtulla, Mijke

    2017-01-01

    In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data—that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study. PMID:29276371
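
    The saturated stage-1 estimates at the heart of the two-stage idea can be computed with a standard EM algorithm for the multivariate normal with missing values. A simplified, illustrative sketch follows (stage 2, fitting the composite model to these estimates with corrected standard errors, is not shown, and this is not the authors' code).

    ```python
    # Stage-1 sketch: EM for the saturated mean/covariance of items with
    # item-level missingness. E-step imputes conditional means and adds the
    # conditional covariance; M-step updates the moments.
    import numpy as np

    def em_mvnorm(X, iters=100):
        n, p = X.shape
        mu = np.nanmean(X, axis=0)
        sigma = np.diag(np.nanvar(X, axis=0))
        for _ in range(iters):
            sx, sxx = np.zeros(p), np.zeros((p, p))
            for row in X:
                obs, mis = ~np.isnan(row), np.isnan(row)
                xh, c = row.copy(), np.zeros((p, p))
                if mis.any():
                    s_oo = sigma[np.ix_(obs, obs)]
                    s_mo = sigma[np.ix_(mis, obs)]
                    xh[mis] = mu[mis] + s_mo @ np.linalg.solve(s_oo, row[obs] - mu[obs])
                    c[np.ix_(mis, mis)] = sigma[np.ix_(mis, mis)] - s_mo @ np.linalg.solve(s_oo, s_mo.T)
                sx += xh
                sxx += np.outer(xh, xh) + c
            mu = sx / n
            sigma = sxx / n - np.outer(mu, mu)
        return mu, sigma

    rng = np.random.default_rng(0)
    X = rng.multivariate_normal(np.zeros(3), np.eye(3) + 0.5, size=200)
    X[rng.random(X.shape) < 0.2] = np.nan        # 20% item-level missingness
    X = X[~np.isnan(X).all(axis=1)]              # drop (rare) fully empty rows
    mu_hat, sigma_hat = em_mvnorm(X)
    print(np.round(mu_hat, 2))
    ```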

  4. Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level.

    PubMed

    Savalei, Victoria; Rhemtulla, Mijke

    2017-08-01

    In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data—that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study.

  5. Extending the BEAGLE library to a multi-FPGA platform.

    PubMed

    Jin, Zheming; Bakos, Jason D

    2013-01-19

    Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein's pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein's pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform's peak memory bandwidth and the implementation's memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE's CPU implementation on a dual Xeon 5520 and 3X speedup versus BEAGLE's GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints to match the target platform. We have found that using low-latency floating-point operators can significantly reduce FPGA area and still meet timing requirement on the target platform. We found that this design methodology can achieve performance that exceeds that of a GPU-based coprocessor.
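
    The quoted throughput follows directly from the roofline-style product stated in the abstract:

    ```python
    # Checking the quoted figure: arithmetic intensity x peak memory bandwidth
    # x memory efficiency.
    ops_per_byte = 130 / 64        # 2.03 flops per byte of I/O
    peak_bw = 76.8                 # GB/s on the Convey HC-1
    efficiency = 0.5               # achieved memory efficiency
    print(ops_per_byte * peak_bw * efficiency)   # 78.0 Gflops, as reported
    ```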

  6. Detecting Candida albicans in human milk.

    PubMed

    Morrill, Jimi Francis; Pappagianis, Demosthenes; Heinig, M Jane; Lönnerdal, Bo; Dewey, Kathryn G

    2003-01-01

    Procedures for diagnosis of mammary candidosis, including laboratory confirmation, are not well defined. Lactoferrin present in human milk can inhibit growth of Candida albicans, thereby limiting the ability to detect yeast infections. The inhibitory effect of various lactoferrin concentrations on the growth of C. albicans in whole human milk was studied. The addition of iron to the milk led to a two- to threefold increase in cell counts when milk contained 3.0 mg of lactoferrin/ml and markedly reduced the likelihood of false-negative culture results. This method may provide the necessary objective support needed for diagnosis of mammary candidosis.

  7. A new maximum-likelihood change estimator for two-pass SAR coherent change detection

    DOE PAGES

    Wahl, Daniel E.; Yocky, David A.; Jakowatz, Jr., Charles V.; ...

    2016-01-11

    In previous research, two-pass repeat-geometry synthetic aperture radar (SAR) coherent change detection (CCD) predominantly utilized the sample degree of coherence as a measure of the temporal change occurring between two complex-valued image collects. Previous coherence-based CCD approaches tend to show temporal change when there is none in areas of the image that have a low clutter-to-noise power ratio. Instead of employing the sample coherence magnitude as a change metric, in this paper, we derive a new maximum-likelihood (ML) temporal change estimate—the complex reflectance change detection (CRCD) metric to be used for SAR coherent temporal change detection. The new CRCD estimator is a surprisingly simple expression, easy to implement, and optimal in the ML sense. As a result, this new estimate produces improved results in the coherent pair collects that we have tested.
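
    For reference, the sample degree of coherence that the CRCD metric replaces is the classical windowed statistic sketched below; the CRCD estimator itself is derived in the paper and not reproduced here.

    ```python
    # Classical sample coherence magnitude between two co-registered complex
    # SAR images f and g over a local window (the conventional CCD change metric).
    import numpy as np

    def sample_coherence(f, g):
        num = np.abs(np.sum(f * np.conj(g)))
        den = np.sqrt(np.sum(np.abs(f)**2) * np.sum(np.abs(g)**2))
        return num / den

    rng = np.random.default_rng(0)
    f = rng.normal(size=25) + 1j * rng.normal(size=25)      # 5x5 window, flattened
    g = f + 0.3 * (rng.normal(size=25) + 1j * rng.normal(size=25))
    print(sample_coherence(f, g))   # near 1 for unchanged scenes, lower after change
    ```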

  8. Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors

    NASA Technical Reports Server (NTRS)

    Erkmen, Baris I.; Moision, Bruce E.

    2010-01-01

    Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
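
    The Poisson-statistics setting is concrete enough for a small sketch: grid-search ML estimation of the arrival time from photon timestamps, assuming a Gaussian pulse on a constant background. All parameters are illustrative; the paper's error analysis is not reproduced here.

    ```python
    # ML time-of-arrival from photon timestamps under a Poisson point-process
    # model. Because the integral of the rate is (nearly) shift-invariant, the
    # log-likelihood reduces to the sum of log-rates at the photon times.
    import numpy as np

    def rate(t, tau, signal=50.0, bg=5.0, width=0.1):
        return bg + signal * np.exp(-0.5 * ((t - tau) / width) ** 2)

    rng = np.random.default_rng(0)
    T, tau_true = 4.0, 1.7
    lam_max = rate(tau_true, tau_true)
    # Simulate the inhomogeneous Poisson process by thinning a homogeneous one.
    t = rng.uniform(0, T, rng.poisson(lam_max * T))
    t = t[rng.random(t.size) < rate(t, tau_true) / lam_max]

    taus = np.linspace(0, T, 2000)
    loglik = [np.sum(np.log(rate(t, tau))) for tau in taus]
    print(taus[int(np.argmax(loglik))])   # close to tau_true
    ```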

  9. Long-Branch Attraction Bias and Inconsistency in Bayesian Phylogenetics

    PubMed Central

    Kolaczkowski, Bryan; Thornton, Joseph W.

    2009-01-01

    Bayesian inference (BI) of phylogenetic relationships uses the same probabilistic models of evolution as its precursor maximum likelihood (ML), so BI has generally been assumed to share ML's desirable statistical properties, such as largely unbiased inference of topology given an accurate model and increasingly reliable inferences as the amount of data increases. Here we show that BI, unlike ML, is biased in favor of topologies that group long branches together, even when the true model and prior distributions of evolutionary parameters over a group of phylogenies are known. Using experimental simulation studies and numerical and mathematical analyses, we show that this bias becomes more severe as more data are analyzed, causing BI to infer an incorrect tree as the maximum a posteriori phylogeny with asymptotically high support as sequence length approaches infinity. BI's long branch attraction bias is relatively weak when the true model is simple but becomes pronounced when sequence sites evolve heterogeneously, even when this complexity is incorporated in the model. This bias—which is apparent under both controlled simulation conditions and in analyses of empirical sequence data—also makes BI less efficient and less robust to the use of an incorrect evolutionary model than ML. Surprisingly, BI's bias is caused by one of the method's stated advantages—that it incorporates uncertainty about branch lengths by integrating over a distribution of possible values instead of estimating them from the data, as ML does. Our findings suggest that trees inferred using BI should be interpreted with caution and that ML may be a more reliable framework for modern phylogenetic analysis. PMID:20011052

  10. Long-branch attraction bias and inconsistency in Bayesian phylogenetics.

    PubMed

    Kolaczkowski, Bryan; Thornton, Joseph W

    2009-12-09

    Bayesian inference (BI) of phylogenetic relationships uses the same probabilistic models of evolution as its precursor maximum likelihood (ML), so BI has generally been assumed to share ML's desirable statistical properties, such as largely unbiased inference of topology given an accurate model and increasingly reliable inferences as the amount of data increases. Here we show that BI, unlike ML, is biased in favor of topologies that group long branches together, even when the true model and prior distributions of evolutionary parameters over a group of phylogenies are known. Using experimental simulation studies and numerical and mathematical analyses, we show that this bias becomes more severe as more data are analyzed, causing BI to infer an incorrect tree as the maximum a posteriori phylogeny with asymptotically high support as sequence length approaches infinity. BI's long branch attraction bias is relatively weak when the true model is simple but becomes pronounced when sequence sites evolve heterogeneously, even when this complexity is incorporated in the model. This bias—which is apparent under both controlled simulation conditions and in analyses of empirical sequence data—also makes BI less efficient and less robust to the use of an incorrect evolutionary model than ML. Surprisingly, BI's bias is caused by one of the method's stated advantages—that it incorporates uncertainty about branch lengths by integrating over a distribution of possible values instead of estimating them from the data, as ML does. Our findings suggest that trees inferred using BI should be interpreted with caution and that ML may be a more reliable framework for modern phylogenetic analysis.

  11. Robust statistical reconstruction for charged particle tomography

    DOEpatents

    Schultz, Larry Joe; Klimenko, Alexei Vasilievich; Fraser, Andrew Mcleod; Morris, Christopher; Orum, John Christopher; Borozdin, Konstantin N; Sossong, Michael James; Hengartner, Nicolas W

    2013-10-08

    Systems and methods for charged particle detection involving statistical reconstruction of object volume scattering density profiles from charged particle tomographic data. The methods determine the probability distribution of charged particle scattering using a statistical multiple-scattering model and compute a substantially maximum likelihood estimate of the object volume scattering density using an expectation maximization (ML/EM) algorithm. The presence and/or type of object occupying the volume of interest can be identified from the reconstructed volume scattering density profile. The charged particle tomographic data can be cosmic-ray muon tomographic data from a muon tracker for scanning packages, containers, vehicles or cargo. The method can be implemented as a computer program executable on a computer.
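
    For orientation, here is the generic shape of an ML-EM multiplicative update for a Poisson linear model, the algorithmic family the patent invokes. The actual muon scattering-density EM uses a multiple-scattering likelihood and is considerably more involved; this sketch shows only the iteration pattern.

    ```python
    # Generic ML-EM update for y ~ Poisson(A @ x): multiply x by the
    # back-projected ratio of measured to predicted counts, normalized by
    # the column sensitivities.
    import numpy as np

    def ml_em(A, y, iters=200):
        x = np.ones(A.shape[1])
        sens = A.sum(axis=0)                      # sensitivity (column sums)
        for _ in range(iters):
            ratio = y / np.maximum(A @ x, 1e-12)
            x *= (A.T @ ratio) / sens
        return x

    rng = np.random.default_rng(0)
    A = rng.random((60, 20))
    x_true = rng.random(20) * 5
    y = rng.poisson(A @ x_true)
    print(np.round(ml_em(A, y)[:5], 2), np.round(x_true[:5], 2))
    ```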

  12. Joint image registration and fusion method with a gradient strength regularization

    NASA Astrophysics Data System (ADS)

    Lidong, Huang; Wei, Zhao; Jun, Wang

    2015-05-01

    Image registration is an essential process for image fusion, and fusion performance can be used to evaluate registration accuracy. We propose a maximum likelihood (ML) approach to joint image registration and fusion, instead of treating them as two independent processes in the conventional way. To improve the visual quality of the fused image, a gradient strength (GS) regularization is introduced in the ML cost function. The GS of the fused image is controllable by setting the target GS value in the regularization term. This is useful because a larger target GS brings a clearer fused image, while a smaller target GS makes the fused image smoother and thus restrains noise. Hence, the subjective quality of the fused image can be improved whether or not the source images are polluted by noise. We obtain the fused image and the registration parameters successively by minimizing the cost function with an iterative optimization method. Experimental results show that our method is effective for translation, rotation, and scale parameters in the ranges of [-2.0, 2.0] pixels, [-1.1 deg, 1.1 deg], and [0.95, 1.05], respectively, and for noise variances smaller than 300. It is also demonstrated that our method yields a more visually pleasing fused image and higher registration accuracy compared with a state-of-the-art algorithm.

  13. A new method for species identification via protein-coding and non-coding DNA barcodes by combining machine learning with bioinformatic methods.

    PubMed

    Zhang, Ai-bing; Feng, Jie; Ward, Robert D; Wan, Ping; Gao, Qiang; Wu, Jun; Zhao, Wei-zhong

    2012-01-01

    Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI) region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS) genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. Here we propose two new methods (DV-RBF and FJ-RBF) to address this issue for species assignment by both coding and non-coding sequences that take advantage of the power of machine learning and bioinformatics. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish) and two representing non-coding ITS barcodes (rust fungi and brown algae). Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed existing Neighbor-joining (NJ) and Maximum likelihood (ML) methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also outperformed NJ and ML methods for non-coding sequences in circumstances of potentially incomplete species coverage, although then the NJ and ML methods performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI) of 99.75-100%. The new methods also obtained a 96.29% success rate (95% CI: 91.62-98.40%) for 484 rust fungi queries and a 98.50% success rate (95% CI: 96.60-99.37%) for 1094 brown algae queries, both using ITS barcodes.

  14. Maximum likelihood estimation of protein kinetic parameters under weak assumptions from unfolding force spectroscopy experiments

    NASA Astrophysics Data System (ADS)

    Aioanei, Daniel; Samorì, Bruno; Brucale, Marco

    2009-12-01

    Single molecule force spectroscopy (SMFS) is extensively used to characterize the mechanical unfolding behavior of individual protein domains under applied force by pulling chimeric polyproteins consisting of identical tandem repeats. Constant velocity unfolding SMFS data can be employed to reconstruct the protein unfolding energy landscape and kinetics. The methods applied so far require the specification of a single stretching force increase function, either theoretically derived or experimentally inferred, which must then be assumed to accurately describe the entirety of the experimental data. The very existence of a suitable optimal force model, even in the context of a single experimental data set, is still questioned. Herein, we propose a maximum likelihood (ML) framework for the estimation of protein kinetic parameters which can accommodate all the established theoretical force increase models. Our framework does not presuppose the existence of a single force characteristic function. Rather, it can be used with a heterogeneous set of functions, each describing the protein behavior in the stretching time range leading to one rupture event. We propose a simple way of constructing such a set of functions via piecewise linear approximation of the SMFS force vs time data and we prove the suitability of the approach both with synthetic data and experimentally. Additionally, when the spontaneous unfolding rate is the only unknown parameter, we find a correction factor that eliminates the bias of the ML estimator while also reducing its variance. Finally, we investigate which of several time-constrained experiment designs leads to better estimators.

  15. Applying six classifiers to airborne hyperspectral imagery for detecting giant reed

    USDA-ARS?s Scientific Manuscript database

    This study evaluated and compared six different image classifiers, including minimum distance (MD), Mahalanobis distance (MAHD), maximum likelihood (ML), spectral angle mapper (SAM), mixture tuned matched filtering (MTMF) and support vector machine (SVM), for detecting and mapping giant reed (Arundo...

  16. ReplacementMatrix: a web server for maximum-likelihood estimation of amino acid replacement rate matrices.

    PubMed

    Dang, Cuong Cao; Lefort, Vincent; Le, Vinh Sy; Le, Quang Si; Gascuel, Olivier

    2011-10-01

    Amino acid replacement rate matrices are an essential basis of protein studies (e.g. in phylogenetics and alignment). A number of general purpose matrices have been proposed (e.g. JTT, WAG, LG) since the seminal work of Margaret Dayhoff and co-workers. However, it has been shown that matrices specific to certain protein groups (e.g. mitochondrial) or life domains (e.g. viruses) differ significantly from general average matrices, and thus perform better when applied to the data to which they are dedicated. This Web server implements the maximum-likelihood estimation procedure that was used to estimate LG, and provides a number of tools and facilities. Users upload a set of multiple protein alignments from their domain of interest and receive the resulting matrix by email, along with statistics and comparisons with other matrices. A non-parametric bootstrap is performed optionally to assess the variability of replacement rate estimates. Maximum-likelihood trees, inferred using the estimated rate matrix, are also computed optionally for each input alignment. Finely tuned procedures and up-to-date ML software (PhyML 3.0, XRATE) are combined to perform all these heavy calculations on our clusters. Availability: http://www.atgc-montpellier.fr/ReplacementMatrix/. Contact: olivier.gascuel@lirmm.fr. Supplementary data are available at http://www.atgc-montpellier.fr/ReplacementMatrix/

  17. Data Format Classification for Autonomous Software Defined Radios

    NASA Technical Reports Server (NTRS)

    Simon, Marvin; Divsalar, Dariush

    2005-01-01

    We present maximum-likelihood (ML) coherent and noncoherent classifiers for discriminating between NRZ and Manchester coded (biphase-L) data formats for binary phase-shift-keying (BPSK) modulation. Such classification of the data format is an essential element of so-called autonomous software defined radio (SDR) receivers (similar to so-called cognitive SDR receivers in military applications), where the receiver is expected to perform each of its functions by extracting the appropriate knowledge from the received signal, with as little prior information about the other signal parameters as possible. Small- and large-SNR approximations to the ML classifiers are also proposed that lead to simpler implementations with comparable performance in their respective SNR regions. Numerical performance results, obtained by a combination of computer simulation and, wherever possible, theoretical analysis, are presented, and comparisons are made among the various configurations based on the probability of misclassification as a performance criterion. Extensions to other modulations such as QPSK are readily accomplished using the same methods described in the paper.
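
    A sketch of the coherent ML classification rule for this two-format problem, assuming known timing, phase, and noise level (the noncoherent and approximate variants discussed above are not shown): correlate each symbol window with both candidate pulses and compare sum log-cosh statistics; with equal pulse energies the bias terms cancel. All parameters are illustrative.

    ```python
    # Coherent ML classifier for NRZ vs Manchester (biphase-L) BPSK pulses.
    import numpy as np

    rng = np.random.default_rng(0)
    ns, nsym, sigma = 8, 200, 1.0                     # samples/symbol, symbols, noise std
    nrz = np.ones(ns)
    manch = np.concatenate([np.ones(ns // 2), -np.ones(ns // 2)])

    bits = rng.choice([-1.0, 1.0], nsym)
    r = (bits[:, None] * manch).ravel() + rng.normal(0, sigma, ns * nsym)  # true: Manchester

    def llr(r, pulse):
        z = r.reshape(-1, ns) @ pulse / sigma**2      # per-symbol correlations
        return np.sum(np.logaddexp(z, -z))            # sum of log(2 cosh z), constants dropped

    print("Manchester" if llr(r, manch) > llr(r, nrz) else "NRZ")
    ```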

  18. Diagnostic accuracy of a novel software technology for detecting pneumothorax in a porcine model.

    PubMed

    Summers, Shane M; Chin, Eric J; April, Michael D; Grisell, Ronald D; Lospinoso, Joshua A; Kheirabadi, Bijan S; Salinas, Jose; Blackbourne, Lorne H

    2017-09-01

    Our objective was to measure the diagnostic accuracy of a novel software technology to detect pneumothorax on Brightness (B) mode and Motion (M) mode ultrasonography. Ultrasonography fellowship-trained emergency physicians performed thoracic ultrasonography at baseline and after surgically creating a pneumothorax in eight intubated, spontaneously breathing porcine subjects. Prior to pneumothorax induction, we captured sagittal M-mode still images and B-mode videos of each intercostal space with a linear array transducer at 4 cm of depth. After collection of baseline images, we placed a chest tube, injected air into the pleural space in 250 mL increments, and repeated the ultrasonography for pneumothorax volumes of 250 mL, 500 mL, 750 mL, and 1000 mL. We confirmed pneumothorax with intrapleural digital manometry and ultrasound by expert sonographers. We exported collected images for interpretation by the software. We treated each individual scan as a single test for interpretation by the software. Excluding indeterminate results, we collected 338 M-mode images for which the software demonstrated a sensitivity of 98% (95% confidence interval [CI] 92-99%), specificity of 95% (95% CI 86-99), positive likelihood ratio (LR+) of 21.6 (95% CI 7.1-65), and negative likelihood ratio (LR-) of 0.02 (95% CI 0.008-0.046). Among 364 B-mode videos, the software demonstrated a sensitivity of 86% (95% CI 81-90%), specificity of 85% (95% CI 81-91%), LR+ of 5.7 (95% CI 3.2-10.2), and LR- of 0.17 (95% CI 0.12-0.22). This novel technology has potential as a useful adjunct to diagnose pneumothorax on thoracic ultrasonography. Published by Elsevier Inc.

  19. Comparison of Methods for Analyzing Left-Censored Occupational Exposure Data

    PubMed Central

    Huynh, Tran; Ramachandran, Gurumurthy; Banerjee, Sudipto; Monteiro, Joao; Stenzel, Mark; Sandler, Dale P.; Engel, Lawrence S.; Kwok, Richard K.; Blair, Aaron; Stewart, Patricia A.

    2014-01-01

    The National Institute for Environmental Health Sciences (NIEHS) is conducting an epidemiologic study (GuLF STUDY) to investigate the health of the workers and volunteers who participated from April to December of 2010 in the response and cleanup of the oil release after the Deepwater Horizon explosion in the Gulf of Mexico. The exposure assessment component of the study involves analyzing thousands of personal monitoring measurements that were collected during this effort. A substantial portion of these data has values reported by the analytic laboratories to be below the limits of detection (LOD). A simulation study was conducted to evaluate three established methods for analyzing data with censored observations to estimate the arithmetic mean (AM), geometric mean (GM), geometric standard deviation (GSD), and the 95th percentile (X0.95) of the exposure distribution: the maximum likelihood (ML) estimation, the β-substitution, and the Kaplan–Meier (K-M) methods. Each method was challenged with computer-generated exposure datasets drawn from lognormal and mixed lognormal distributions with sample sizes (N) varying from 5 to 100, GSDs ranging from 2 to 5, and censoring levels ranging from 10 to 90%, with single and multiple LODs. Using relative bias and relative root mean squared error (rMSE) as the evaluation metrics, the β-substitution method generally performed as well or better than the ML and K-M methods in most simulated lognormal and mixed lognormal distribution conditions. The ML method was suitable for large sample sizes (N ≥ 30) up to 80% censoring for lognormal distributions with small variability (GSD = 2–3). The K-M method generally provided accurate estimates of the AM when the censoring was <50% for lognormal and mixed distributions. The accuracy and precision of all methods decreased under high variability (GSD = 4 and 5) and small to moderate sample sizes (N < 20) but the β-substitution was still the best of the three methods. When using the ML method, practitioners are cautioned to be aware of different ways of estimating the AM as they could lead to biased interpretation. A limitation of the β-substitution method is the absence of a confidence interval for the estimate. More research is needed to develop methods that could improve the estimation accuracy for small sample sizes and high percent censored data and also provide uncertainty intervals. PMID:25261453
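
    The ML method for left-censored lognormal data can be sketched compactly: detects contribute the log-density and nondetects contribute the log-CDF at the LOD. The code below is illustrative, with a single LOD and simulated data; it is not the study's implementation.

    ```python
    # ML fit of a lognormal exposure distribution with left-censored values.
    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(0)
    gm, gsd, lod = 1.0, 2.5, 0.8
    x = rng.lognormal(np.log(gm), np.log(gsd), 50)
    censored = x < lod                                 # nondetects (value unknown, < LOD)

    def neg_loglik(theta):
        mu, log_s = theta
        s = np.exp(log_s)                              # enforce s > 0
        ll = stats.norm.logpdf(np.log(x[~censored]), mu, s).sum()
        ll += censored.sum() * stats.norm.logcdf((np.log(lod) - mu) / s)
        return -ll

    res = optimize.minimize(neg_loglik, x0=[0.0, 0.0])
    mu_hat, s_hat = res.x[0], np.exp(res.x[1])
    print("GM =", np.exp(mu_hat), "GSD =", np.exp(s_hat))
    # The AM follows as exp(mu + s^2 / 2), one of the "different ways of
    # estimating the AM" the abstract cautions about.
    ```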

  20. Mediterranean Land Use and Land Cover Classification Assessment Using High Spatial Resolution Data

    NASA Astrophysics Data System (ADS)

    Elhag, Mohamed; Boteva, Silvena

    2016-10-01

    Landscape fragmentation is pronounced in Mediterranean regions and imposes substantial complications for several satellite image classification methods. To some extent, high spatial resolution data are able to overcome such complications. For better classification performance in Land Use Land Cover (LULC) mapping, the current research compares different classification methods for LULC mapping using the Sentinel-2 satellite as a source of high spatial resolution data. Both pixel-based and object-based classification algorithms were assessed: the pixel-based approach employs the Maximum Likelihood (ML), Artificial Neural Network (ANN) and Support Vector Machine (SVM) algorithms, while the object-based classification uses the Nearest Neighbour (NN) classifier. A Stratified Masking Process (SMP), which integrates a ranking process within the classes based on the spectral fluctuation of the combined training and testing sites, was implemented. An analysis of the overall and individual accuracy of the classification results of all four methods reveals that the SVM classifier was the most efficient overall, distinguishing most of the classes with the highest accuracy. NN dealt well with artificial surface classes in general, while agricultural area classes and forest and semi-natural area classes were segregated successfully with SVM. Furthermore, a comparative analysis indicates that the conventional classification method yielded better accuracy results overall than the SMP method with both classifiers used, ML and SVM.

  1. Comparison of methods for H*(10) calculation from measured LaBr3(Ce) detector spectra.

    PubMed

    Vargas, A; Cornejo, N; Camp, A

    2018-07-01

    The Universitat Politecnica de Catalunya (UPC) and the Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT) have evaluated methods based on stripping, conversion coefficients and Maximum Likelihood Estimation using Expectation Maximization (ML-EM) for calculating H*(10) rates from photon pulse-height spectra acquired with a spectrometric LaBr3(Ce) (1.5″ × 1.5″) detector. There is good agreement between the results of the different H*(10) rate calculation methods using the spectra measured at the UPC secondary standard calibration laboratory in Barcelona. From the outdoor study at the ESMERALDA station in Madrid, it can be concluded that the analysed methods provide results quite similar to those obtained with the reference RSS ionization chamber. In addition, the spectrometric detectors can also facilitate radionuclide identification. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Phylogenetically marking the limits of the genus Fusarium for post-Article 59 usage

    USDA-ARS?s Scientific Manuscript database

    Fusarium (Hypocreales, Nectriaceae) is one of the most important and systematically challenging groups of mycotoxigenic, plant pathogenic, and human pathogenic fungi. We conducted maximum likelihood (ML), maximum parsimony (MP) and Bayesian (B) analyses on partial nucleotide sequences of genes encod...

  3. Extending the BEAGLE library to a multi-FPGA platform

    PubMed Central

    2013-01-01

    Background Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein’s pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein’s pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. Results The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform’s peak memory bandwidth and the implementation’s memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE’s CPU implementation on a dual Xeon 5520 and 3X speedup versus BEAGLE’s GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. Conclusions The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints to match the target platform. We have found that using low-latency floating-point operators can significantly reduce FPGA area and still meet timing requirement on the target platform. We found that this design methodology can achieve performance that exceeds that of a GPU-based coprocessor. PMID:23331707

  4. Cramer-Rao bound analysis of wideband source localization and DOA estimation

    NASA Astrophysics Data System (ADS)

    Yip, Lean; Chen, Joe C.; Hudson, Ralph E.; Yao, Kung

    2002-12-01

    In this paper, we derive the Cramér-Rao Bound (CRB) for wideband source localization and DOA estimation. The resulting CRB formula can be decomposed into two terms: one that depends on the signal characteristic and one that depends on the array geometry. For a uniformly spaced circular array (UCA), a concise analytical form of the CRB can be given by using some algebraic approximation. We further define a DOA beamwidth based on the resulting CRB formula. The DOA beamwidth can be used to design the sampling angular spacing for the Maximum-likelihood (ML) algorithm. For a randomly distributed array, we use an elliptical model to determine the largest and smallest effective beamwidth. The effective beamwidth and the CRB analysis of source localization allow us to design an efficient algorithm for the ML estimator. Finally, our simulation results of the Approximated Maximum Likelihood (AML) algorithm are demonstrated to match well to the CRB analysis at high SNR.

  5. An improved non-blind image deblurring method based on FoEs

    NASA Astrophysics Data System (ADS)

    Zhu, Qidan; Sun, Lei

    2013-03-01

    Traditional non-blind image deblurring algorithms typically use maximum a posteriori (MAP) estimation. MAP estimates involving natural image priors can effectively reduce ripple artifacts, in contrast to maximum likelihood (ML), but they have been found lacking in terms of restoration performance. To address this issue, we replace the traditional MAP objective with MAP plus a KL penalty. We develop an image reconstruction algorithm that minimizes the KL divergence between the reference distribution and the prior distribution. The approximate KL penalty restrains the over-smoothing caused by MAP. We evaluate our method using three groups of images and Harris corner detection. The experimental results show that our non-blind image restoration algorithm effectively reduces the ringing effect and exhibits state-of-the-art deblurring results.

  6. Constrained maximum likelihood modal parameter identification applied to structural dynamics

    NASA Astrophysics Data System (ADS)

    El-Kafafy, Mahmoud; Peeters, Bart; Guillaume, Patrick; De Troyer, Tim

    2016-05-01

    A new modal parameter estimation method to directly establish modal models of structural dynamic systems satisfying two physically motivated constraints will be presented. The constraints imposed in the identified modal model are the reciprocity of the frequency response functions (FRFs) and the estimation of normal (real) modes. The motivation behind the first constraint (i.e. reciprocity) comes from the fact that modal analysis theory shows that the FRF matrix and therefore the residue matrices are symmetric for non-gyroscopic, non-circulatory, and passive mechanical systems. In other words, such types of systems are expected to obey Maxwell-Betti's reciprocity principle. The second constraint (i.e. real mode shapes) is motivated by the fact that analytical models of structures are assumed to be either undamped or proportionally damped. Therefore, normal (real) modes are needed for comparison with these analytical models. The work done in this paper is a further development of a recently introduced modal parameter identification method called ML-MM that enables us to establish a modal model that satisfies such physically motivated constraints. The proposed constrained ML-MM method is applied to two real experimental datasets measured on fully trimmed cars. This type of data is still considered a significant challenge in modal analysis. The results clearly demonstrate the applicability of the method to real structures with significant non-proportional damping and high modal densities.

  7. A Recommended Procedure for Estimating the Cosmic-Ray Spectral Parameter of a Simple Power Law With Applications to Detector Design

    NASA Technical Reports Server (NTRS)

    Howell, L. W.

    2001-01-01

    A simple power-law model consisting of a single spectral index α1 is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV. Two procedures for estimating α1, the method of moments and maximum likelihood (ML), are developed and their statistical performance compared. It is concluded that the ML procedure attains the most desirable statistical properties and is hence the recommended statistical estimation procedure for estimating α1. The ML procedure is then generalized for application to a set of real cosmic-ray data and thereby makes this approach applicable to existing cosmic-ray data sets. Several other important results, such as the relationship between collecting power and detector energy resolution, as well as inclusion of a non-Gaussian detector response function, are presented. These results have many practical benefits in the design phase of a cosmic-ray detector as they permit instrument developers to make important trade studies in design parameters as a function of one of the science objectives. This is particularly important for space-based detectors where physical parameters, such as dimension and weight, impose rigorous practical limits to the design envelope.
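
    Over an unbounded energy range, the ML estimator of a simple power-law index has a well-known closed form, which is a useful reference point here; the report's estimators additionally handle finite energy ranges and detector response, which this sketch omits.

    ```python
    # Closed-form ML estimate of a power-law index from events above E_min:
    # alpha_hat = 1 + n / sum(log(E_i / E_min)).
    import numpy as np

    rng = np.random.default_rng(0)
    alpha_true, e_min, n = 2.7, 1.0, 10_000
    # Inverse-CDF sampling from p(E) ~ E^(-alpha), E >= e_min.
    E = e_min * (1.0 - rng.random(n)) ** (-1.0 / (alpha_true - 1.0))

    alpha_hat = 1.0 + n / np.log(E / e_min).sum()
    print(alpha_hat)   # close to 2.7
    ```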

  8. Diagnostic accuracy of second-generation dual-source computed tomography coronary angiography with iterative reconstructions: a real-world experience.

    PubMed

    Maffei, E; Martini, C; Rossi, A; Mollet, N; Lario, C; Castiglione Morelli, M; Clemente, A; Gentile, G; Arcadi, T; Seitun, S; Catalano, O; Aldrovandi, A; Cademartiri, F

    2012-08-01

    The authors evaluated the diagnostic accuracy of second-generation dual-source computed tomography (DSCT) coronary angiography (CTCA) with iterative reconstructions for detecting obstructive coronary artery disease (CAD). Between June 2010 and February 2011, we enrolled 160 patients (85 men; mean age 61.2±11.6 years) with suspected CAD. All patients underwent CTCA and conventional coronary angiography (CCA). For the CTCA scan (Definition Flash, Siemens), we used prospective tube current modulation and 70-100 ml of iodinated contrast material (Iomeprol 400 mgI/ml, Bracco). Data sets were reconstructed with an iterative reconstruction algorithm (IRIS, Siemens). CTCA and CCA reports were used to evaluate accuracy using thresholds for significant stenosis of ≥50% and ≥70%, respectively. No patient was excluded from the analysis. Heart rate was 64.3±11.9 bpm and radiation dose was 7.2±2.1 mSv. Disease prevalence was 30% (48/160). Sensitivity, specificity and positive and negative predictive values of CTCA in detecting significant stenosis were 90.1%, 93.3%, 53.2% and 99.1% (per segment), 97.5%, 91.2%, 61.4% and 99.6% (per vessel) and 100%, 83%, 71.6% and 100% (per patient), respectively. Positive and negative likelihood ratios at the per-patient level were 5.89 and 0.0, respectively. CTCA with second-generation DSCT in the real clinical world shows a diagnostic performance comparable with previously reported validation studies. The excellent negative predictive value and likelihood ratio make CTCA a first-line noninvasive method for diagnosing obstructive CAD.
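
    The quoted per-patient likelihood ratios follow directly from sensitivity and specificity:

    ```python
    # LR+ = sens / (1 - spec); LR- = (1 - sens) / spec (per-patient values above).
    sens, spec = 1.00, 0.83
    print(round(sens / (1 - spec), 2))   # ~5.88, matching the reported 5.89
    print(round((1 - sens) / spec, 2))   # 0.0, as reported
    ```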

  9. Empirical projection-based basis-component decomposition method

    NASA Astrophysics Data System (ADS)

    Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland

    2009-02-01

    Advances in the development of semiconductor based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which in addition to the conventional approach of Alvarez and Macovski a third basis component is introduced, e.g., a gadolinium based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood-function of the measurements. This procedure is time consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image-domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line-integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach considering image noise and image bias (artifacts) and see that only moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.

  10. Maximum-likelihood spectral estimation and adaptive filtering techniques with application to airborne Doppler weather radar. Thesis Technical Report No. 20

    NASA Technical Reports Server (NTRS)

    Lai, Jonathan Y.

    1994-01-01

    This dissertation focuses on the signal processing problems associated with the detection of hazardous windshears using airborne Doppler radar when weak weather returns are in the presence of strong clutter returns. In light of the frequent inadequacy of spectral-processing oriented clutter suppression methods, we model a clutter signal as multiple sinusoids plus Gaussian noise, and propose adaptive filtering approaches that better capture the temporal characteristics of the signal process. This idea leads to two research topics in signal processing: (1) signal modeling and parameter estimation, and (2) adaptive filtering in this particular signal environment. A high-resolution, low SNR threshold maximum likelihood (ML) frequency estimation and signal modeling algorithm is devised and proves capable of delineating both the spectral and temporal nature of the clutter return. Furthermore, the Least Mean Square (LMS) -based adaptive filter's performance for the proposed signal model is investigated, and promising simulation results have testified to its potential for clutter rejection leading to more accurate estimation of windspeed thus obtaining a better assessment of the windshear hazard.
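
    The LMS-based adaptive clutter rejection investigated above follows the classic adaptive noise-canceller structure. A toy sketch, assuming a separate reference channel correlated with the clutter (all parameters illustrative, not the dissertation's radar model):

    ```python
    # LMS noise canceller: adaptively filter a clutter reference to cancel the
    # strong sinusoidal clutter in d, leaving the weak signal in the residual.
    import numpy as np

    rng = np.random.default_rng(0)
    n = np.arange(4000)
    clutter = 5.0 * np.cos(2 * np.pi * 0.11 * n)
    weak = 0.2 * np.cos(2 * np.pi * 0.23 * n)
    d = clutter + weak + 0.05 * rng.normal(size=n.size)    # received signal
    x_ref = clutter + 0.05 * rng.normal(size=n.size)       # clutter reference channel

    taps, mu = 16, 1e-3
    w = np.zeros(taps)
    err = np.zeros(n.size)
    for k in range(taps, n.size):
        x = x_ref[k - taps:k][::-1]
        y = w @ x                      # filter output: clutter estimate
        err[k] = d[k] - y              # residual: weak signal + noise
        w += 2 * mu * err[k] * x       # LMS weight update

    print(np.std(err[2000:]))          # residual power well below the clutter power
    ```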

  11. Evaluation of Shiryaev-Roberts Procedure for On-line Environmental Radiation Monitoring

    NASA Astrophysics Data System (ADS)

    Watson, Mara Mae

    An on-line radiation monitoring system that simultaneously concentrates and detects radioactivity is needed to detect an accidental leakage from a nuclear waste disposal facility or clandestine nuclear activity. Previous studies have shown that classical control chart methods can be applied to on-line radiation monitoring data to quickly detect these events as they occur; however, Bayesian control chart methods were not included in these studies. This work evaluates the performance of a Bayesian control chart method, the Shiryaev-Roberts (SR) procedure, compared with classical control chart methods, Shewhart 3-sigma and cumulative sum (CUSUM), for use in on-line radiation monitoring of 99Tc in water using extractive scintillating resin. Measurements were collected by pumping solutions containing 0.1-5 Bq/L of 99Tc, as 99TcO4-, through a flow cell packed with extractive scintillating resin coupled to a Beta-RAM Model 5 HPLC detector. While 99TcO4- accumulated on the resin, simultaneous measurements were acquired in 10-s intervals and then re-binned to 100-s intervals. The Bayesian statistical method, the Shiryaev-Roberts procedure, and the classical control chart methods, Shewhart 3-sigma and cumulative sum (CUSUM), were applied to the data using statistical algorithms developed in MATLAB®. Two SR control charts were constructed using Poisson distributions and Gaussian distributions to estimate the likelihood ratio, and are referred to as Poisson SR and Gaussian SR to indicate the distribution used to calculate the statistic. The Poisson and Gaussian SR methods required as little as 28.9 mL less solution at 5 Bq/L and as much as 170 mL less solution at 0.5 Bq/L to exceed the control limit than the Shewhart 3-sigma method. The Poisson SR method needed as little as 6.20 mL less solution at 5 Bq/L and up to 125 mL less solution at 0.5 Bq/L to exceed the control limit than the CUSUM method. The Gaussian SR and CUSUM methods required comparable solution volumes for test solutions containing at least 1.5 Bq/L of 99Tc. For activity concentrations less than 1.5 Bq/L, the Gaussian SR method required as much as 40.8 mL less solution at 0.5 Bq/L to exceed the control limit than the CUSUM method. Both SR methods were able to consistently detect test solutions containing 0.1 Bq/L, unlike the Shewhart 3-sigma and CUSUM methods. Although the Poisson SR method required as much as 178 mL less solution to exceed the control limit than the Gaussian SR method, the Gaussian SR false-positive rate of 0% was much lower than the Poisson SR false-positive rate of 1.14%. A lower false-positive rate made it easier to differentiate between a false positive and an increase in mean count rate caused by activity accumulating on the resin. The SR procedure is thus the ideal tool for low-level on-line radiation monitoring using extractive scintillating resin, because it needed less volume in most cases to detect an upward shift in the mean count rate than the Shewhart 3-sigma and CUSUM methods and consistently detected lower activity concentrations. The desired results for the monitoring scheme, however, need to be considered prior to choosing between the Poisson and Gaussian distribution to estimate the likelihood ratio, because each was advantageous under different circumstances. Once the control limit was exceeded, activity concentrations were estimated from the SR control chart using the slope of the control chart on a semi-logarithmic plot.
Five of nine test solutions for the Poisson SR control chart produced concentration estimates within 30% of the actual value, but the worst case differed from the actual value by 263.2%. The estimates for the Gaussian SR control chart were much more precise, with six of eight solutions producing estimates within 30%. Although the activity concentration estimates were only mediocre for the Poisson SR control chart and satisfactory for the Gaussian SR control chart, these results demonstrate that a relationship exists between activity concentration and the SR control chart magnitude that can be exploited to determine the activity concentration from the SR control chart. More complex methods should be investigated to improve activity concentration estimates from the SR control charts.
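
    The SR statistic itself is a one-line recursion: R_n = (1 + R_{n-1}) * LR(x_n), with an alarm when R_n crosses a control limit. A sketch for Poisson counts with known in-control and post-change rates follows; the rates and limit are illustrative, not the thesis's calibrated values.

    ```python
    # Shiryaev-Roberts recursion for Poisson counts with a known rate change.
    import numpy as np

    lam0, lam1, limit = 10.0, 13.0, 1e4
    rng = np.random.default_rng(0)
    counts = np.concatenate([rng.poisson(lam0, 300), rng.poisson(lam1, 200)])

    R = 0.0
    for n, x in enumerate(counts):
        lr = np.exp(-(lam1 - lam0)) * (lam1 / lam0) ** x   # Poisson likelihood ratio
        R = (1.0 + R) * lr
        if R >= limit:
            print(f"alarm at observation {n} (change injected at 300)")
            break
    ```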

  12. Fluorescein angiography vs. optical coherence tomography for diagnosis of uveitic macular edema

    PubMed Central

    Kempen, John H.; Sugar, Elizabeth A.; Jaffe, Glenn J.; Acharya, Nisha R.; Dunn, James P.; Elner, Susan G.; Lightman, Susan L.; Thorne, Jennifer E.; Vitale, Albert T.; Altaweel, Michael M.

    2013-01-01

    Objective To evaluate agreement between fluorescein angiography (FA) and optical coherence tomography (OCT) for diagnosis of macular edema in patients with uveitis. Design Multicenter cross-sectional study Participants Four hundred seventy-nine eyes with uveitis of 255 patients Methods The macular status of dilated eyes with intermediate, posterior or panuveitis was assessed via Stratus-3 OCT and FA. Kappa statistics evaluated agreement between the diagnostic approaches. Main Outcome Measures Macular thickening (center point thickness ≥240 μm per reading center grading of OCT images, “MT”) and macular leakage (central subfield fluorescein leakage ≥0.44 disk areas per reading center grading of FA images, “ML”); agreement amongst these outcomes in diagnosing “macular edema.” Results OCT (90.4%) more frequently returned usable information regarding macular edema than FA (77%) and biomicroscopy (76%). Agreement in diagnosis of MT and ML (κ=0.44) was moderate. ML was present in 40% of cases free of MT, whereas MT was present in 34% of cases without ML. Biomicroscopic evaluation for macular edema failed to detect 40% and 45% of cases of MT and ML respectively and diagnosed 17% and 17% of cases with macular edema which did not have MT or ML respectively; these results may underestimate biomicroscopic errors (ophthalmologists were not explicitly masked to OCT and FA results). Among eyes free of ML, phakic eyes without cataract rarely (4%) had MT. No factors were found that effectively ruled out ML when MT was absent. Conclusion OCT and FA offered only moderate agreement regarding macular edema status in uveitis cases, probably because what they measure (MT and ML) are related but non-identical macular pathologies. Given its lower cost, greater safety, and greater likelihood of obtaining usable information, OCT may be the best initial test for evaluation of suspected macular edema. However, given that ML cannot be ruled out if MT is absent and vice versa, obtaining the second test after a negative result on the first seems justified when detection of ML or MT would alter management. Given that biomicroscopic evaluation for macular edema frequently erred, ancillary testing for macular edema seems indicated when knowledge of ML or MT status would affect management. PMID:23706700

  13. Self-configurable radio receiver system and method for use with signals without prior knowledge of signal defining characteristics

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon (Inventor); Simon, Marvin K. (Inventor); Divsalar, Dariush (Inventor); Dolinar, Samuel J. (Inventor); Tkacenko, Andre (Inventor)

    2013-01-01

    A method, radio receiver, and system to autonomously receive and decode a plurality of signals having a variety of signal types, without a priori knowledge of the defining characteristics of the signals, is disclosed. The radio receiver is capable of receiving a signal of an unknown type and, by estimating one or more defining characteristics of the signal, determining the signal type. The estimated defining characteristic(s) are then used to enable the receiver to determine other defining characteristics. This, in turn, enables the receiver, through multiple iterations, to make a maximum-likelihood (ML) estimate for each of the defining characteristics. After the type of signal is determined by its defining characteristics, the receiver selects an appropriate decoder from a plurality of decoders to decode the signal.
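
    As a minimal illustration of ML signal-type classification in the same spirit (a toy stand-in, not the patented receiver): given noisy symbols, compute the average log-likelihood of the data under each candidate constellation, treating the transmitted point as uniformly distributed, and pick the maximizer. Constellations, noise level, and sample counts below are hypothetical.

```python
import numpy as np

CONSTELLATIONS = {
    "BPSK": np.array([1, -1], dtype=complex),
    "QPSK": np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4))),
}

def avg_loglik(r, points, sigma2):
    """Log-likelihood of samples r under a constellation, with the transmitted
    point uniform: p(r) = mean over points of complex-Gaussian densities."""
    d2 = np.abs(r[:, None] - points[None, :]) ** 2
    dens = np.exp(-d2 / sigma2) / (np.pi * sigma2)
    return np.log(dens.mean(axis=1)).sum()

rng = np.random.default_rng(1)
true_pts = CONSTELLATIONS["QPSK"]
s = true_pts[rng.integers(0, len(true_pts), 2000)]
sigma2 = 0.2
noise = rng.normal(0, np.sqrt(sigma2 / 2), (2, 2000))
r = s + noise[0] + 1j * noise[1]
scores = {name: avg_loglik(r, p, sigma2) for name, p in CONSTELLATIONS.items()}
print(max(scores, key=scores.get))   # -> "QPSK"
```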

  14. The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates

    NASA Technical Reports Server (NTRS)

    Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2008-01-01

    We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.
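
    A bounded-angle ML decoder is easy to sketch: decode to the codeword closest in angle to the received vector, but declare a detected error when even the best angle exceeds a threshold. The toy codebook below is random (not a real code) and the threshold is arbitrary; it illustrates the incompleteness of the decoder, not the bound derived in the paper.

```python
import numpy as np

def ba_ml_decode(r, codebook, theta_max):
    """Bounded-angle ML: pick the closest codeword by angle, but declare a
    detected error (return None) if that angle exceeds theta_max (radians)."""
    cos = codebook @ r / (np.linalg.norm(codebook, axis=1) * np.linalg.norm(r))
    best = int(np.argmax(cos))
    angle = np.arccos(np.clip(cos[best], -1.0, 1.0))
    return best if angle <= theta_max else None

rng = np.random.default_rng(2)
codebook = np.sign(rng.normal(size=(16, 32)))   # toy +/-1 codebook
sent = 3
r = codebook[sent] + rng.normal(0, 1.0, 32)
print(ba_ml_decode(r, codebook, theta_max=np.pi / 3))
```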

  15. A Test-Length Correction to the Estimation of Extreme Proficiency Levels

    ERIC Educational Resources Information Center

    Magis, David; Beland, Sebastien; Raiche, Gilles

    2011-01-01

    In this study, the estimation of extremely large or extremely small proficiency levels, given the item parameters of a logistic item response model, is investigated. On one hand, the estimation of proficiency levels by maximum likelihood (ML), despite being asymptotically unbiased, may yield infinite estimates. On the other hand, with an…
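
    The infinite-estimate pathology mentioned above is easy to reproduce: under a Rasch model, the likelihood for an all-correct response pattern increases monotonically in proficiency, so a bounded numerical maximizer simply runs to the edge of its search interval. Item difficulties below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # hypothetical item difficulties

def neg_loglik(theta, x):
    """Rasch negative log-likelihood for response pattern x at proficiency theta."""
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

for x in (np.array([1, 1, 0, 1, 0]), np.ones(5)):   # mixed vs perfect pattern
    res = minimize_scalar(neg_loglik, bounds=(-10, 10), args=(x,), method="bounded")
    print(x, round(res.x, 3))   # the perfect pattern's "estimate" hits the bound
```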

  16. Weakly Informative Prior for Point Estimation of Covariance Matrices in Hierarchical Models

    ERIC Educational Resources Information Center

    Chung, Yeojin; Gelman, Andrew; Rabe-Hesketh, Sophia; Liu, Jingchen; Dorie, Vincent

    2015-01-01

    When fitting hierarchical regression models, maximum likelihood (ML) estimation has computational (and, for some users, philosophical) advantages compared to full Bayesian inference, but when the number of groups is small, estimates of the covariance matrix (S) of group-level varying coefficients are often degenerate. One can do better, even from…

  17. Comparison of Radio Frequency Distinct Native Attribute and Matched Filtering Techniques for Device Discrimination and Operation Identification

    DTIC Science & Technology

    identification. URE from ten MSP430F5529 16-bit microcontrollers were analyzed using: 1) RF distinct native attributes (RF-DNA) fingerprints paired with multiple...discriminant analysis/maximum likelihood (MDA/ML) classification, 2) RF-DNA fingerprints paired with generalized relevance learning vector quantized

  18. Phylogenetic analyses of RPB1 and RPB2 support a middle Cretaceous origin for a clade comprising all agriculturally and medically important fusaria

    USDA-ARS?s Scientific Manuscript database

    Fusarium (Hypocreales, Nectriaceae) is one of the most economically important and systematically challenging groups of mycotoxigenic phytopathogens and emergent human pathogens. We conducted maximum likelihood (ML), maximum parsimony (MP) and Bayesian (B) analyses on partial RNA polymerase largest (...

  19. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies

    PubMed Central

    Rukhin, Andrew L.

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving the likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed. PMID:26989583

  20. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies.

    PubMed

    Rukhin, Andrew L

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving the likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed.

  1. RECTAL-SPECIFIC MICROBICIDE APPLICATOR: EVALUATION AND COMPARISON WITH A VAGINAL APPLICATOR USED RECTALLY

    PubMed Central

    Carballo-Diéguez, Alex; Giguere, Rebecca; Dolezal, Curtis; Bauermeister, José; Leu, Cheng-Shiun; Valladares, Juan; Rohan, Lisa C.; Anton, Peter A.; Cranston, Ross D.; Febo, Irma; Mayer, Kenneth; McGowan, Ian

    2014-01-01

    An applicator designed for rectal delivery of microbicides was tested for acceptability by 95 young men who have sex with men, who self-administered 4 mL of placebo gel prior to receptive anal intercourse over 90 days. Subsequently, 24 of the participants rectally self-administered 4 mL of tenofovir or placebo gel over 7 days using a vaginal applicator, and compared both applicators on a Likert scale of 1–10, with 10 being the highest rating. Participants reported high likelihood to use either applicator in the future (mean scores 9.3 and 8.8, respectively; p = ns). Those who tested both liked the vaginal applicator significantly more than the rectal applicator (7.8 vs. 5.2, p=0.003). Improvements in portability, conspicuousness, aesthetics, tip comfort, product assembly, and packaging were suggested for both. This rectal-specific applicator was not superior to a vaginal applicator. While reported likelihood of future use is high, factors that decrease acceptability may erode product use over time in clinical trials. Further attention is needed to develop user-friendly, quick-acting rectal microbicide delivery systems. PMID:24858481

  2. Spatial hydrological drought characteristics in Karkheh River basin, southwest Iran using copulas

    NASA Astrophysics Data System (ADS)

    Dodangeh, Esmaeel; Shahedi, Kaka; Shiau, Jenq-Tzong; MirAkbari, Maryam

    2017-08-01

    Investigation of drought characteristics such as severity, duration, and frequency is crucial for water resources planning and management in a river basin. While the methodology for multivariate drought frequency analysis using copulas is well established, the effects of different parameter estimation methods on the obtained results have not yet been investigated. This research conducts a comparative analysis of the parametric maximum likelihood method and the non-parametric Kendall τ inversion method for copula parameter estimation. The methods were employed to study joint severity-duration probability and recurrence intervals in the Karkheh River basin (southwest Iran), which is facing severe water-deficit problems. Daily streamflow data at three hydrological gauging stations (Tang Sazbon, Huleilan and Polchehr) near the Karkheh dam were used to draw flow duration curves (FDC) for these stations. The Q_{75} index extracted from the FDC was set as the threshold level for extracting drought characteristics such as drought duration and severity on the basis of run theory. Drought duration and severity were separately modeled using univariate probabilistic distributions, and gamma-GEV, LN2-exponential, and LN2-gamma were selected as the best paired drought severity-duration inputs for copulas according to the Akaike Information Criterion (AIC), Kolmogorov-Smirnov and chi-square tests. The Archimedean Clayton and Frank and the extreme-value Gumbel copulas were employed to construct joint cumulative distribution functions (JCDF) of droughts for each station. The Frank copula at Tang Sazbon and the Gumbel copula at Huleilan and Polchehr stations were identified as the best copulas based on performance evaluation criteria including AIC, BIC, log-likelihood and root mean square error (RMSE) values. Based on the RMSE values, the nonparametric Kendall τ method is preferred to the parametric maximum likelihood estimation method. The results showed greater drought return periods for the parametric ML method than for the nonparametric Kendall τ estimation method. The results also showed that the stations located on tributaries (Huleilan and Polchehr) have close return periods, while the station on the main river (Tang Sazbon) has smaller return periods for drought events with identical duration and severity.
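
    For the one-parameter families used here, the Kendall τ inversion has closed form: Clayton θ = 2τ/(1−τ) and Gumbel θ = 1/(1−τ) (Frank requires numerical inversion). A minimal sketch with synthetic drought duration/severity pairs; all data and values are hypothetical.

```python
import numpy as np
from scipy.stats import kendalltau

def copula_param_from_tau(tau, family):
    """Method-of-moments (Kendall tau inversion) copula parameter estimates."""
    if family == "clayton":
        return 2 * tau / (1 - tau)   # theta >= 0 for positive dependence
    if family == "gumbel":
        return 1 / (1 - tau)         # theta >= 1
    raise ValueError(family)

# Hypothetical drought duration (days) and severity pairs with positive dependence:
rng = np.random.default_rng(3)
duration = rng.gamma(2.0, 30.0, 200)
severity = duration * rng.lognormal(0.0, 0.3, 200)
tau, _ = kendalltau(duration, severity)
print("tau =", round(tau, 3),
      " Gumbel theta =", round(copula_param_from_tau(tau, "gumbel"), 3))
```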

  3. A comparison of model-based imputation methods for handling missing predictor values in a linear regression model: A simulation study

    NASA Astrophysics Data System (ADS)

    Hasan, Haliza; Ahmad, Sanizah; Osman, Balkish Mohd; Sapri, Shamsiah; Othman, Nadirah

    2017-08-01

    In regression analysis, missing covariate data is a common problem. Many researchers use ad hoc methods to overcome it because they are easy to implement. However, these methods require assumptions about the data that rarely hold in practice. Model-based methods such as maximum likelihood (ML) using the expectation-maximization (EM) algorithm and multiple imputation (MI) are more promising when dealing with the difficulties caused by missing data. At the same time, inappropriate methods of missing-value imputation can lead to serious bias that severely affects the parameter estimates. The main objective of this study is to provide a better understanding of missing-data concepts that can assist researchers in selecting appropriate imputation methods. A simulation study was performed to assess the effects of different missing-data techniques on the performance of a regression model. The covariate data were generated from an underlying multivariate normal distribution, and the dependent variable was generated as a combination of the explanatory variables. Missing values in a covariate were simulated under a missing at random (MAR) mechanism, at four levels of missingness (10%, 20%, 30% and 40%). The ML and MI techniques available within SAS software were investigated. A linear regression model was fitted, and the model performance measures, MSE and R-squared, were obtained. The results showed that MI is superior in handling missing data, with the highest R-squared and lowest MSE, when the percentage of missingness is less than 30%. Neither method handled levels of missingness above 30% well.
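
    The study used SAS; as a rough open-source analogue, the sketch below imposes MAR missingness that depends on an observed covariate and recovers regression coefficients with scikit-learn's IterativeImputer (a chained-equations-style imputer, shown here as a single completed data set rather than full MI with pooling; all parameters are hypothetical).

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
cov = [[1, .5, .3], [.5, 1, .4], [.3, .4, 1]]
X = rng.multivariate_normal([0, 0, 0], cov, 500)
y = X @ [1.0, 0.5, -0.8] + rng.normal(0, 1, 500)

# MAR: probability of missingness in X[:, 1] depends on the observed X[:, 0].
p_miss = 1 / (1 + np.exp(-(X[:, 0] - 1)))
X_miss = X.copy()
X_miss[rng.random(500) < p_miss, 1] = np.nan

X_imp = IterativeImputer(random_state=0).fit_transform(X_miss)
print(LinearRegression().fit(X_imp, y).coef_)   # near [1.0, 0.5, -0.8]
```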

  4. On the asymptotic standard error of a class of robust estimators of ability in dichotomous item response models.

    PubMed

    Magis, David

    2014-11-01

    In item response theory, the classical estimators of ability are highly sensitive to response disturbances and can return strongly biased estimates of the true underlying ability level. Robust methods were introduced to lessen the impact of such aberrant responses on the estimation process. The computation of asymptotic (i.e., large-sample) standard errors (ASE) for these robust estimators, however, has not yet been fully considered. This paper focuses on a broad class of robust ability estimators, defined by an appropriate selection of the weight function and the residual measure, for which the ASE is derived from the theory of estimating equations. The maximum likelihood (ML) and the robust estimators, together with their estimated ASEs, are then compared in a simulation study by generating random guessing disturbances. It is concluded that both the estimators and their ASE perform similarly in the absence of random guessing, while the robust estimator and its estimated ASE are less biased and outperform their ML counterparts in the presence of random guessing with large impact on the item response process. © 2013 The British Psychological Society.

  5. Impact of D-Dimer for Prediction of Incident Occult Cancer in Patients with Unprovoked Venous Thromboembolism.

    PubMed

    Han, Donghee; ó Hartaigh, Bríain; Lee, Ji Hyun; Cho, In-Jeong; Shim, Chi Young; Chang, Hyuk-Jae; Hong, Geu-Ru; Ha, Jong-Won; Chung, Namsik

    2016-01-01

    Unprovoked venous thromboembolism (VTE) is related to a higher incidence of occult cancer. D-dimer is used clinically for screening for VTE and has often been shown to be elevated in patients with malignancy. We explored the predictive value of D-dimer for detecting occult cancer in patients with unprovoked VTE. We retrospectively examined data from 824 patients diagnosed with deep vein thrombosis or pulmonary thromboembolism. Of these, 169 (20.5%) patients diagnosed with unprovoked VTE were selected to participate in this study. D-dimer was categorized into three groups: <2,000, 2,000-4,000, and >4,000 ng/ml. Cox regression analysis was employed to estimate the hazard of occult cancer and of metastatic cancer according to D-dimer category. During a median 5.3 (interquartile range: 3.4-6.7) years of follow-up, 24 (14%) patients with unprovoked VTE were diagnosed with cancer. Of these patients, 16 (67%) were diagnosed with metastatic cancer. Log-transformed D-dimer levels were significantly higher in those with occult cancer than in patients without a diagnosis of occult cancer (3.5±0.5 vs. 3.2±0.5, P-value = 0.009). D-dimer levels >4,000 ng/ml were independently associated with occult cancer (HR: 4.12, 95% CI: 1.54-11.04, P-value = 0.005) when compared with D-dimer levels <2,000 ng/ml, even after adjusting for age, gender, and type of VTE (e.g., deep vein thrombosis or pulmonary thromboembolism). D-dimer levels >4,000 ng/ml were also associated with a higher likelihood of metastatic cancer (HR: 9.55, 95% CI: 2.46-37.17, P-value <0.001). Elevated D-dimer concentrations >4,000 ng/ml are independently associated with the likelihood of occult cancer among patients with unprovoked VTE.

  6. Experimental validation of an OSEM-type iterative reconstruction algorithm for inverse geometry computed tomography

    NASA Astrophysics Data System (ADS)

    David, Sabrina; Burion, Steve; Tepe, Alan; Wilfley, Brian; Menig, Daniel; Funk, Tobias

    2012-03-01

    Iterative reconstruction methods have emerged as a promising avenue to reduce dose in CT imaging. Another, perhaps less well-known, advance has been the development of inverse geometry CT (IGCT) imaging systems, which can significantly reduce the radiation dose delivered to a patient during a CT scan compared to conventional CT systems. Here we show that IGCT data can be reconstructed using iterative methods, thereby combining two novel approaches to CT dose reduction. A prototype IGCT scanner was developed using a scanning-beam digital X-ray system: an inverse geometry fluoroscopy system with a 9,000-focal-spot X-ray source and a small photon-counting detector. Ninety fluoroscopic projections, or "superviews", spanning an angle of 360 degrees were acquired of an anthropomorphic phantom mimicking a 1-year-old boy. The superviews were reconstructed with a custom iterative reconstruction algorithm based on the maximum-likelihood algorithm for transmission tomography (ML-TR). The normalization term was calculated from flat-field data acquired without a phantom. Fifteen subsets were used, and a total of 10 complete iterations were performed. Initial reconstructed images showed faithful reconstruction of anatomical details, with good edge resolution and good contrast-to-noise properties. Overall, ML-TR reconstruction of IGCT data collected by a bench-top prototype was shown to be viable, which may be an important milestone in the further development of inverse geometry CT.
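
    For reference, a simplified, unordered (single-subset) ML-TR-style additive update for the Poisson transmission model y_i ~ Poisson(b_i·exp(−[Aμ]_i)) can be sketched in a few lines; the preconditioner follows the usual transmission surrogate, and the study's implementation with 15 ordered subsets and real system modelling is considerably more involved. The toy system matrix and counts below are hypothetical.

```python
import numpy as np

def ml_tr(A, y, blank, mu0, n_iter=50):
    """Additive ML-TR-style update for transmission tomography.
    Gradient of the Poisson log-likelihood w.r.t. mu_j is sum_i a_ij (yhat_i - y_i);
    the denominator is a surrogate-curvature preconditioner."""
    mu = mu0.copy()
    ray_sum = A.sum(axis=1)                      # sum_k a_ik per ray
    for _ in range(n_iter):
        yhat = blank * np.exp(-A @ mu)           # expected counts
        num = A.T @ (yhat - y)
        den = A.T @ (ray_sum * yhat)
        mu = np.clip(mu + num / den, 0.0, None)  # attenuation is non-negative
    return mu

rng = np.random.default_rng(5)
A = rng.random((40, 20)) * 0.1                   # toy 40-ray, 20-pixel system
mu_true = rng.random(20) * 0.5
blank = np.full(40, 1e4)                         # flat-field (blank scan) counts
y = rng.poisson(blank * np.exp(-A @ mu_true))
mu_est = ml_tr(A, y, blank, mu0=np.zeros(20))
print(np.round(mu_est[:5], 3), np.round(mu_true[:5], 3))
```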

  7. Blind Compensation of I/Q Impairments in Wireless Transceivers

    PubMed Central

    Aziz, Mohsin; Ghannouchi, Fadhel M.; Helaoui, Mohamed

    2017-01-01

    The majority of techniques that deal with the mitigation of in-phase and quadrature-phase (I/Q) imbalance at the transmitter (pre-compensation) require long training sequences, reducing the throughput of the system. These techniques also require a feedback path, which adds more complexity and cost to the transmitter architecture. Blind estimation techniques are attractive for avoiding the use of long training sequences. In this paper, we propose a blind frequency-independent I/Q imbalance compensation method based on the maximum likelihood (ML) estimation of the imbalance parameters of a transceiver. A closed-form joint probability density function (PDF) for the imbalanced I and Q signals is derived and validated. ML estimation is then used to estimate the imbalance parameters using the derived joint PDF of the output I and Q signals. Various figures of merit have been used to evaluate the efficacy of the proposed approach using extensive computer simulations and measurements. Additionally, the bit error rate curves show the effectiveness of the proposed method in the presence of the wireless channel and Additive White Gaussian Noise. Real-world experimental results show an image rejection of greater than 30 dB as compared to the uncompensated system. This method has also been found to be robust in the presence of practical system impairments, such as time and phase delay mismatches. PMID:29257081

  8. Perceptual precision of passive body tilt is consistent with statistically optimal cue integration

    PubMed Central

    Karmali, Faisal; Nicoucar, Keyvan; Merfeld, Daniel M.

    2017-01-01

    When making perceptual decisions, humans have been shown to optimally integrate independent noisy multisensory information, matching maximum-likelihood (ML) limits. Such ML estimators provide a theoretic limit to perceptual precision (i.e., minimal thresholds). However, how the brain combines two interacting (i.e., not independent) sensory cues remains an open question. To study the precision achieved when combining interacting sensory signals, we measured perceptual roll tilt and roll rotation thresholds between 0 and 5 Hz in six normal human subjects. Primary results show that roll tilt thresholds between 0.2 and 0.5 Hz were significantly lower than predicted by a ML estimator that includes only vestibular contributions that do not interact. In this paper, we show how other cues (e.g., somatosensation) and an internal representation of sensory and body dynamics might independently contribute to the observed performance enhancement. In short, a Kalman filter was combined with an ML estimator to match human performance, whereas the potential contribution of nonvestibular cues was assessed using published bilateral loss patient data. Our results show that a Kalman filter model including previously proven canal-otolith interactions alone (without nonvestibular cues) can explain the observed performance enhancements as can a model that includes nonvestibular contributions. NEW & NOTEWORTHY We found that human whole body self-motion direction-recognition thresholds measured during dynamic roll tilts were significantly lower than those predicted by a conventional maximum-likelihood weighting of the roll angular velocity and quasistatic roll tilt cues. Here, we show that two models can each match this “apparent” better-than-optimal performance: 1) inclusion of a somatosensory contribution and 2) inclusion of a dynamic sensory interaction between canal and otolith cues via a Kalman filter model. PMID:28179477
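
    The baseline ML prediction referenced above is standard inverse-variance cue combination: precisions (1/σ²) add, and thresholds scale with σ, so two independent cues with thresholds T1 and T2 predict a combined threshold of (T1⁻² + T2⁻²)^(−1/2), never lower. Measured thresholds below this prediction are what motivate the interaction and Kalman-filter models. The numbers below are hypothetical.

```python
import numpy as np

def ml_combined_threshold(t1, t2):
    """Optimal (ML) integration of two independent noisy cues: precisions add,
    thresholds scale with sigma, so T_combined = (T1**-2 + T2**-2) ** -0.5.
    For equal single-cue thresholds T this reduces to T / sqrt(2)."""
    return (t1 ** -2 + t2 ** -2) ** -0.5

# Hypothetical single-cue thresholds:
print(ml_combined_threshold(2.0, 1.5))   # ~1.2, lower than either cue alone
```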

  9. Maximum Likelihood Implementation of an Isolation-with-Migration Model for Three Species.

    PubMed

    Dalquen, Daniel A; Zhu, Tianqi; Yang, Ziheng

    2017-05-01

    We develop a maximum likelihood (ML) method for estimating migration rates between species using genomic sequence data. A species tree is used to accommodate the phylogenetic relationships among three species, allowing for migration between the two sister species, while the third species is used as an out-group. A Markov chain characterization of the genealogical process of coalescence and migration is used to integrate out the migration histories at each locus analytically, whereas Gaussian quadrature is used to integrate over the coalescent times on each genealogical tree numerically. This is an extension of our early implementation of the symmetrical isolation-with-migration model for three species to accommodate arbitrary loci with two or three sequences per locus and to allow asymmetrical migration rates. Our implementation can accommodate tens of thousands of loci, making it feasible to analyze genome-scale data sets to test for gene flow. We calculate the posterior probabilities of gene trees at individual loci to identify genomic regions that are likely to have been transferred between species due to gene flow. We conduct a simulation study to examine the statistical properties of the likelihood ratio test for gene flow between the two in-group species and of the ML estimates of model parameters such as the migration rate. Inclusion of data from a third out-group species is found to increase dramatically the power of the test and the precision of parameter estimation. We compiled and analyzed several genomic data sets from the Drosophila fruit flies. Our analyses suggest no migration from D. melanogaster to D. simulans, and a significant amount of gene flow from D. simulans to D. melanogaster, at the rate of ~0.02 migrant individuals per generation. We discuss the utility of the multispecies coalescent model for species tree estimation, accounting for incomplete lineage sorting and migration. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. Outcomes of Multiple Listing for Adult Heart Transplantation in the United States: Analysis of OPTN Data from 2000 to 2013

    PubMed Central

    Givens, Raymond C.; Dardas, Todd; Clerkin, Kevin J.; Restaino, Susan; Schulze, P. Christian; Mancini, Donna M.

    2015-01-01

    Background: Heart transplant (HT) candidates in the U.S. may register at multiple centers. Not all candidates have the resources and mobility needed for multiple-listing; thus this policy may advantage wealthier and less sick patients. Objectives: We assessed the association of multiple-listing with waitlist outcomes and post-HT survival. Methods: We identified 33,928 adult candidates for a first single-organ HT between January 1, 2000 and December 31, 2013 in the OPTN database. Results: We identified 679 multiple-listed candidates (ML, 2.0%), who were younger (median age 53 years [IQR 43–60] vs. 55 [45–61], p <0.0001), more often white (76.4% vs 70.7%, p =0.0010) and privately insured (65.5% vs 56.3%, p <0.0001), and lived in ZIP codes with higher median incomes (90,153 [25,471-253,831] vs 68,986 [19,471-219,702], p =0.0015). Likelihood of ML increased with the primary center’s median waiting time. ML candidates had lower initial priority (39.0% 1A or 1B vs 55.1%, p <0.0001) and predicted 90-day waitlist mortality (2.9% [2.3–4.7] vs 3.6% [2.3–6.0], p <0.0001), but were frequently upgraded at secondary centers (58.2% 1A/1B; p <0.0001 vs ML primary listing). ML candidates had a higher HT rate (74.4% vs 70.2%, p =0.0196) and lower waitlist mortality (8.1% vs 12.2%, p =0.0011). Compared to a propensity-matched cohort, the relative ML HT rate was 3.02 (95% CI 2.59–3.52, p <0.0001). There were no post-HT survival differences. Conclusions: Multiple-listing is a rational response to organ shortage but may advantage patients with the means to participate rather than the most medically needy, suggesting that the policy merits reconsideration. PMID:26577617

  11. Inferring recent outcrossing rates using multilocus individual heterozygosity: application to evolving wheat populations.

    PubMed Central

    Enjalbert, J; David, J L

    2000-01-01

    Using multilocus individual heterozygosity, a method is developed to estimate the outcrossing rates of a population over a few previous generations. Considering that individuals originate either from outcrossing or from n successive selfing generations from an outbred ancestor, a maximum-likelihood (ML) estimator is described that gives estimates of past outcrossing rates in terms of proportions of individuals with different n values. Heterozygosities at several unlinked codominant loci are used to assign n values to each individual. This method also allows a test of whether populations are in inbreeding equilibrium. The estimator's reliability was checked using simulations for different mating histories. We show that this ML estimator can provide estimates of the final-generation outcrossing rate (t(0)) and of the mean of the preceding rates (t(p)), and can detect major temporal variation in the mating system. The method is most efficient for low to intermediate outcrossing levels. Applied to nine populations of wheat, this method gave estimates of t(0) and t(p). These estimates confirmed the absence of outcrossing (t(0) = 0) in the two populations subjected to manual selfing. For free-mating wheat populations, it detected lower final-generation outcrossing rates (t(0) = 0-0.06) than those expected from global heterozygosity (t = 0.02-0.09). This estimator appears to be a new and efficient way to describe the multilocus heterozygosity of a population, complementary to Fis and progeny analysis approaches. PMID:11102388

  12. Comparison of data analysis strategies for intent-to-treat analysis in pre-test-post-test designs with substantial dropout rates.

    PubMed

    Salim, Agus; Mackinnon, Andrew; Christensen, Helen; Griffiths, Kathleen

    2008-09-30

    The pre-test-post-test design (PPD) is predominant in trials of psychotherapeutic treatments. Missing data due to withdrawals present an even bigger challenge in assessing treatment effectiveness under the PPD than under designs with more observations since dropout implies an absence of information about response to treatment. When confronted with missing data, often it is reasonable to assume that the mechanism underlying missingness is related to observed but not to unobserved outcomes (missing at random, MAR). Previous simulation and theoretical studies have shown that, under MAR, modern techniques such as maximum-likelihood (ML) based methods and multiple imputation (MI) can be used to produce unbiased estimates of treatment effects. In practice, however, ad hoc methods such as last observation carried forward (LOCF) imputation and complete-case (CC) analysis continue to be used. In order to better understand the behaviour of these methods in the PPD, we compare the performance of traditional approaches (LOCF, CC) and theoretically sound techniques (MI, ML), under various MAR mechanisms. We show that the LOCF method is seriously biased and conclude that its use should be abandoned. Complete-case analysis produces unbiased estimates only when the dropout mechanism does not depend on pre-test values even when dropout is related to fixed covariates including treatment group (covariate-dependent: CD). However, CC analysis is generally biased under MAR. The magnitude of the bias is largest when the correlation of post- and pre-test is relatively low.
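
    The LOCF and complete-case biases are easy to reproduce in a small simulation: generate correlated pre/post scores with a known treatment effect, impose MAR dropout that depends on the observed pre-test (and, here, also on treatment group), and compare estimators. All parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)
n, effect = 100_000, 0.5
pre = rng.normal(0, 1, n)
grp = rng.integers(0, 2, n)                        # 0 = control, 1 = treated
post = 0.6 * pre + effect * grp + rng.normal(0, 0.8, n)

# MAR dropout: depends on the observed pre-test score and on group.
p_drop = 1 / (1 + np.exp(-(0.5 - pre + grp)))
drop = rng.random(n) < p_drop

def group_diff(y, keep):
    return y[keep & (grp == 1)].mean() - y[keep & (grp == 0)].mean()

post_locf = np.where(drop, pre, post)              # carry the pre-test forward
print("true effect   :", effect)
print("complete case :", round(group_diff(post, ~drop), 3))        # biased
print("LOCF          :", round(group_diff(post_locf, np.ones(n, bool)), 3))  # diluted
```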

  13. Higher level phylogeny and the first divergence time estimation of Heteroptera (Insecta: Hemiptera) based on multiple genes.

    PubMed

    Li, Min; Tian, Ying; Zhao, Ying; Bu, Wenjun

    2012-01-01

    Heteroptera, or true bugs, are the largest, most morphologically diverse, and most economically important group of insects with incomplete metamorphosis. However, the phylogenetic relationships within Heteroptera are still in dispute, and most previous studies were based on morphological characters or on a single gene (partial or whole 18S rDNA). Moreover, divergence time estimates for Heteroptera have so far relied entirely on the fossil record, and no studies have been performed on molecular divergence rates. Here, for the first time, we used maximum parsimony (MP), maximum likelihood (ML) and Bayesian inference (BI) with multiple genes (18S rDNA, 28S rDNA, 16S rDNA and COI) to estimate phylogenetic relationships among the infraorders, and the penalized likelihood (r8s) and Bayesian (BEAST) molecular dating methods were employed to estimate divergence times of the higher taxa of this suborder. Major results of the present study include the following: Nepomorpha was placed as the most basal clade in all six trees (MP trees, ML trees and Bayesian trees of the nuclear gene data and the four-gene combined data, respectively) with full support values. The sister-group relationship of Cimicomorpha and Pentatomomorpha was also strongly supported. Nepomorpha originated in the early Triassic, and the other six infraorders originated within a very short period of time in the middle Triassic. Cimicomorpha and Pentatomomorpha underwent a radiation at the family level in the Cretaceous, paralleling the proliferation of the flowering plants. Our results indicate that the higher-group radiations within hemimetabolous Heteroptera occurred simultaneously with those of holometabolous Coleoptera and Diptera in the Triassic. While the aquatic habitat was colonized by Nepomorpha already in the Triassic, the Gerromorpha independently adapted to the semi-aquatic habitat in the Early Jurassic.

  14. Higher Level Phylogeny and the First Divergence Time Estimation of Heteroptera (Insecta: Hemiptera) Based on Multiple Genes

    PubMed Central

    Zhao, Ying; Bu, Wenjun

    2012-01-01

    Heteroptera, or true bugs, are the largest, most morphologically diverse, and most economically important group of insects with incomplete metamorphosis. However, the phylogenetic relationships within Heteroptera are still in dispute, and most previous studies were based on morphological characters or on a single gene (partial or whole 18S rDNA). Moreover, divergence time estimates for Heteroptera have so far relied entirely on the fossil record, and no studies have been performed on molecular divergence rates. Here, for the first time, we used maximum parsimony (MP), maximum likelihood (ML) and Bayesian inference (BI) with multiple genes (18S rDNA, 28S rDNA, 16S rDNA and COI) to estimate phylogenetic relationships among the infraorders, and the penalized likelihood (r8s) and Bayesian (BEAST) molecular dating methods were employed to estimate divergence times of the higher taxa of this suborder. Major results of the present study include the following: Nepomorpha was placed as the most basal clade in all six trees (MP trees, ML trees and Bayesian trees of the nuclear gene data and the four-gene combined data, respectively) with full support values. The sister-group relationship of Cimicomorpha and Pentatomomorpha was also strongly supported. Nepomorpha originated in the early Triassic, and the other six infraorders originated within a very short period of time in the middle Triassic. Cimicomorpha and Pentatomomorpha underwent a radiation at the family level in the Cretaceous, paralleling the proliferation of the flowering plants. Our results indicate that the higher-group radiations within hemimetabolous Heteroptera occurred simultaneously with those of holometabolous Coleoptera and Diptera in the Triassic. While the aquatic habitat was colonized by Nepomorpha already in the Triassic, the Gerromorpha independently adapted to the semi-aquatic habitat in the Early Jurassic. PMID:22384163

  15. A Bayesian modification to the Jelinski-Moranda software reliability growth model

    NASA Technical Reports Server (NTRS)

    Littlewood, B.; Sofer, A.

    1983-01-01

    The Jelinski-Moranda (JM) model for software reliability was examined. It is suggested that a major reason for the poor results given by this model is the poor performance of the maximum likelihood (ML) method of parameter estimation. A reparameterization and Bayesian analysis, involving a slight modelling change, are proposed. It is shown that this new Bayesian-Jelinski-Moranda model (BJM) is mathematically quite tractable, and several metrics of interest to practitioners are obtained. The BJM and JM models are compared using several sets of real software failure data, and in all cases the BJM model gives superior reliability predictions. A change in the assumptions underlying both models, to represent the debugging process more accurately, is discussed.
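
    The ML instability alluded to above is visible in a short sketch: with interfailure times t_i and hazard φ(N − i + 1), the ML estimate of φ has a closed form for each candidate fault count N, so the profile likelihood can simply be scanned over integer N. For some data sets (little visible reliability growth) the profile likelihood increases without bound in N, which is the pathological ML behaviour the abstract refers to. The failure times below are made up.

```python
import numpy as np

def jm_profile_loglik(times, N):
    """Profile log-likelihood of the Jelinski-Moranda model for a fixed N.
    The i-th interfailure time has hazard phi*(N - i + 1); for fixed N the
    ML estimate of phi is n / sum_i (N - i + 1)*t_i."""
    n = len(times)
    k = N - np.arange(1, n + 1) + 1          # faults remaining before each failure
    phi = n / np.sum(k * times)
    return np.sum(np.log(phi * k) - phi * k * times), phi

# Hypothetical interfailure times showing reliability growth:
times = np.array([2, 4, 3, 5, 6, 4, 8, 7, 9, 12, 11, 14,
                  16, 13, 18, 20, 17, 21, 24, 22], float)
n = len(times)
ll, N_hat = max((jm_profile_loglik(times, N)[0], N) for N in range(n, 1000))
print("N_hat =", N_hat, " phi_hat =", round(jm_profile_loglik(times, N_hat)[1], 5))
```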

  16. Joint reconstruction of activity and attenuation in Time-of-Flight PET: A Quantitative Analysis.

    PubMed

    Rezaei, Ahmadreza; Deroose, Christophe M; Vahle, Thomas; Boada, Fernando; Nuyts, Johan

    2018-03-01

    Joint activity and attenuation reconstruction methods from time-of-flight (TOF) positron emission tomography (PET) data provide an effective solution to attenuation correction when no (or incomplete/inaccurate) information on the attenuation is available. One of the main barriers limiting their use in clinical practice is the lack of validation of these methods on a relatively large patient database. In this contribution, we aim at validating the activity reconstructions of the maximum likelihood activity reconstruction and attenuation registration (MLRR) algorithm on a whole-body patient data set. Furthermore, a partial validation (since the scale problem of the algorithm is avoided for now) of the maximum likelihood activity and attenuation reconstruction (MLAA) algorithm is also provided. We present a quantitative comparison of the joint reconstructions to the current clinical gold standard, maximum likelihood expectation maximization (MLEM) reconstruction with CT-based attenuation correction. Methods: The whole-body TOF-PET emission data of each patient data set are processed as a whole to reconstruct an activity volume covering all the acquired bed positions, which helps to reduce the problem of a scale per bed position in MLAA to a global scale for the entire activity volume. Three reconstruction algorithms are used: MLEM, MLRR and MLAA. A maximum likelihood (ML) scaling of the single scatter simulation (SSS) estimate to the emission data is used for scatter correction. The reconstruction results are then analyzed in different regions of interest. Results: The joint reconstructions of the whole-body patient data set provide better quantification in cases of PET and CT misalignment caused by patient and organ motion. Our quantitative analysis shows a difference of -4.2% (±2.3%) and -7.5% (±4.6%) between the joint reconstructions of MLRR and MLAA compared to MLEM, averaged over all regions of interest, respectively. Conclusion: Joint activity and attenuation estimation methods provide a useful means to estimate the tracer distribution in cases where CT-based attenuation images are subject to misalignments or are not available. With an accurate estimate of the scatter contribution in the emission measurements, the joint TOF-PET reconstructions are within clinically acceptable accuracy. Copyright © 2018 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  17. Evolution of complex fruiting-body morphologies in homobasidiomycetes.

    PubMed Central

    Hibbett, David S; Binder, Manfred

    2002-01-01

    The fruiting bodies of homobasidiomycetes include some of the most complex forms that have evolved in the fungi, such as gilled mushrooms, bracket fungi and puffballs ('pileate-erect') forms. Homobasidiomycetes also include relatively simple crust-like 'resupinate' forms, however, which account for ca. 13-15% of the described species in the group. Resupinate homobasidiomycetes have been interpreted either as a paraphyletic grade of plesiomorphic forms or a polyphyletic assemblage of reduced forms. The former view suggests that morphological evolution in homobasidiomycetes has been marked by independent elaboration in many clades, whereas the latter view suggests that parallel simplification has been a common mode of evolution. To infer patterns of morphological evolution in homobasidiomycetes, we constructed phylogenetic trees from a dataset of 481 species and performed ancestral state reconstruction (ASR) using parsimony and maximum likelihood (ML) methods. ASR with both parsimony and ML implies that the ancestor of the homobasidiomycetes was resupinate, and that there have been multiple gains and losses of complex forms in the homobasidiomycetes. We also used ML to address whether there is an asymmetry in the rate of transformations between simple and complex forms. Models of morphological evolution inferred with ML indicate that the rate of transformations from simple to complex forms is about three to six times greater than the rate of transformations in the reverse direction. A null model of morphological evolution, in which there is no asymmetry in transformation rates, was rejected. These results suggest that there is a 'driven' trend towards the evolution of complex forms in homobasidiomycetes. PMID:12396494

  18. Mixture Factor Analysis for Approximating a Nonnormally Distributed Continuous Latent Factor with Continuous and Dichotomous Observed Variables

    ERIC Educational Resources Information Center

    Wall, Melanie M.; Guo, Jia; Amemiya, Yasuo

    2012-01-01

    Mixture factor analysis is examined as a means of flexibly estimating nonnormally distributed continuous latent factors in the presence of both continuous and dichotomous observed variables. A simulation study compares mixture factor analysis with normal maximum likelihood (ML) latent factor modeling. Different results emerge for continuous versus…

  19. The Order-Restricted Association Model: Two Estimation Algorithms and Issues in Testing

    ERIC Educational Resources Information Center

    Galindo-Garre, Francisca; Vermunt, Jeroen K.

    2004-01-01

    This paper presents a row-column (RC) association model in which the estimated row and column scores are forced to be in agreement with a priori specified ordering. Two efficient algorithms for finding the order-restricted maximum likelihood (ML) estimates are proposed and their reliability under different degrees of association is investigated by…

  20. Terrain Classification Using Multi-Wavelength Lidar Data

    DTIC Science & Technology

    2015-09-01

    [Abstract unavailable; only front-matter fragments were extracted: Figure 9, "Pseudo-NDVI of three layers within the vertical structure of the forest," with the top panel showing the first return from the LiDAR instrument including the ground; Figure 10, "Optech Titan operating wavelengths"; and acronym-list entries including LMS (LiDAR Mapping Suite), ML (Maximum Likelihood), NIR (Near Infrared), N-D VIS (n-Dimensional Visualizer), and NDVI (Normalized Difference Vegetation Index).]

  1. Time-resolved speckle effects on the estimation of laser-pulse arrival times

    NASA Technical Reports Server (NTRS)

    Tsai, B.-M.; Gardner, C. S.

    1985-01-01

    A maximum-likelihood (ML) estimator of the pulse arrival time in laser ranging and altimetry is derived for the case of a pulse distorted by shot noise and time-resolved speckle. The performance of the estimator is evaluated for pulse reflections from flat diffuse targets and compared with the performance of a suboptimal centroid estimator and a suboptimal Bar-David ML estimator derived under the assumption of no speckle. In the large-signal limit, the accuracy of the estimator was found to improve as the width of the receiver observational interval increases. The timing performance of the estimator is expected to be highly sensitive to background noise when the received pulse energy is high and the receiver observational interval is large. Finally, in the speckle-limited regime the ML estimator performs considerably better than the suboptimal estimators.

  2. Reassignment of scattered emission photons in multifocal multiphoton microscopy.

    PubMed

    Cha, Jae Won; Singh, Vijay Raj; Kim, Ki Hean; Subramanian, Jaichandar; Peng, Qiwen; Yu, Hanry; Nedivi, Elly; So, Peter T C

    2014-06-05

    Multifocal multiphoton microscopy (MMM) achieves fast imaging by simultaneously scanning multiple foci across different regions of the specimen. The use of imaging detectors in MMM, such as CCD or CMOS, results in degradation of the image signal-to-noise ratio (SNR) due to scattering of the emitted photons. SNR can be partly recovered using multianode photomultiplier tubes (MAPMT). In this design, however, emission photons scattered to neighboring anodes are encoded by the foci scan location, resulting in ghost images. The crosstalk between different anodes is currently measured a priori, which is cumbersome as it depends on specimen properties. Here, we present a photon reassignment method for MMM, based on maximum likelihood (ML) estimation, for quantifying the crosstalk between the anodes of the MAPMT without a priori measurement. The method reassigns the photons generating the ghost images to their original spatial locations and thus increases the SNR of the final reconstructed image.

  3. What is the best method to fit time-resolved data? A comparison of the residual minimization and the maximum likelihood techniques as applied to experimental time-correlated, single-photon counting data

    DOE PAGES

    Santra, Kalyan; Zhan, Jinchun; Song, Xueyu; ...

    2016-02-10

    The need for measuring fluorescence lifetimes of species in subdiffraction-limited volumes in, for example, stimulated emission depletion (STED) microscopy, entails the dual challenge of probing a small number of fluorophores and fitting the concomitant sparse data set to the appropriate excited-state decay function. This need has stimulated a further investigation into the relative merits of two fitting techniques commonly referred to as “residual minimization” (RM) and “maximum likelihood” (ML). Fluorescence decays of the well-characterized standard, rose bengal in methanol at room temperature (530 ± 10 ps), were acquired in a set of five experiments in which the total number of “photon counts” was approximately 20, 200, 1000, 3000, and 6000 and there were about 2–200 counts at the maxima of the respective decays. Each set of experiments was repeated 50 times to generate the appropriate statistics. Each of the 250 data sets was analyzed by ML and two different RM methods (differing in the weighting of residuals) using in-house routines and compared with a frequently used commercial RM routine. Convolution with a real instrument response function was always included in the fitting. While RM using Pearson’s weighting of residuals can recover the correct mean result with a total number of counts of 1000 or more, ML distinguishes itself by yielding, in all cases, the same mean lifetime within 2% of the accepted value. For 200 total counts and greater, ML always provides a standard deviation of <10% of the mean lifetime, and even at 20 total counts there is only 20% error in the mean lifetime. Here, the robustness of ML advocates its use for sparse data sets such as those acquired in some subdiffraction-limited microscopies, such as STED, and, more importantly, provides greater motivation for exploiting the time-resolved capacities of this technique to acquire and analyze fluorescence lifetime data.
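
    The comparison can be reproduced in miniature: simulate a sparse single-exponential decay with Poisson counts, then fit it by minimizing the Poisson negative log-likelihood (ML) and by unweighted residual minimization (RM). This sketch omits the instrument-response convolution used in the paper; the lifetime, amplitude and binning are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
t = np.arange(0, 10, 0.05)                  # time bins (ns), hypothetical
tau_true, amp = 0.53, 20.0                  # ~530 ps lifetime, sparse counts
counts = rng.poisson(amp * np.exp(-t / tau_true))

def nll(params):                            # Poisson negative log-likelihood (ML)
    a, tau = params
    if a <= 0 or tau <= 0:
        return np.inf
    model = a * np.exp(-t / tau)
    return np.sum(model - counts * np.log(model + 1e-12))

def ssr(params):                            # unweighted residual minimization (RM)
    a, tau = params
    if a <= 0 or tau <= 0:
        return np.inf
    return np.sum((counts - a * np.exp(-t / tau)) ** 2)

for name, fun in [("ML", nll), ("RM", ssr)]:
    res = minimize(fun, x0=[10.0, 1.0], method="Nelder-Mead")
    print(name, "tau =", round(res.x[1], 3))
```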

  4. Patient and parent preferences for characteristics of prophylactic treatment in hemophilia

    PubMed Central

    Furlan, Roberto; Krishnan, Sangeeta; Vietri, Jeffrey

    2015-01-01

    Introduction New longer-acting factor products will potentially allow for less frequent infusion in prophylactic treatment of hemophilia. However, the role of administration frequency relative to other treatment attributes in determining preferences for prophylactic hemophilia treatment regimens is not well understood. Aim To identify the relative importance of frequency of administration, efficacy, and other treatment characteristics among candidates for prophylactic treatment for hemophilia A and B. Method An Internet survey was conducted among hemophilia patients and the parents of pediatric hemophilia patients in Australia, Canada, and the US. A monadic conjoint task was included in the survey, which varied frequency of administration (three, two, or one time per week for hemophilia A; twice weekly, weekly, or biweekly for hemophilia B), efficacy (no bleeding or breakthrough bleeding once every 4 months, 6 months, or 12 months), diluent volume (3 mL vs 2.5 mL for hemophilia A; 5 mL vs 3 mL for hemophilia B), vials per infusion (2 vs 1), reconstitution device (assembly required vs not), and manufacturer (established in hemophilia vs not). Respondents were asked their likelihood to switch from their current regimen to the presented treatment. Respondents were told to assume that other aspects of treatment, such as risk of inhibitor development, cost, and method of distribution, would remain the same. Results A total of 89 patients and/or parents of children with hemophilia A participated; another 32 were included in the exercise for hemophilia B. Relative importance was 47%, 24%, and 18% for frequency of administration, efficacy, and manufacturer, respectively, in hemophilia A; analogous values were 48%, 26%, and 21% in hemophilia B. The remaining attributes had little impact on preferences. Conclusion Patients who are candidates for prophylaxis and their caregivers indicate a preference for reduced frequency of administration and high efficacy, but preferences were more sensitive to administration frequency than small changes in annual bleeding rate. PMID:26648701

  5. Estimating Tree Height-Diameter Models with the Bayesian Method

    PubMed Central

    Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei

    2014-01-01

    Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinctive advantage over the classical methods in that the parameters to be estimated are treated as random variables. In this study, the classical and Bayesian methods were each used to estimate the six height-diameter models. Both the classical method and the Bayesian method showed that the Weibull model was the “best” model for data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improved prediction accuracy of the Bayesian method led to narrower confidence bands for predicted values in comparison with the classical method, and the credible bands of parameters with informative priors were also narrower than those with uninformative priors and the classical method. The estimated posterior distributions for the parameters can be set as new priors when estimating the parameters using data2. PMID:24711733

  6. Estimating tree height-diameter models with the Bayesian method.

    PubMed

    Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei

    2014-01-01

    Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinctive advantage over the classical methods in that the parameters to be estimated are treated as random variables. In this study, the classical and Bayesian methods were each used to estimate the six height-diameter models. Both the classical method and the Bayesian method showed that the Weibull model was the "best" model for data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improved prediction accuracy of the Bayesian method led to narrower confidence bands for predicted values in comparison with the classical method, and the credible bands of parameters with informative priors were also narrower than those with uninformative priors and the classical method. The estimated posterior distributions for the parameters can be set as new priors when estimating the parameters using data2.
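
    For orientation, the classical (NLS) fit of a Weibull-type height-diameter curve, one of the candidate model forms of this kind, takes only a few lines; the Bayesian fits in the study would replace this with prior specification and posterior sampling. The parameterization below (H = 1.3 + a·(1 − exp(−b·D^c))) is a common Weibull form and may differ from the paper's exact one; the data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_hd(d, a, b, c):
    """Weibull-type height-diameter model: H = 1.3 + a*(1 - exp(-b*D**c))."""
    return 1.3 + a * (1.0 - np.exp(-b * d ** c))

rng = np.random.default_rng(8)
D = rng.uniform(5, 40, 300)                       # diameters (cm), hypothetical
H = weibull_hd(D, 22.0, 0.02, 1.3) + rng.normal(0, 1.0, 300)
params, _ = curve_fit(weibull_hd, D, H, p0=[20.0, 0.05, 1.0], maxfev=10000)
print(np.round(params, 3))                        # near [22.0, 0.02, 1.3]
```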

  7. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    PubMed Central

    Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong

    2016-01-01

    Electroencephalograms (EEGs) measure a brain signal that contains abundant information about human brain function and health. For this reason, recent clinical brain research and brain-computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal-to-noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of the event-related potential (ERP) signal that represents a brain’s response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate for the uncertain delays, which may differ from trial to trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing the averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°. PMID:27322267
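
    For a known (template) signal in additive white Gaussian noise, the ML delay estimate reduces to the lag maximizing the cross-correlation with the template; the joint-ML schemes proposed above generalize this to the harder case where the ERP waveform itself is unknown across trials. A minimal known-template sketch (toy waveform, hypothetical noise level):

```python
import numpy as np

def ml_delay(x, template):
    """For a known template in white Gaussian noise, the ML delay estimate is
    the lag maximizing the cross-correlation with the template."""
    xc = np.correlate(x, template, mode="full")
    return np.argmax(xc) - (len(template) - 1)

rng = np.random.default_rng(9)
erp = np.hanning(50)                 # toy ERP-like waveform
trial = np.zeros(300)
trial[120:170] += erp                # true delay = 120 samples
trial += rng.normal(0, 0.3, 300)
print(ml_delay(trial, erp))          # ~120
```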

  8. Fast estimation of diffusion tensors under Rician noise by the EM algorithm.

    PubMed

    Liu, Jia; Gasbarra, Dario; Railavo, Juha

    2016-01-15

    Diffusion tensor imaging (DTI) is widely used to characterize, in vivo, the white matter of the central nervous system (CNS). This biological tissue contains much anatomical, structural and orientational information about fibers in the human brain. Spectral data from the displacement distribution of water molecules located in the brain tissue are collected by a magnetic resonance scanner and acquired in the Fourier domain. After the Fourier inversion, the noise distribution is Gaussian in both real and imaginary parts and, as a consequence, the recorded magnitude data are corrupted by Rician noise. Statistical estimation of diffusion leads to a non-linear regression problem. In this paper, we present a fast computational method for maximum likelihood estimation (MLE) of diffusivities under the Rician noise model based on the expectation maximization (EM) algorithm. By using data augmentation, we are able to transform the non-linear regression problem into the generalized linear modeling framework, dramatically reducing the computational cost. The Fisher scoring method is used to achieve fast convergence of the tensor parameter. The new method is implemented and applied using both synthetic and real data over a wide range of b-values, up to 14,000 s/mm². Higher accuracy and precision of the Rician estimates are achieved compared with other log-normal based methods. In addition, we extend the maximum likelihood (ML) framework to maximum a posteriori (MAP) estimation in DTI under the aforementioned scheme by specifying priors. We describe how numerically close the estimators of model parameters obtained through MLE and MAP estimation are. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Maximum Likelihood Estimation of Spectra Information from Multiple Independent Astrophysics Data Sets

    NASA Technical Reports Server (NTRS)

    Howell, Leonard W., Jr.; Six, N. Frank (Technical Monitor)

    2002-01-01

    The Maximum Likelihood (ML) statistical theory required to estimate spectra information from an arbitrary number of astrophysics data sets produced by vastly different science instruments is developed in this paper. This theory and its successful implementation will facilitate the interpretation of spectral information from multiple astrophysics missions and thereby permit the derivation of superior spectral information based on the combination of data sets. The procedure is of significant value both to existing data sets and to those to be produced by future astrophysics missions consisting of two or more detectors, by allowing instrument developers to optimize each detector's design parameters through simulation studies in order to design and build complementary detectors that will maximize the precision with which the science objectives may be obtained. The benefits of this ML theory and its application are measured in terms of the reduction of the statistical errors (standard deviations) of the spectra information when the multiple data sets are used in concert, as compared to the statistical errors when the data sets are considered separately, as well as the reduction of any biases resulting from poor statistics in one or more of the individual data sets when the data sets are combined.

  10. Plane-dependent ML scatter scaling: 3D extension of the 2D simulated single scatter (SSS) estimate.

    PubMed

    Rezaei, Ahmadreza; Salvo, Koen; Vahle, Thomas; Panin, Vladimir; Casey, Michael; Boada, Fernando; Defrise, Michel; Nuyts, Johan

    2017-07-24

    Scatter correction is typically done using a simulation of the single scatter, which is then scaled to account for multiple scatters and other possible model mismatches. This scaling factor is determined by fitting the simulated scatter sinogram to the measured sinogram, using only counts measured along LORs that do not intersect the patient body, i.e. 'scatter-tails'. Extending previous work, we propose to scale the scatter with a plane-dependent factor, which is determined as an additional unknown in the maximum likelihood (ML) reconstructions, using counts in the entire sinogram rather than only the 'scatter-tails'. The ML-scaled scatter estimates are validated using a Monte-Carlo simulation of a NEMA-like phantom, a phantom scan with typical contrast ratios of a 68Ga-PSMA scan, and 23 whole-body 18F-FDG patient scans. On average, we observe a 12.2% change in the total amount of tracer activity of the MLEM reconstructions of our whole-body patient database when the proposed ML scatter scales are used. Furthermore, reconstructions using the ML-scaled scatter estimates are found to eliminate the typical 'halo' artifacts that are often observed in the vicinity of high focal uptake regions.

  11. Enhancing the performance of regional land cover mapping

    NASA Astrophysics Data System (ADS)

    Wu, Weicheng; Zucca, Claudio; Karam, Fadi; Liu, Guangping

    2016-10-01

    Different pixel-based, object-based and subpixel-based methods such as time-series analysis, decision trees, and various supervised approaches have been proposed for land use/cover classification. However, despite their proven advantages on small test datasets, their performance is variable and less satisfactory on large datasets, particularly for regional-scale mapping with high-resolution data, owing to the complexity and diversity of landscapes and land cover patterns and the unacceptably long processing time. The objective of this paper is to demonstrate the comparatively high performance of an operational approach based on the integration of multisource information that ensures high mapping accuracy over large areas with acceptable processing time. The information used includes phenologically contrasted multiseasonal and multispectral bands, a vegetation index, land surface temperature, and topographic features. The performance of different conventional and machine learning classifiers, namely Mahalanobis Distance (MD), Maximum Likelihood (ML), Artificial Neural Networks (ANNs), Support Vector Machines (SVMs) and Random Forests (RFs), was compared using the same datasets in the same IDL (Interactive Data Language) environment. An Eastern Mediterranean area with complex landscape and steep climatic gradients was selected to test and develop the operational approach. The results showed that the SVM and RF classifiers produced the most accurate mapping at local scale (up to 96.85% overall accuracy) but were very time-consuming in whole-scene classification (more than five days per scene), whereas ML fulfilled the task rapidly (about 10 min per scene) with satisfactory accuracy (94.2-96.4%). Thus, the approach composed of the integration of seasonally contrasted multisource data and sampling at subclass level, followed by ML classification, is a suitable candidate to become an operational and effective regional land cover mapping method.
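    The ML classifier referred to here is the classical Gaussian maximum likelihood classifier of remote sensing packages: each class is modeled by a multivariate normal fitted to training pixels, and each pixel is assigned to the class with the highest likelihood. A minimal sketch on synthetic two-band data (not the study's IDL implementation):

        import numpy as np
        from scipy.stats import multivariate_normal

        def fit(X, y):
            # one multivariate normal per class, estimated from training pixels
            return {c: multivariate_normal(X[y == c].mean(0), np.cov(X[y == c].T))
                    for c in np.unique(y)}

        def classify(models, X):
            ll = np.column_stack([m.logpdf(X) for m in models.values()])
            return np.array(list(models))[ll.argmax(1)]

        rng = np.random.default_rng(8)
        X = np.vstack([rng.normal([0, 0], 0.5, (200, 2)),    # class 0 pixels
                       rng.normal([2, 1], 0.5, (200, 2))])   # class 1 pixels
        y = np.repeat([0, 1], 200)
        models = fit(X, y)
        print((classify(models, X) == y).mean())             # training accuracy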

  12. Model-based estimation for dynamic cardiac studies using ECT.

    PubMed

    Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O

    1994-01-01

    The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.

  13. Maximum likelihood solution for inclination-only data in paleomagnetism

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2010-08-01

    We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data, together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function toward systematically shallower inclinations. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling the exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with the desired accuracy and locate the maximum of the likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and its mean inclination estimates are the least biased towards shallow values.
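    The numerical instability described here, terms that grow exponentially with the precision parameter, is easy to reproduce with the modified Bessel functions that appear in Fisher-type likelihoods. A generic SciPy illustration (not the authors' code) of the standard remedy, exponentially scaled Bessel functions:

        import numpy as np
        from scipy.special import i0, i0e

        kappa = np.array([10.0, 100.0, 800.0])   # precision parameters
        naive = np.log(i0(kappa))                # overflows to inf for large kappa
        stable = np.log(i0e(kappa)) + kappa      # i0e(x) = exp(-x) * i0(x)
        print(naive)    # last entry is inf: i0(800) overflows in double precision
        print(stable)   # all entries finite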

  14. Integral equation methods for computing likelihoods and their derivatives in the stochastic integrate-and-fire model.

    PubMed

    Paninski, Liam; Haith, Adrian; Szirtes, Gabor

    2008-02-01

    We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.

  15. Region of interest processing for iterative reconstruction in x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Kopp, Felix K.; Nasirudin, Radin A.; Mei, Kai; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Noël, Peter B.

    2015-03-01

    The recent advancements in graphics card technology raised the performance of parallel computing and contributed to the introduction of iterative reconstruction methods for x-ray computed tomography in clinical CT scanners. Iterative maximum likelihood (ML) based reconstruction methods are known to reduce image noise and to improve the diagnostic quality of low-dose CT. However, iterative reconstruction of a region of interest (ROI), especially ML based, is challenging. For some clinical procedures, however, such as cardiac CT, only a ROI is needed for diagnosis. A high-resolution reconstruction of the full field of view (FOV) wastes computational effort and results in reconstructions slower than clinically acceptable. In this work, we present an extension and evaluation of an existing ROI processing algorithm. In particular, improvements to the equalization between regions inside and outside a ROI are proposed. The evaluation was done on data collected from a clinical CT scanner. The performance of the different algorithms is qualitatively and quantitatively assessed. Our solution to the ROI problem provides an increase in signal-to-noise ratio and leads to visually less noise in the final reconstruction. The reconstruction speed of our technique was observed to be comparable with previously proposed techniques. The development of ROI processing algorithms in combination with iterative reconstruction will provide higher diagnostic quality in the near future.

  16. Detecting local diversity-dependence in diversification.

    PubMed

    Xu, Liang; Etienne, Rampal S

    2018-04-06

    Whether there are ecological limits to species diversification is a hotly debated topic. Molecular phylogenies show slowdowns in lineage accumulation, suggesting that speciation rates decline with increasing diversity. A maximum-likelihood (ML) method to detect diversity-dependent (DD) diversification from phylogenetic branching times exists, but it assumes that diversity-dependence is a global phenomenon and therefore ignores that the underlying species interactions are mostly local, and not all species in the phylogeny co-occur locally. Here, we explore whether this ML method based on the nonspatial diversity-dependence model can detect local diversity-dependence, by applying it to phylogenies, simulated with a spatial stochastic model of local DD speciation, extinction, and dispersal between two local communities. We find that type I errors (falsely detecting diversity-dependence) are low, and the power to detect diversity-dependence is high when dispersal rates are not too low. Interestingly, when dispersal is high the power to detect diversity-dependence is even higher than in the nonspatial model. Moreover, estimates of intrinsic speciation rate, extinction rate, and ecological limit strongly depend on dispersal rate. We conclude that the nonspatial DD approach can be used to detect diversity-dependence in clades of species that live in not too disconnected areas, but parameter estimates must be interpreted cautiously. © 2018 The Author(s). Evolution published by Wiley Periodicals, Inc. on behalf of The Society for the Study of Evolution.

  17. Towards the Optimal Pixel Size of DEM for Automatic Mapping of Landslide Areas

    NASA Astrophysics Data System (ADS)

    Pawłuszek, K.; Borkowski, A.; Tarolli, P.

    2017-05-01

    Determining the appropriate spatial resolution of a digital elevation model (DEM) is a key step for effective landslide analysis based on remote sensing data. Several studies have demonstrated that choosing the finest DEM resolution is not always the best solution, and various DEM resolutions may be appropriate for different landslide applications. This study therefore aims to assess the influence of spatial resolution on automatic landslide mapping. A pixel-based approach using parametric and non-parametric classification methods, namely feed-forward neural networks (FFNN) and maximum likelihood classification (ML), was applied; this also allowed the impact of the classification method on the choice of DEM resolution to be determined. Landslide-affected areas were mapped using four DEMs generated at 1 m, 2 m, 5 m and 10 m spatial resolution from airborne laser scanning (ALS) data. The performance of the landslide mapping was then evaluated against a landslide inventory map by computing confusion matrices. The results suggest that the finest DEM scale is not always the best fit, although working at 1 m DEM resolution on the micro-topographic scale can yield different results. The best performance was obtained using 5 m DEM resolution for FFNN and 1 m DEM resolution for ML classification.

  18. Adaptive early detection ML/PDA estimator for LO targets with EO sensors

    NASA Astrophysics Data System (ADS)

    Chummun, Muhammad R.; Kirubarajan, Thiagalingam; Bar-Shalom, Yaakov

    2000-07-01

    The batch Maximum Likelihood estimator combined with Probabilistic Data Association (ML-PDA) has been shown to be effective in acquiring low observable (LO) - low SNR - non-maneuvering targets in the presence of heavy clutter. In this paper, the ML-PDA estimator with signal strength or amplitude information (AI) is applied in a sliding-window fashion to detect high-speed targets in heavy clutter using electro-optical (EO) sensors. The initial time and the length of the sliding window are adjusted adaptively according to the information content of the received measurements. A track validation scheme via hypothesis testing is developed to confirm the estimated track, that is, the presence of a target, in each window. The sliding-window ML-PDA approach, together with track validation, enables early detection by rejecting noninformative scans, target reacquisition in case of temporary target disappearance, and the handling of targets whose speeds evolve over time. The proposed algorithm is shown to detect a target hidden in as many as 600 false alarms per scan 10 frames earlier than the Multiple Hypothesis Tracking (MHT) algorithm.

  19. One normal void and residual following MUS surgery is all that is necessary in most patients.

    PubMed

    Ballard, Paul; Shawer, Sami; Anderson, Colette; Khunda, Aethele

    2018-04-01

    There is considerable variation worldwide in how the assessment of voiding function is performed following midurethral sling (MUS) surgery. Delayed patient discharge carries a potential financial cost and reduces efficiency. Using our current practice of two normal void and residual (V&R) readings before discharge, the aim of this retrospective study was to evaluate the likelihood of an abnormal second V&R test if the first V&R test was normal, in order to determine whether a policy of discharge after only one satisfactory V&R test is reasonable. Data from 400 patients who had had MUS surgery with or without other procedures were collected. Our unit protocol required two consecutive voids of greater than 200 ml with residuals less than 150 ml before discharge. The patients were divided into the following groups: MUS only, MUS plus anterior colporrhaphy (AR) plus any other procedures (MUS/AR), and MUS with any non-AR procedures (MUS+). Complete datasets were available for 335 patients. Once inadequate tests (low volume voids <200 ml) had been excluded (28% overall), the likelihood of an abnormal second V&R test if the first test was normal was 7.1% overall, but 3.6% for MUS, 11.5% for MUS/AR and 8.6% for MUS+. The findings in the MUS-only group indicate that it is probably safe to discharge patients after one satisfactory V&R test, as long as safety measures such as 'open access' are available so that patients have unhindered readmission if problems arise.

  20. Automatic Modulation Classification of Common Communication and Pulse Compression Radar Waveforms using Cyclic Features

    DTIC Science & Technology

    2013-03-01

    intermediate frequency; LFM: linear frequency modulation; MAP: maximum a posteriori; MATLAB®: matrix laboratory; ML: maximum likelihood; OFDM: orthogonal frequency…spectrum, frequency hopping, and orthogonal frequency division multiplexing (OFDM) modulations. Feature analysis would be a good research thrust to…determine feature relevance and decide if removing any features improves performance. Also, extending the system for simulations using a MIMO receiver or

  1. Pseudo-coherent demodulation for mobile satellite systems

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    This paper proposes three so-called pseudo-coherent demodulation schemes for use in land mobile satellite channels. The schemes are derived based on maximum likelihood (ML) estimation and detection of an N-symbol observation of the received signal. Simulation results for all three demodulators are presented to allow comparison with the performance of differential PSK (DPSK) and ideal coherent demodulation for various system parameter sets of practical interest.

  2. Efficient design and inference for multistage randomized trials of individualized treatment policies.

    PubMed

    Dawson, Ree; Lavori, Philip W

    2012-01-01

    Clinical demand for individualized "adaptive" treatment policies in diverse fields has spawned development of clinical trial methodology for their experimental evaluation via multistage designs, building upon methods intended for the analysis of naturalistically observed strategies. Because often there is no need to parametrically smooth multistage trial data (in contrast to observational data for adaptive strategies), it is possible to establish direct connections among different methodological approaches. We show by algebraic proof that the maximum likelihood (ML) and optimal semiparametric (SP) estimators of the population mean of the outcome of a treatment policy and its standard error are equal under certain experimental conditions. This result is used to develop a unified and efficient approach to design and inference for multistage trials of policies that adapt treatment according to discrete responses. We derive a sample size formula expressed in terms of a parametric version of the optimal SP population variance. Nonparametric (sample-based) ML estimation performed well in simulation studies, in terms of achieved power, for scenarios most likely to occur in real studies, even though sample sizes were based on the parametric formula. ML outperformed the SP estimator; differences in achieved power predominantly reflected differences in their estimates of the population mean (rather than estimated standard errors). Neither methodology could mitigate the potential for overestimated sample sizes when strong nonlinearity was purposely simulated for certain discrete outcomes; however, such departures from linearity may not be an issue for many clinical contexts that make evaluation of competitive treatment policies meaningful.

  3. A Detailed History of Intron-rich Eukaryotic Ancestors Inferred from a Global Survey of 100 Complete Genomes

    PubMed Central

    Csuros, Miklos; Rogozin, Igor B.; Koonin, Eugene V.

    2011-01-01

    Protein-coding genes in eukaryotes are interrupted by introns, but intron densities differ widely between eukaryotic lineages. Vertebrates, some invertebrates and green plants have intron-rich genes, with 6–7 introns per kilobase of coding sequence, whereas most of the other eukaryotes have intron-poor genes. We reconstructed the history of intron gain and loss using a probabilistic Markov model (Markov Chain Monte Carlo, MCMC) on 245 orthologous genes from 99 genomes representing three of the five supergroups of eukaryotes for which multiple genome sequences are available. Intron-rich ancestors are confidently reconstructed for each major group, with 53 to 74% of the human intron density inferred with 95% confidence for the Last Eukaryotic Common Ancestor (LECA). The results of the MCMC reconstruction are compared with the reconstructions obtained using Maximum Likelihood (ML) and Dollo parsimony methods. An excellent agreement between the MCMC and ML inferences is demonstrated, whereas Dollo parsimony introduces a noticeable bias in the estimations, typically yielding lower ancestral intron densities than MCMC and ML. Evolution of eukaryotic genes was dominated by intron loss, with substantial gain only at the bases of several major branches including plants and animals. The highest intron density, 120 to 130% of the human value, is inferred for the last common ancestor of animals. The reconstruction shows that the entire line of descent from LECA to mammals was intron-rich, a state conducive to the evolution of alternative splicing. PMID:21935348

  4. Radiance and atmosphere propagation-based method for the target range estimation

    NASA Astrophysics Data System (ADS)

    Cho, Hoonkyung; Chun, Joohwan

    2012-06-01

    Target range estimation in modern combat systems is traditionally based on radar and active sonar. However, the performance of such active sensors is degraded tremendously by jamming signals from the enemy. This paper proposes a simple range estimation method between the target and the sensor. Passive IR sensors measure the infrared (IR) radiance radiating from objects at different wavelengths, and this method is robust against electromagnetic jamming. The target radiance measured at the IR sensor in each wavelength depends on the emissive properties of the target material and is attenuated by various factors, in particular the distance between the sensor and the target and the atmospheric environment. MODTRAN is a tool that models atmospheric propagation of electromagnetic radiation. Based on the MODTRAN output and the measured radiance, the target range is estimated. To analyze the performance of the proposed method statistically, we use maximum likelihood estimation (MLE) and evaluate the Cramer-Rao Lower Bound (CRLB) via the probability density function of the measured radiance. We also compare the CRLB with the variance of the ML estimate using Monte-Carlo simulations.
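    The CRLB comparison can be reproduced in miniature. The sketch below assumes a toy attenuation model (Beer-Lambert decay per band with Gaussian sensor noise, not MODTRAN) and checks by Monte-Carlo that the variance of the ML range estimate approaches the single-parameter CRLB.

        import numpy as np
        from scipy.optimize import minimize_scalar

        L0 = np.array([5.0, 3.0, 2.0])        # hypothetical band radiances
        alpha = np.array([0.10, 0.05, 0.02])  # extinction coefficients (1/km)
        sigma, r_true = 0.05, 12.0            # noise std, true range (km)

        def mu(r):                            # mean radiance at range r
            return L0 * np.exp(-alpha * r)

        def ml_range(y):                      # Gaussian noise: ML = least squares
            return minimize_scalar(lambda r: np.sum((y - mu(r))**2),
                                   bounds=(0.0, 100.0), method="bounded").x

        rng = np.random.default_rng(2)
        est = [ml_range(mu(r_true) + rng.normal(0, sigma, 3))
               for _ in range(2000)]

        dmu = -alpha * mu(r_true)             # sensitivity d mu / d r
        crlb = sigma**2 / np.sum(dmu**2)      # Cramer-Rao lower bound
        print(np.var(est), crlb)              # ML variance ~ attains the bound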

  5. Approximate, computationally efficient online learning in Bayesian spiking neurons.

    PubMed

    Kuhlmann, Levin; Hauser-Raspe, Michael; Manton, Jonathan H; Grayden, David B; Tapson, Jonathan; van Schaik, André

    2014-03-01

    Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM) which is computationally slow and limits the potential of studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initialization of parameter estimates that are far from the true parameter values. However, parameter estimation depends on the range of true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI such that one can take advantage of the energy-efficient spike coding of BSNs.

  6. Dynamical analysis of contrastive divergence learning: Restricted Boltzmann machines with Gaussian visible units.

    PubMed

    Karakida, Ryo; Okada, Masato; Amari, Shun-Ichi

    2016-07-01

    The restricted Boltzmann machine (RBM) is an essential constituent of deep learning, but it is hard to train by using maximum likelihood (ML) learning, which minimizes the Kullback-Leibler (KL) divergence. Instead, contrastive divergence (CD) learning has been developed as an approximation of ML learning and is widely used in practice. To clarify the performance of CD learning, in this paper, we analytically derive the fixed points where the ML and CDn learning rules converge in two types of RBMs: one with Gaussian visible and Gaussian hidden units and the other with Gaussian visible and Bernoulli hidden units. In addition, we analyze the stability of the fixed points. As a result, we find that the stable points of the CDn learning rule coincide with those of the ML learning rule in a Gaussian-Gaussian RBM. We also reveal that larger principal components of the input data are extracted at the stable points. Moreover, in a Gaussian-Bernoulli RBM, we find that both ML and CDn learning can extract independent components at one of the stable points. Our analysis demonstrates that the same feature components as those extracted by ML learning are extracted simply by performing CD1 learning. Expanding this study should elucidate the specific solutions obtained by CD learning in other types of RBMs or in deep networks. Copyright © 2016 Elsevier Ltd. All rights reserved.
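    For concreteness, here is a minimal CD-1 update for an RBM with Gaussian visible units (unit variance) and Bernoulli hidden units; the dimensions, learning rate and toy data are arbitrary assumptions, and the sketch omits the usual training refinements (mini-batches, momentum, monitoring).

        import numpy as np

        rng = np.random.default_rng(3)
        sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

        # energy E(v,h) = ||v - b||^2 / 2 - c.h - v.W h
        d, m, lr = 8, 4, 1e-3
        W, b, c = 0.01 * rng.normal(size=(d, m)), np.zeros(d), np.zeros(m)

        def cd1_step(v0):
            ph0 = sigmoid(v0 @ W + c)                      # P(h=1 | data)
            h0 = (rng.random(ph0.shape) < ph0) * 1.0       # sampled hidden state
            v1 = b + h0 @ W.T + rng.normal(size=v0.shape)  # Gaussian reconstruction
            ph1 = sigmoid(v1 @ W + c)
            # CD-1 gradient: data statistics minus one-step reconstruction statistics
            n = len(v0)
            return ((v0.T @ ph0 - v1.T @ ph1) / n,
                    (v0 - v1).mean(0), (ph0 - ph1).mean(0))

        data = 0.3 * rng.normal(size=(500, d)) @ rng.normal(size=(d, d))
        for _ in range(100):
            gW, gb, gc = cd1_step(data)
            W += lr * gW; b += lr * gb; c += lr * gc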

  7. Model-based estimation for dynamic cardiac studies using ECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.

    1994-06-01

    In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed.

  8. [Clinical examination and the Valsalva maneuver in heart failure].

    PubMed

    Liniado, Guillermo E; Beck, Martín A; Gimeno, Graciela M; González, Ana L; Cianciulli, Tomás F; Castiello, Gustavo G; Gagliardi, Juan A

    2018-01-01

    Congestion in heart failure patients with reduced ejection fraction (HFrEF) is relevant and closely linked to the clinical course. Bedside blood pressure measurement during the Valsalva maneuver (Val), added to the clinical examination, may improve the assessment of congestion when compared with NT-proBNP levels and left atrial pressure (LAP) estimation by Doppler echocardiography as surrogate markers of congestion in HFrEF. A clinical examination, LAP estimation and blood tests were performed in 69 ambulatory HFrEF patients with left ventricular ejection fraction ≤ 40% and sinus rhythm. The Framingham Heart Failure Score (HFS) was used to evaluate clinical congestion; Val was classified as normal or abnormal, NT-proBNP was classified as low (< 1000 pg/ml) or high (≥ 1000 pg/ml), and the ratio between Doppler early mitral inflow and tissue diastolic velocity was used to estimate LAP, classified as low (E/e' < 15) or high (E/e' ≥ 15). A total of 69 patients with HFrEF were included; 27 had a HFS ≥ 2 and 13 of them had high NT-proBNP. HFS ≥ 2 had a 62% sensitivity, 70% specificity and a positive likelihood ratio of 2.08 (p=0.01) to detect congestion. When Val was added to the clinical examination, the presence of a HFS ≥ 2 and abnormal Val showed a 100% sensitivity, 64% specificity and a positive likelihood ratio of 2.8 (p = 0.0004). Compared with LAP, the presence of HFS ≥ 2 and abnormal Val had 86% sensitivity, 54% specificity and a positive likelihood ratio of 1.86 (p = 0.03). In conclusion, an integrated clinical examination with the addition of the Valsalva maneuver may improve the assessment of congestion in patients with HFrEF.

  9. Plane-dependent ML scatter scaling: 3D extension of the 2D simulated single scatter (SSS) estimate

    NASA Astrophysics Data System (ADS)

    Rezaei, Ahmadreza; Salvo, Koen; Vahle, Thomas; Panin, Vladimir; Casey, Michael; Boada, Fernando; Defrise, Michel; Nuyts, Johan

    2017-08-01

    Scatter correction is typically done using a simulation of the single scatter, which is then scaled to account for multiple scatters and other possible model mismatches. This scaling factor is determined by fitting the simulated scatter sinogram to the measured sinogram, using only counts measured along LORs that do not intersect the patient body, i.e. ‘scatter-tails’. Extending previous work, we propose to scale the scatter with a plane dependent factor, which is determined as an additional unknown in the maximum likelihood (ML) reconstructions, using counts in the entire sinogram rather than only the ‘scatter-tails’. The ML-scaled scatter estimates are validated using a Monte-Carlo simulation of a NEMA-like phantom, a phantom scan with typical contrast ratios of a 68Ga-PSMA scan, and 23 whole-body 18F-FDG patient scans. On average, we observe a 12.2% change in the total amount of tracer activity of the MLEM reconstructions of our whole-body patient database when the proposed ML scatter scales are used. Furthermore, reconstructions using the ML-scaled scatter estimates are found to eliminate the typical ‘halo’ artifacts that are often observed in the vicinity of high focal uptake regions.

  10. A Unified Classification Framework for FP, DP and CP Data at X-Band in Southern China

    NASA Astrophysics Data System (ADS)

    Xie, Lei; Zhang, Hong; Li, Hhongzhong; Wang, Chao

    2015-04-01

    The main objective of this paper is to introduce a unified framework for crop classification in Southern China using data in fully polarimetric (FP), dual-pol (DP) and compact polarimetric (CP) modes. TerraSAR-X data acquired over the Leizhou Peninsula, South China are used in our experiments. The study site involves four main crops (rice, banana, sugarcane and eucalyptus). By exploring the similarities between data in these three modes, a knowledge-based characteristic space is created and the unified framework is presented. The overall classification accuracies for data in the FP and coherent HH/VV modes are about 95%, and about 91% in the CP mode, which suggests that the proposed classification scheme is effective and promising. Compared with the Wishart Maximum Likelihood (ML) classifier, the proposed method exhibits higher classification accuracy.

  11. First report on the occurrence of Theileria sp. OT3 in China.

    PubMed

    Tian, Zhancheng; Liu, Guangyuan; Yin, Hong; Xie, Junren; Wang, Suyan; Yuan, Xiaosong; Wang, Fangfang; Luo, Jin

    2014-04-01

    Theileria sp. OT3 was detected and identified for the first time in clinically healthy sheep in the Xinjiang Uygur Autonomous Region of China (XUAR), by comparing the complete 18S rDNA gene sequences available in the GenBank database and assessing its phylogenetic status based on the internal transcribed spacers (ITS1, ITS2) and the intervening 5.8S coding region of the rRNA gene, using a partitioned multi-locus analysis in BEAST and maximum likelihood analysis in PhyML. The findings were confirmed by species-specific PCR for Theileria sp. OT3, and the prevalence of Theileria sp. OT3 was 14.9% in the north of XUAR. This study is the first report of the occurrence of Theileria sp. OT3 in China. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  12. Assessment of parametric uncertainty for groundwater reactive transport modeling

    USGS Publications Warehouse

    Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun

    2014-01-01

    The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.

  13. Joint Symbol Timing and CFO Estimation for OFDM/OQAM Systems in Multipath Channels

    NASA Astrophysics Data System (ADS)

    Fusco, Tilde; Petrella, Angelo; Tanda, Mario

    2009-12-01

    The problem of data-aided synchronization for orthogonal frequency division multiplexing (OFDM) systems based on offset quadrature amplitude modulation (OQAM) in multipath channels is considered. In particular, the joint maximum-likelihood (ML) estimator for carrier-frequency offset (CFO), amplitudes, phases, and delays, exploiting a short known preamble, is derived. The ML estimators for phases and amplitudes are in closed form. Moreover, under the assumption that the CFO is sufficiently small, a closed form approximate ML (AML) CFO estimator is obtained. By exploiting the obtained closed form solutions a cost function whose peaks provide an estimate of the delays is derived. In particular, the symbol timing (i.e., the delay of the first multipath component) is obtained by considering the smallest estimated delay. The performance of the proposed joint AML estimator is assessed via computer simulations and compared with that achieved by the joint AML estimator designed for AWGN channel and that achieved by a previously derived joint estimator for OFDM systems.

  14. ML Frame Synchronization for OFDM Systems Using a Known Pilot and Cyclic Prefixes

    NASA Astrophysics Data System (ADS)

    Huh, Heon

    Orthogonal frequency-division multiplexing (OFDM) is a popular air interface technology that is adopted as a standard modulation scheme for 4G communication systems owing to its excellent spectral efficiency. For OFDM systems, synchronization problems have received much attention along with peak-to-average power ratio (PAPR) reduction. In addition to frequency offset estimation, frame synchronization is a challenging problem that must be solved to achieve optimal system performance. In this paper, we present a maximum likelihood (ML) frame synchronizer for OFDM systems. The synchronizer exploits a synchronization word and cyclic prefixes together to improve the synchronization performance. Numerical results show that the performance of the proposed frame synchronizer is better than that of conventional schemes. The proposed synchronizer can be used as a reference for evaluating the performance of other suboptimal frame synchronizers. We also modify the proposed frame synchronizer to reduce the implementation complexity and propose a near-ML synchronizer for time-varying fading channels.

  15. On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood

    ERIC Educational Resources Information Center

    Karabatsos, George

    2017-01-01

    This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon…

  16. Regularization of nonlinear decomposition of spectral x-ray projection images.

    PubMed

    Ducros, Nicolas; Abascal, Juan Felipe Perez-Juste; Sixou, Bruno; Rit, Simon; Peyrin, Françoise

    2017-09-01

    Exploiting the x-ray measurements obtained in different energy bins, spectral computed tomography (CT) has the ability to recover the 3-D description of a patient in a material basis. This may be achieved by solving two subproblems, namely the material decomposition and the tomographic reconstruction problems. In this work, we address the material decomposition of spectral x-ray projection images, which is a nonlinear ill-posed problem. Our main contribution is to introduce a material-dependent spatial regularization in the projection domain. The decomposition problem is solved iteratively using a Gauss-Newton algorithm that can benefit from fast linear solvers. A Matlab implementation is available online. The proposed regularized weighted least squares Gauss-Newton algorithm (RWLS-GN) is validated on numerical simulations of a thorax phantom made of up to five materials (soft tissue, bone, lung, adipose tissue, and gadolinium), which is scanned with a 120 kV source and imaged by a 4-bin photon counting detector. To evaluate the performance of our algorithm, different scenarios are created by varying the number of incident photons, the concentration of the marker and the configuration of the phantom. The RWLS-GN method is compared to the reference maximum likelihood Nelder-Mead algorithm (ML-NM). The convergence of the proposed method and its dependence on the regularization parameter are also studied. We show that material decomposition is feasible with the proposed method and that it converges in a few iterations. Material decomposition with ML-NM was very sensitive to noise, leading to decomposed images highly affected by noise and artifacts even in the best-case scenario. The proposed method was less sensitive to noise and improved the contrast-to-noise ratio of the gadolinium image. Results were superior to those provided by ML-NM in terms of image quality, and decomposition was 70 times faster. For the assessed experiments, material decomposition was possible with the proposed method when the number of incident photons was equal to or larger than 10^5 and when the marker concentration was equal to or larger than 0.03 g·cm^-3. The proposed method efficiently solves the nonlinear decomposition problem for spectral CT, which opens up new possibilities such as material-specific regularization in the projection domain and a parallelization framework, in which projections are solved in parallel. © 2017 American Association of Physicists in Medicine.
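    The regularized weighted least-squares Gauss-Newton iteration itself is generic and compact. A minimal sketch for an arbitrary nonlinear model (the model, Jacobian, weights and Tikhonov parameter below are illustrative assumptions, not the paper's spectral forward operator):

        import numpy as np

        def gauss_newton(f, J, y, x0, W, lam, n_iter=20):
            # solve min_x (y - f(x))^T W (y - f(x)) + lam * ||x||^2
            x = x0.copy()
            for _ in range(n_iter):
                r, Jx = y - f(x), J(x)
                A = Jx.T @ W @ Jx + lam * np.eye(x.size)   # GN normal matrix
                x += np.linalg.solve(A, Jx.T @ W @ r - lam * x)
            return x

        # toy usage: fit y = a * exp(-b t) under unit weights
        rng = np.random.default_rng(4)
        t = np.linspace(0, 5, 50)
        y = 2.0 * np.exp(-0.7 * t) + 0.02 * rng.normal(size=t.size)
        f = lambda x: x[0] * np.exp(-x[1] * t)
        J = lambda x: np.stack([np.exp(-x[1] * t),
                                -x[0] * t * np.exp(-x[1] * t)], axis=1)
        print(gauss_newton(f, J, y, np.array([1.0, 1.0]), np.eye(t.size), 1e-6))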

  17. Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information

    NASA Technical Reports Server (NTRS)

    Howell, L. W., Jr.

    2003-01-01

    A simple power law model consisting of a single spectral index, σ_1, is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV, with a transition at the knee energy, E_k, to a steeper spectral index σ_2 > σ_1 above E_k. The maximum likelihood (ML) procedure was developed for estimating the single parameter σ_1 of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramer-Rao minimum variance bound), and (P3) asymptotic normality, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine whether a given estimation procedure provides an unbiased estimate of the spectral information, and whether or not the estimator is approximately normally distributed, attainment of the Cramer-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRBs for both the simple and broken power law energy spectra are derived herein, and the conditions under which they are attained in practice are investigated.
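    In the idealized case of an unbroken power law observed without detector smearing, the ML estimate of the index even has a closed form, which makes the three properties easy to verify numerically. A toy sketch (the closed form applies to this idealized spectrum only, not to the detector-response setting of the paper):

        import numpy as np

        # p(E) = (sigma - 1) E_min^(sigma-1) E^-sigma on [E_min, inf):
        # sigma_hat = 1 + n / sum(log(E_i / E_min)), SE ~ (sigma_hat - 1)/sqrt(n)
        rng = np.random.default_rng(9)
        sigma_true, E_min, n = 2.7, 1.0, 5000

        def estimate():
            E = E_min * (1 + rng.pareto(sigma_true - 1, n))
            return 1 + n / np.log(E / E_min).sum()

        reps = np.array([estimate() for _ in range(1000)])
        # unbiasedness, efficiency (CRB) and approximate normality in one line:
        print(reps.mean(), reps.std(), (sigma_true - 1) / np.sqrt(n))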

  18. Comparison between Two Endotracheal Tube Cuff Inflation Methods; Just-Seal Vs. Stethoscope-Guided.

    PubMed

    Borhazowal, Rishiraj; Harde, Minal; Bhadade, Rakesh; Dave, Sona; Aswar, Swapnil Ganeshrao

    2017-06-01

    The Endotracheal Tube (ETT) cuff performs the critical function of sealing the airway during positive pressure ventilation. There is a narrow range of cuff pressure required to maintain a functionally safe seal without exceeding capillary blood pressure. We aimed to compare the Just-Seal (JS) and Stethoscope-Guided (SG) methods of ETT cuff inflation with respect to the volume of air required to inflate the cuff and the manometric cuff pressure achieved, and to assess the occurrence of postoperative sore throat after extubation in both groups. It was a prospective observational study done in a tertiary teaching public hospital over a period of 1½ years on 100 patients, with 50 in each of two groups: the JS or SG method of cuff inflation. SPSS Version 17 was used for data analysis. A statistically significant difference (p-value of less than 0.05) was noted between the two methods in the volume of air injected into the cuff {mean volume 6.79 ml for JS and 4.95 ml for SG, p=5.71E-16 (< 0.05)} and the cuff pressure achieved {mean cuff pressure 38.80 cm H2O for JS and 29.64 cm H2O for SG, p=2.29E-14 (< 0.05)}. The incidence of post-extubation sore throat was 54% (27 of 50) in the JS group and only 12% (6 of 50) in the SG group; p=0.00000797. ETT cuff inflation guided by a stethoscope is an effective technique for ensuring appropriate cuff pressures, thus providing safe and superior-quality care both during and after anaesthesia and reducing the likelihood of even minimal-risk complications that may still have legal implications.

  19. Bias correction in the hierarchical likelihood approach to the analysis of multivariate survival data.

    PubMed

    Jeon, Jihyoun; Hsu, Li; Gorfine, Malka

    2012-07-01

    Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low, however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.

  20. Robust geostatistical analysis of spatial data

    NASA Astrophysics Data System (ADS)

    Papritz, Andreas; Künsch, Hans Rudolf; Schwierz, Cornelia; Stahel, Werner A.

    2013-04-01

    Most geostatistical software tools rely on non-robust algorithms. This is unfortunate, because outlying observations are the rule rather than the exception, particularly in environmental data sets. Outliers affect the modelling of the large-scale spatial trend, the estimation of the spatial dependence of the residual variation, and the predictions by kriging. Identifying outliers manually is cumbersome and requires expertise, because one needs parameter estimates to decide which observation is a potential outlier. Moreover, inference after the rejection of some observations is problematic. A better approach is to use robust algorithms that automatically prevent outlying observations from having undue influence. Earlier studies on robust geostatistics focused on robust estimation of the sample variogram and on ordinary kriging without external drift. Furthermore, Richardson and Welsh (1995) proposed a robustified version of (restricted) maximum likelihood ([RE]ML) estimation for the variance components of a linear mixed model, which was later used by Marchant and Lark (2007) for robust REML estimation of the variogram. We propose here a novel method for robust REML estimation of the variogram of a Gaussian random field that is possibly contaminated by independent errors from a long-tailed distribution. It is based on robustification of the estimating equations for Gaussian REML estimation (Welsh and Richardson, 1997). Besides robust estimates of the parameters of the external drift and of the variogram, the method also provides standard errors for the estimated parameters, robustified kriging predictions at both sampled and non-sampled locations, and kriging variances. Apart from presenting our modelling framework, we shall present selected simulation results by which we explored the properties of the new method. This will be complemented by the analysis of a data set on heavy metal contamination of the soil in the vicinity of a metal smelter. Marchant, B.P. and Lark, R.M. 2007. Robust estimation of the variogram by residual maximum likelihood. Geoderma 140: 62-72. Richardson, A.M. and Welsh, A.H. 1995. Robust restricted maximum likelihood in mixed linear models. Biometrics 51: 1429-1439. Welsh, A.H. and Richardson, A.M. 1997. Approaches to the robust estimation of mixed models. In: Handbook of Statistics Vol. 15, Elsevier, pp. 343-384.

  1. A general diagnostic model applied to language testing data.

    PubMed

    von Davier, Matthias

    2008-11-01

    Probabilistic models with one or more latent variables are designed to report on a corresponding number of skills or cognitive attributes. Multidimensional skill profiles offer additional information beyond what a single test score can provide, if the reported skills can be identified and distinguished reliably. Many recent approaches to skill profile models are limited to dichotomous data and have made use of computationally intensive estimation methods such as Markov chain Monte Carlo, since standard maximum likelihood (ML) estimation techniques were deemed infeasible. This paper presents a general diagnostic model (GDM) that can be estimated with standard ML techniques and applies to polytomous response variables as well as to skills with two or more proficiency levels. The paper uses one member of a larger class of diagnostic models, a compensatory diagnostic model for dichotomous and partial credit data. Many well-known models, such as univariate and multivariate versions of the Rasch model and the two-parameter logistic item response theory model, the generalized partial credit model, as well as a variety of skill profile models, are special cases of this GDM. In addition to an introduction to this model, the paper presents a parameter recovery study using simulated data and an application to real data from the field test for TOEFL Internet-based testing.

  2. Genetic diversity of Histoplasma and Sporothrix complexes based on sequences of their ITS1-5.8S-ITS2 regions from the BOLD System.

    PubMed

    Estrada-Bárcenas, Daniel Alfonso; Vite-Garín, Tania; Navarro-Barranco, Hortensia; de la Torre-Arciniega, Raúl; Pérez-Mejía, Amelia; Rodríguez-Arellanes, Gabriela; Ramirez, Jose Antonio; Humberto Sahaza, Jorge; Taylor, Maria Lucia; Toriello, Conchita

    2014-01-01

    The high sensitivity and specificity of molecular biology techniques have proven useful for the detection, identification and typing of different pathogens. The ITS (Internal Transcribed Spacer) regions of the ribosomal DNA are highly conserved non-coding regions and have been widely used in different studies, including the determination of the genetic diversity of human fungal pathogens. This article aims to contribute to the understanding of the intra- and interspecific genetic diversity of isolates of the Histoplasma capsulatum and Sporothrix schenckii species complexes through an analysis of the available sequences of the ITS regions from different sequence databases. ITS1-5.8S-ITS2 sequences of each fungus, either deposited in GenBank or from our research groups (registered in the Fungi Barcode of Life Database), were analyzed using the maximum likelihood (ML) method. ML analysis of the ITS sequences discriminated isolates from distant geographic origins and particular wild hosts, depending on the fungal species analyzed. This manuscript is part of the series of works presented at the "V International Workshop: Molecular genetic approaches to the study of human pathogenic fungi" (Oaxaca, Mexico, 2012). Copyright © 2013 Revista Iberoamericana de Micología. Published by Elsevier Espana. All rights reserved.

  3. A unified framework for group independent component analysis for multi-subject fMRI data

    PubMed Central

    Guo, Ying; Pagnoni, Giuseppe

    2008-01-01

    Independent component analysis (ICA) is becoming increasingly popular for analyzing functional magnetic resonance imaging (fMRI) data. While ICA has been successfully applied to single-subject analysis, the extension of ICA to group inferences is not straightforward and remains an active topic of research. Current group ICA models, such as the GIFT (Calhoun et al., 2001) and tensor PICA (Beckmann and Smith, 2005), make different assumptions about the underlying structure of the group spatio-temporal processes and are thus estimated using algorithms tailored for the assumed structure, potentially leading to diverging results. To our knowledge, there are currently no methods for assessing the validity of different model structures in real fMRI data and selecting the most appropriate one among various choices. In this paper, we propose a unified framework for estimating and comparing group ICA models with varying spatio-temporal structures. We consider a class of group ICA models that can accommodate different group structures and include existing models, such as the GIFT and tensor PICA, as special cases. We propose a maximum likelihood (ML) approach with a modified Expectation-Maximization (EM) algorithm for the estimation of the proposed class of models. Likelihood ratio tests (LRT) are presented to compare between different group ICA models. The LRT can be used to perform model comparison and selection, to assess the goodness-of-fit of a model in a particular data set, and to test group differences in the fMRI signal time courses between subject subgroups. Simulation studies are conducted to evaluate the performance of the proposed method under varying structures of group spatio-temporal processes. We illustrate our group ICA method using data from an fMRI study that investigates changes in neural processing associated with the regular practice of Zen meditation. PMID:18650105
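    The likelihood ratio test machinery used here is standard and easy to demonstrate in miniature. A generic sketch comparing two nested Gaussian models (purely illustrative; not the group ICA models of the paper):

        import numpy as np
        from scipy.stats import chi2

        rng = np.random.default_rng(5)
        x = rng.normal(0.3, 1.0, 200)

        def gauss_ll(x, mu, s2):
            return -0.5 * np.sum(np.log(2 * np.pi * s2) + (x - mu)**2 / s2)

        ll_reduced = gauss_ll(x, 0.0, np.mean(x**2))   # mean fixed at zero
        ll_full = gauss_ll(x, x.mean(), x.var())       # mean estimated
        lrt = 2 * (ll_full - ll_reduced)               # ~ chi2(1) under the null
        print(lrt, chi2.sf(lrt, df=1))   # small p-value favors the full model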

  4. Use of Multiple Imputation Method to Improve Estimation of Missing Baseline Serum Creatinine in Acute Kidney Injury Research

    PubMed Central

    Peterson, Josh F.; Eden, Svetlana K.; Moons, Karel G.; Ikizler, T. Alp; Matheny, Michael E.

    2013-01-01

    Background and objectives: Baseline creatinine (BCr) is frequently missing in AKI studies. Common surrogate estimates can misclassify AKI and adversely affect the study of related outcomes. This study examined whether multiple imputation improved accuracy of estimating missing BCr beyond current recommendations to apply an assumed estimated GFR (eGFR) of 75 ml/min per 1.73 m^2 (eGFR 75). Design, setting, participants, and measurements: From 41,114 unique adult admissions (13,003 with and 28,111 without BCr data) at Vanderbilt University Hospital between 2006 and 2008, a propensity score model was developed to predict the likelihood of missing BCr. Propensity scoring identified the 6502 patients with the highest likelihood of missing BCr among the 13,003 patients with known BCr, to simulate a "missing" data scenario while preserving the actual reference BCr. Within this cohort (n=6502), the ability of various multiple-imputation approaches to estimate BCr and classify AKI was compared with that of eGFR 75. Results: All multiple-imputation methods except the basic one approximated actual BCr more closely than did eGFR 75. Total AKI misclassification was lower with multiple imputation (full multiple imputation + serum creatinine) (9.0%) than with eGFR 75 (12.3%; P<0.001). Improvements in misclassification were greater in patients with impaired kidney function: full multiple imputation + serum creatinine (15.3%) versus eGFR 75 (40.5%; P<0.001). Multiple imputation improved specificity and positive predictive value for detecting AKI at the expense of modestly decreased sensitivity relative to eGFR 75. Conclusions: Multiple imputation can improve accuracy in estimating missing BCr and reduce misclassification of AKI beyond currently proposed methods. PMID:23037980
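    A minimal sketch of the multiple-imputation idea on a toy baseline covariate, using scikit-learn's IterativeImputer with posterior sampling to generate several completed datasets (the variables and missingness mechanism are invented for illustration; a full analysis would pool estimates with Rubin's rules):

        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer

        rng = np.random.default_rng(6)
        n = 500
        age = rng.normal(60, 10, n)
        creat = 0.8 + 0.01 * age + rng.normal(0, 0.1, n)  # toy baseline creatinine
        X = np.column_stack([age, creat])
        X[rng.random(n) < 0.3, 1] = np.nan                # 30% missing baselines

        draws = [IterativeImputer(sample_posterior=True, random_state=k)
                 .fit_transform(X)[:, 1] for k in range(5)]
        print(np.mean([d.mean() for d in draws]))         # pooled point estimate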

  5. Variance Difference between Maximum Likelihood Estimation Method and Expected A Posteriori Estimation Method Viewed from Number of Test Items

    ERIC Educational Resources Information Center

    Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.

    2016-01-01

    The aim of this study is to determine variance difference between maximum likelihood and expected A posteriori estimation methods viewed from number of test items of aptitude test. The variance presents an accuracy generated by both maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…

  6. The complete mitochondrial genome structure of the jaguar (Panthera onca).

    PubMed

    Caragiulo, Anthony; Dougherty, Eric; Soto, Sofia; Rabinowitz, Salisa; Amato, George

    2016-01-01

    The jaguar (Panthera onca) is the largest felid in the Western hemisphere, and the only member of the Panthera genus in the New World. The jaguar inhabits most countries within Central and South America, and is considered near threatened by the International Union for the Conservation of Nature. This study represents the first sequence of the entire jaguar mitogenome, which was the only Panthera mitogenome that had not been sequenced. The jaguar mitogenome is 17,049 bases and possesses the same molecular structure as other felid mitogenomes. Bayesian inference (BI) and maximum likelihood (ML) were used to determine the phylogenetic placement of the jaguar within the Panthera genus. Both BI and ML analyses revealed the jaguar to be sister to the tiger/leopard/snow leopard clade.

  7. New applications of maximum likelihood and Bayesian statistics in macromolecular crystallography.

    PubMed

    McCoy, Airlie J

    2002-10-01

    Maximum likelihood methods are well known to macromolecular crystallographers as the methods of choice for isomorphous phasing and structure refinement. Recently, the use of maximum likelihood and Bayesian statistics has extended to the areas of molecular replacement and density modification, placing these methods on a stronger statistical foundation and making them more accurate and effective.

  8. A comparison of correlation-length estimation methods for the objective analysis of surface pollutants at Environment and Climate Change Canada.

    PubMed

    Ménard, Richard; Deshaies-Jacques, Martin; Gasset, Nicolas

    2016-09-01

    An objective analysis is one of the main components of data assimilation. By combining observations with the output of a predictive model, we combine the best features of each source of information: the complete spatial and temporal coverage provided by models, with a close representation of the truth provided by observations. The process of combining observations with a model output is called an analysis. Producing an analysis requires knowledge of the observation and model errors, as well as their spatial correlations. This paper is devoted to developing methods for estimating these error variances and the characteristic length-scale of the model error correlation for operational use in the Canadian objective analysis system. We first argue in favor of using compact support correlation functions, and then introduce three estimation methods: the Hollingsworth-Lönnberg (HL) method in local and global form, the maximum likelihood (ML) method, and the [Formula: see text] diagnostic method. We perform one-dimensional (1D) simulation studies where the error variance and true correlation length are known, and then estimate both error variances and the correlation length when both are non-uniform. We show that a local version of the HL method can accurately capture the error variances and correlation length at each observation site, provided that the spatial variability is not too strong. However, the operational objective analysis requires only a single, globally valid correlation length. We examine whether any statistic of the local HL correlation lengths could be a useful estimate, or whether other global estimation methods, such as the global HL, ML, or [Formula: see text] methods, should be used. In both the 1D simulations and with real data, we found that the ML method is able to capture physically significant aspects of the correlation length, while most other estimates give unphysical, larger length-scale values. This paper describes a proposed improvement of the objective analysis of surface pollutants at Environment and Climate Change Canada (formerly known as Environment Canada). Objective analyses are essentially surface maps of air pollutants obtained by combining observations with an air quality model output, and are thought to provide a complete and more accurate representation of air quality. The highlight of this study is an analysis of methods to estimate the model (or background) error correlation length-scale. The error statistics are an important and critical component of the analysis scheme.
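
    As a rough 1D illustration of the ML route (a sketch under assumed Gaussian innovations and an exponential background-error correlation, not the paper's implementation), the two error variances and the length-scale can be fit by maximizing a Gaussian likelihood of observation-minus-background differences:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0, 1000, 80))      # station positions (km)
    D = np.abs(x[:, None] - x[None, :])        # pairwise distances
    C_true = 1.0**2 * np.exp(-D / 150.0) + 0.5**2 * np.eye(len(x))
    d = rng.multivariate_normal(np.zeros(len(x)), C_true)  # synthetic innovations

    def neg_loglik(p):
        sb, so, L = np.exp(p)                  # log-parametrization keeps them > 0
        C = sb**2 * np.exp(-D / L) + so**2 * np.eye(len(x))
        sign, logdet = np.linalg.slogdet(C)
        return 0.5 * (logdet + d @ np.linalg.solve(C, d))

    fit = minimize(neg_loglik, np.log([0.5, 0.5, 50.0]), method="Nelder-Mead")
    print(dict(zip(("sigma_b", "sigma_o", "L_km"), np.exp(fit.x).round(3))))
    ```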

  9. Measuring coherence of computer-assisted likelihood ratio methods.

    PubMed

    Haraksim, Rudolf; Ramos, Daniel; Meuwly, Didier; Berger, Charles E H

    2015-04-01

    Measuring the performance of forensic evaluation methods that compute likelihood ratios (LRs) is relevant for both the development and the validation of such methods. A framework of performance characteristics categorized as primary and secondary is introduced in this study to help achieve such development and validation. Ground-truth labelled fingerprint data is used to assess the performance of an example likelihood ratio method in terms of those performance characteristics. Discrimination, calibration, and especially the coherence of this LR method are assessed as a function of the quantity and quality of the trace fingerprint specimen. Assessment of the coherence revealed a weakness of the comparison algorithm in the computer-assisted likelihood ratio method used. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
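
    The record names discrimination and calibration among the primary characteristics; one widely used scalar summary of both for LR methods is the log-likelihood-ratio cost, Cllr. A toy sketch of its computation from ground-truth-labelled LR values (illustrative numbers, not the paper's fingerprint data):

    ```python
    import numpy as np

    lr_ss = np.array([4.0, 12.0, 0.8, 30.0])  # LRs for same-source pairs
    lr_ds = np.array([0.2, 0.05, 1.5, 0.01])  # LRs for different-source pairs

    def cllr(lr_same, lr_diff):
        # Penalizes same-source LRs below 1 and different-source LRs above 1.
        return 0.5 * (np.mean(np.log2(1 + 1 / lr_same)) +
                      np.mean(np.log2(1 + lr_diff)))

    print(f"Cllr = {cllr(lr_ss, lr_ds):.3f}")  # 0 is perfect; ~1 is uninformative
    ```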

  10. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

    PubMed

    Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

    2017-02-01

    In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero counts. Some of these are "true zeros," indicating that the drug-adverse event pair cannot occur; they are distinguished from the remaining zeros, which simply indicate that the pair has not occurred yet or has not been reported yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, also called signals. The maximum likelihood estimates of the model parameters of the zero-inflated Poisson model based likelihood ratio test are obtained using the expectation-maximization algorithm. The zero-inflated Poisson model based likelihood ratio test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed zero-inflated Poisson model based likelihood ratio test method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test method performs similarly to the Poisson model based likelihood ratio test method when the estimated percentage of true zeros in the database is small. Both the zero-inflated Poisson model based likelihood ratio test and likelihood ratio test methods are applied to six selected drugs from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
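
    The EM updates for the ZIP maximum likelihood estimates have a simple closed form; below is a minimal sketch of that iteration on toy counts (pi is the structural-zero probability, lam the Poisson mean), not the FDA database analysis itself:

    ```python
    import numpy as np

    def zip_em(y, tol=1e-8, max_iter=500):
        pi, lam = 0.5, max(y.mean(), 1e-6)
        for _ in range(max_iter):
            # E-step: probability that each observed zero is a structural zero
            w = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
            # M-step: closed-form updates
            pi_new, lam_new = w.mean(), y.sum() / (1 - w).sum()
            if abs(pi_new - pi) + abs(lam_new - lam) < tol:
                break
            pi, lam = pi_new, lam_new
        return pi, lam

    y = np.concatenate([np.zeros(60, int),
                        np.random.default_rng(2).poisson(3.0, 40)])
    print("pi=%.3f  lam=%.3f" % zip_em(y))
    ```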

  11. Digital tomosynthesis mammography using a parallel maximum-likelihood reconstruction method

    NASA Astrophysics Data System (ADS)

    Wu, Tao; Zhang, Juemin; Moore, Richard; Rafferty, Elizabeth; Kopans, Daniel; Meleis, Waleed; Kaeli, David

    2004-05-01

    A parallel reconstruction method, based on an iterative maximum likelihood (ML) algorithm, is developed to provide fast reconstruction for digital tomosynthesis mammography. Tomosynthesis mammography acquires 11 low-dose projections of a breast by moving an x-ray tube over a 50° angular range. In parallel reconstruction, each projection is divided into multiple segments along the chest-to-nipple direction. Using the 11 projections, segments located at the same distance from the chest wall are combined to compute a partial reconstruction of the total breast volume. The shape of the partial reconstruction forms a thin slab, angled toward the x-ray source at a projection angle of 0°. The reconstruction of the total breast volume is obtained by merging the partial reconstructions. The overlap region between neighboring partial reconstructions and neighboring projection segments is utilized to compensate for the incomplete data at the boundary locations present in the partial reconstructions. A serial execution of the reconstruction is compared to a parallel implementation, using clinical data. The serial code was run on a PC with a single Pentium IV 2.2 GHz CPU. The parallel implementation was developed using MPI and run on a 64-node Linux cluster using 800 MHz Itanium CPUs. The serial reconstruction for a medium-sized breast (5 cm thickness, 11 cm chest-to-nipple distance) takes 115 minutes, while the parallel implementation takes only 3.5 minutes. The reconstruction for a larger breast using a serial implementation takes 187 minutes, while a parallel implementation takes 6.5 minutes. No significant differences were observed between the reconstructions produced by the serial and parallel implementations.
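
    The core iteration behind such reconstructions is the multiplicative ML-EM update, x <- x * A^T(y / Ax) / A^T 1. The sketch below applies it to a tiny random system matrix purely to show the update's shape; the paper's geometry, segmentation, and MPI parallelization are not reproduced:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.random((64, 32))        # toy system matrix: 64 rays, 32 voxels
    x_true = rng.random(32) + 0.1
    y = rng.poisson(A @ x_true)     # noisy projection data

    x = np.ones(32)                 # uniform initial estimate
    sens = A.T @ np.ones(len(y))    # sensitivity image, A^T 1
    for _ in range(50):
        ratio = y / np.maximum(A @ x, 1e-12)  # measured / predicted projections
        x *= (A.T @ ratio) / sens             # multiplicative ML-EM update

    print("relative error:",
          np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```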

  12. Prevalence of Propionibacterium acnes in Intervertebral Discs of Patients Undergoing Lumbar Microdiscectomy: A Prospective Cross-Sectional Study

    PubMed Central

    Capoor, Manu N.; Ruzicka, Filip; Machackova, Tana; Jancalek, Radim; Smrcka, Martin; Schmitz, Jonathan E.; Hermanova, Marketa; Sana, Jiri; Michu, Elleni; Baird, John C.; Ahmed, Fahad S.; Maca, Karel; Lipina, Radim; Alamin, Todd F.; Coscia, Michael F.; Stonemetz, Jerry L.; Witham, Timothy; Ehrlich, Garth D.; Gokaslan, Ziya L.; Mavrommatis, Konstantinos; Birkenmaier, Christof; Fischetti, Vincent A.; Slaby, Ondrej

    2016-01-01

    Background The relationship between intervertebral disc degeneration and chronic infection by Propionibacterium acnes is controversial, with contradictory evidence available in the literature. Previous studies investigating these relationships were underpowered and fraught with methodological differences; moreover, they did not take into consideration P. acnes’ ability to form biofilms, nor did they attempt to quantitate the bioburden by determining bacterial counts/genome equivalents as criteria to differentiate true infection from contamination. The aim of this prospective cross-sectional study was to determine the prevalence of P. acnes in patients undergoing lumbar disc microdiscectomy. Methods and Findings The sample consisted of 290 adult patients undergoing lumbar microdiscectomy for symptomatic lumbar disc herniation. An intraoperative biopsy and pre-operative clinical data were taken in all cases. One biopsy fragment was homogenized and used for quantitative anaerobic culture, and a second was frozen and used for real-time PCR-based quantification of P. acnes genomes. P. acnes was identified in 115 cases (40%), coagulase-negative staphylococci in 31 cases (11%) and alpha-hemolytic streptococci in 8 cases (3%). P. acnes counts ranged from 100 to 9000 CFU/ml with a median of 400 CFU/ml. The prevalence of intervertebral discs with abundant P. acnes (≥1×10³ CFU/ml) was 11% (39 cases). There was a significant correlation between the bacterial counts obtained by culture and the number of P. acnes genomes detected by real-time PCR (r = 0.4363, p<0.0001). Conclusions In a large series of patients, the prevalence of discs with abundant P. acnes was 11%. We believe disc tissue homogenization releases P. acnes from the biofilm so that the bacteria can be cultured, reducing the rate of false-negative cultures. Further, the quantification results, showing significant bioburden by both culture and real-time PCR, minimize the likelihood that the observed findings are due to contamination and support the hypothesis that P. acnes acts as a pathogen in these cases of degenerative disc disease. PMID:27536784

  13. Diagnostic Accuracy of Tests for Polyuria in Lithium-Treated Patients.

    PubMed

    Kinahan, James Conor; NiChorcorain, Aoife; Cunningham, Sean; Freyne, Aideen; Cooney, Colm; Barry, Siobhan; Kelly, Brendan D

    2015-08-01

    In lithium-treated patients, polyuria increases the risk of dehydration and lithium toxicity. If detected early, it is reversible. Despite its prevalence and associated morbidity in clinical practice, it remains underrecognized and therefore undertreated. The 24-hour urine collection is limited by its inconvenience and impracticality. This study explores the diagnostic accuracy of alternative tests such as questionnaires on subjective polyuria, polydipsia, and nocturia (dichotomous and ordinal responses), early morning urine sample osmolality (EMUO), and a fluid intake record (FIR). This is a cross-sectional study of 179 lithium-treated patients attending a general adult and an old age psychiatry service. Participants completed the tests after completing an accurate 24-hour urine collection. The diagnostic accuracy of the individual tests was explored using the appropriate statistical techniques. Seventy-nine participants completed all of the tests. Polydipsia severity, EMUO, and FIR significantly differentiated the participants with polyuria (areas under the receiver operating characteristic curve of 0.646, 0.760, and 0.846, respectively). Of the tests investigated, the FIR made the largest significant change in the probability that a patient experiences polyuria (<2000 mL/24 hours: interval likelihood ratio 0.18; >3500 mL/24 hours: interval likelihood ratio 14). Symptomatic questioning, EMUO, and an FIR could be used in clinical practice to inform the prescriber of the probability that a lithium-treated patient is experiencing polyuria.
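
    Interval likelihood ratios update a pretest probability through Bayes' rule in odds form. A small sketch using the two ILRs quoted above and an assumed 30% pretest probability (the assumed prevalence is illustrative, not from the study):

    ```python
    def posttest_prob(pretest, lr):
        odds = pretest / (1 - pretest) * lr
        return odds / (1 + odds)

    for lr in (0.18, 14):
        print(f"ILR {lr:>5}: posttest probability = {posttest_prob(0.30, lr):.2f}")
    # ILR 0.18 -> ~0.07, effectively ruling polyuria out;
    # ILR 14   -> ~0.86, strongly ruling it in.
    ```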

  14. Branch length estimation and divergence dating: estimates of error in Bayesian and maximum likelihood frameworks.

    PubMed

    Schwartz, Rachel S; Mueller, Rachel L

    2010-01-11

    Estimates of divergence dates between species improve our understanding of processes ranging from nucleotide substitution to speciation. Such estimates are frequently based on molecular genetic differences between species; therefore, they rely on accurate estimates of the number of such differences (i.e. substitutions per site, measured as branch length on phylogenies). We used simulations to determine the effects of dataset size, branch length heterogeneity, branch depth, and analytical framework on branch length estimation across a range of branch lengths. We then reanalyzed an empirical dataset for plethodontid salamanders to determine how inaccurate branch length estimation can affect estimates of divergence dates. The accuracy of branch length estimation varied with branch length, dataset size (both number of taxa and sites), branch length heterogeneity, branch depth, dataset complexity, and analytical framework. For simple phylogenies analyzed in a Bayesian framework, branches were increasingly underestimated as branch length increased; in a maximum likelihood framework, longer branch lengths were somewhat overestimated. Longer datasets improved estimates in both frameworks; however, when the number of taxa was increased, estimation accuracy for deeper branches was less than for tip branches. Increasing the complexity of the dataset produced more misestimated branches in a Bayesian framework; however, in an ML framework, more branches were estimated more accurately. Using ML branch length estimates to re-estimate plethodontid salamander divergence dates generally resulted in an increase in the estimated age of older nodes and a decrease in the estimated age of younger nodes. Branch lengths are misestimated in both statistical frameworks for simulations of simple datasets. However, for complex datasets, length estimates are quite accurate in ML (even for short datasets), whereas few branches are estimated accurately in a Bayesian framework. Our reanalysis of empirical data demonstrates the magnitude of effects of Bayesian branch length misestimation on divergence date estimates. Because the length of branches for empirical datasets can be estimated most reliably in an ML framework when branches are <1 substitution/site and datasets are ≥1 kb, we suggest that divergence date estimates using datasets, branch lengths, and/or analytical techniques that fall outside of these parameters should be interpreted with caution.

  15. Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information

    NASA Technical Reports Server (NTRS)

    Howell, L. W.

    2002-01-01

    A simple power law model consisting of a single spectral index, α₁, is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10¹³ eV, with a transition at the knee energy, E_k, to a steeper spectral index α₂ > α₁ above E_k. The maximum likelihood (ML) procedure was developed for estimating the single parameter α₁ of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have the three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramér-Rao minimum variance bound), and (P3) asymptotic normality, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine if a given estimation procedure provides an unbiased estimate of the spectral information, and whether or not the estimator is approximately normally distributed, attainment of the Cramér-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRBs for both the simple and broken power law energy spectra are derived herein, and the conditions under which they are attained in practice are investigated. The ML technique is then extended to estimate spectral information from an arbitrary number of astrophysics data sets produced by vastly different science instruments. This theory and its successful implementation will facilitate the interpretation of spectral information from multiple astrophysics missions and thereby permit the derivation of superior spectral parameter estimates based on the combination of data sets.
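
    For the simple (unbroken) power law, the ML estimate of the index and its CRB have closed forms; the sketch below assumes perfect energy measurement above a threshold E_min, which is a simplification of the detector-response setting treated in the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    alpha_true, e_min, n = 2.7, 1.0, 10_000
    E = e_min * (1.0 + rng.pareto(alpha_true - 1.0, n))  # draws with p(E) ~ E^-alpha

    alpha_hat = 1.0 + n / np.log(E / e_min).sum()  # ML estimate of the index
    crb_var = (alpha_hat - 1.0) ** 2 / n           # Cramér-Rao variance bound

    print(f"alpha_hat = {alpha_hat:.4f} +/- {np.sqrt(crb_var):.4f}")
    ```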

  16. Molecular phylogeny of Pompilinae (Hymenoptera: Pompilidae): Evidence for rapid diversification and host shifts in spider wasps.

    PubMed

    Rodriguez, Juanita; Pitts, James P; Florez, Jaime A; Bond, Jason E; von Dohlen, Carol D

    2016-01-01

    Pompilinae is one of the largest subfamilies of spider wasps (Pompilidae). Most pompilines are generalist spider predators at the family level, but some taxa exhibit ecological specificity (i.e., to spider-host guild). Here we present the first molecular phylogenetic analysis of Pompilinae, toward the aim of evaluating the monophyly of tribes and genera. We further test whether changes in the rate of diversification are associated with host-guild shifts. Molecular data were collected from five nuclear loci (28S, EF1-F2, LWRh, Wg, Pol2) for 76 taxa in 39 genera. Data were analyzed using maximum likelihood (ML) and Bayesian inference (BI). The phylogenetic results were compared with previous hypotheses of subfamilial and tribal classification, as well as generic relationships in the subfamily. The classification of Pompilus and Agenioideus is also discussed. A Bayesian relaxed molecular clock analysis was used to examine divergence times. Diversification rate-shift tests accounted for taxon-sampling bias using ML and BI approaches. Ancestral host family and host guild were reconstructed using MP and ML methods. Ancestral host guild for all Pompilinae, for the ancestor at the node where a diversification rate-shift was detected, and two more nodes back in time was inferred using BI. In the resulting phylogenies, Aporini was the only previously proposed monophyletic tribe. Several genera (e.g., Pompilus, Microphadnus and Schistonyx) are also not monophyletic. Dating analyses produced a well-supported chronogram consistent with topologies from BI and ML results. The BI ancestral host-use reconstruction inferred the use of spiders belonging to the guild "other hunters" (frequenting the ground and vegetation) as the ancestral state for Pompilinae. This guild had the highest probability for the ML reconstruction and was equivocal for the MP reconstruction; various switching events to other guilds occurred throughout the evolution of the group. The diversification of Pompilinae shows one main rate-shift coinciding with a shift to ground-hunter spiders, as reconstructed by the BI ancestral character-state analysis. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Estimating parameter of Rayleigh distribution by using Maximum Likelihood method and Bayes method

    NASA Astrophysics Data System (ADS)

    Ardianti, Fitri; Sutarman

    2018-01-01

    In this paper, we use maximum likelihood estimation and the Bayes method under several risk functions to estimate the parameter of the Rayleigh distribution, and to determine which method performs best. The prior used in the Bayes method is Jeffreys' non-informative prior. Maximum likelihood estimation and the Bayes method under the precautionary loss function, the entropy loss function, and the L1 loss function are compared. We compare these methods by bias and MSE using the R program, and the results are displayed in tables to facilitate the comparisons.
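
    A minimal sketch of the comparison under one convenient case: the Rayleigh MLE of sigma² versus the Bayes posterior mean under Jeffreys' prior (which gives an Inverse-Gamma(n, S/2) posterior for sigma²) and squared-error loss; the paper's precautionary, entropy, and L1 losses would change the Bayes estimator but not the simulation pattern:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    sigma_true, n, reps = 2.0, 20, 2000
    err_ml, err_bayes = [], []
    for _ in range(reps):
        x = rng.rayleigh(sigma_true, n)
        S = np.sum(x**2)
        err_ml.append(S / (2 * n) - sigma_true**2)         # MLE of sigma^2
        err_bayes.append(S / 2 / (n - 1) - sigma_true**2)  # Inv-Gamma posterior mean

    for name, e in (("ML", np.array(err_ml)), ("Bayes", np.array(err_bayes))):
        print(f"{name:5s} bias={e.mean():+.4f}  MSE={np.mean(e**2):.4f}")
    ```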

  18. Placing cryptic, recently extinct, or hypothesized taxa into an ultrametric phylogeny using continuous character data: a case study with the lizard Anolis roosevelti.

    PubMed

    Revell, Liam J; Mahler, D Luke; Reynolds, R Graham; Slater, Graham J

    2015-04-01

    In recent years, enormous effort and investment have been put into assembling the tree of life: a phylogenetic history for all species on Earth. Overwhelmingly, this progress toward building an increasingly complete phylogeny of living things has been accomplished through sophisticated analysis of molecular data. In the modern genomic age, molecular genetic data have become very easy and inexpensive to obtain for many species. However, some lineages are poorly represented in or absent from tissue collections, or are unavailable for molecular analysis for other reasons, such as restrictive biological sample export laws. Other species went extinct recently and are only available in formalin museum preparations or perhaps even as subfossils. In this brief communication we present a new method for placing cryptic, recently extinct, or hypothesized taxa into an ultrametric phylogeny of extant taxa using continuous character data. This method is based on a relatively simple modification of an established maximum likelihood (ML) method for phylogeny inference from continuous traits. We show that the method works well on simulated trees and data. We then apply it to the case of placing the Culebra Island Giant Anole (Anolis roosevelti) into a phylogeny of Caribbean anoles. Anolis roosevelti is a "crown-giant" ecomorph anole hypothesized to have once been found throughout the Spanish, United States, and British Virgin Islands, but that has not been encountered or collected since the 1930s. Although this species is widely thought to be closely related to the Puerto Rican giant anole, A. cuvieri, our ML method actually places A. roosevelti in a different part of the tree, closely related to a clade of morphologically similar species. We are unable, however, to reject a phylogenetic position for A. roosevelti as the sister taxon to A. cuvieri, although a close relationship with the remainder of the Puerto Rican anole species is strongly rejected by our method. © 2015 The Author(s).

  19. No evidence for the use of DIR, D–D fusions, chromosome 15 open reading frames or VH replacement in the peripheral repertoire was found on application of an improved algorithm, JointML, to 6329 human immunoglobulin H rearrangements

    PubMed Central

    Ohm-Laursen, Line; Nielsen, Morten; Larsen, Stine R; Barington, Torben

    2006-01-01

    Antibody diversity is created by imprecise joining of the variability (V), diversity (D) and joining (J) gene segments of the heavy and light chain loci. Analysis of rearrangements is complicated by somatic hypermutations and uncertainty concerning the sources of gene segments and the precise way in which they recombine. It has been suggested that D genes with irregular recombination signal sequences (DIR) and chromosome 15 open reading frames (OR15) can replace conventional D genes, that two D genes or inverted D genes may be used and that the repertoire can be further diversified by heavy chain V gene (VH) replacement. Safe conclusions require large, well-defined sequence samples and algorithms minimizing stochastic assignment of segments. Two computer programs were developed for analysis of heavy chain joints. JointHMM is a profile hidden Markov model, while JointML is a maximum-likelihood-based method taking the length of the joint and the mutational status of the VH gene into account. The programs were applied to a set of 6329 clonally unrelated rearrangements. A conventional D gene was found in 80% of unmutated sequences and 64% of mutated sequences, while D-gene assignment was kept below 5% in artificial (randomly permutated) rearrangements. No evidence for the use of DIR, OR15, multiple D genes or VH replacements was found, while inverted D genes were used in less than 1‰ of the sequences. JointML was shown to have a higher predictive performance for D-gene assignment in mutated and unmutated sequences than four other publicly available programs. An online version 1.0 of JointML is available at http://www.cbs.dtu.dk/services/VDJsolver. PMID:17005006

  20. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining the consistent maximum likelihood estimates of the parameters of a mixture of normal distributions. In addition, methods for locating a local maximum of the log-likelihood function, including Newton's method, the method of scoring, and modifications of these procedures, are discussed.
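
    The classic finite-mixture EM iteration that this line of work concerns is compact enough to sketch; below is a two-component normal mixture fit on toy data (an illustration of the general procedure, not the paper's exact scheme):

    ```python
    import numpy as np

    def em_two_normals(x, iters=200):
        w, m1, m2, s1, s2 = 0.5, x.min(), x.max(), x.std(), x.std()
        for _ in range(iters):
            # E-step: responsibilities of component 1
            p1 = w * np.exp(-0.5 * ((x - m1) / s1) ** 2) / s1
            p2 = (1 - w) * np.exp(-0.5 * ((x - m2) / s2) ** 2) / s2
            r = p1 / (p1 + p2)
            # M-step: weighted ML updates
            w = r.mean()
            m1, m2 = np.average(x, weights=r), np.average(x, weights=1 - r)
            s1 = np.sqrt(np.average((x - m1) ** 2, weights=r))
            s2 = np.sqrt(np.average((x - m2) ** 2, weights=1 - r))
        return w, m1, m2, s1, s2

    rng = np.random.default_rng(6)
    x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 700)])
    print([round(v, 3) for v in em_two_normals(x)])
    ```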

  1. The Maximum Likelihood Solution for Inclination-only Data

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2006-12-01

    The arithmetic means of inclination-only data are known to introduce a shallowing bias. Several methods have been proposed to estimate unbiased means of the inclination along with measures of the precision. Most of the inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all these methods require various assumptions and approximations that are inappropriate for many data sets. For some steep and dispersed data sets, the estimates provided by these methods are significantly displaced from the peak of the likelihood function to systematically shallower inclinations. The problem in locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest. This is because some elements of the log-likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study we succeeded in analytically cancelling the exponential elements from the likelihood function, and we are now able to calculate its value for any location in the parameter space and for any inclination-only data set, with full accuracy. Furthermore, we can now calculate the partial derivatives of the likelihood function with the desired accuracy. Locating the maximum likelihood without the assumptions required by previous methods is now straightforward. The information to separate the mean inclination from the precision parameter will be lost for very steep and dispersed data sets. It is worth noting that the likelihood function always has a maximum value. However, for some dispersed and steep data sets with few samples, the likelihood function takes its highest value on the boundary of the parameter space, i.e. at inclinations of +/- 90 degrees, but with a relatively well defined dispersion. Our simulations indicate that this occurs quite frequently for certain data sets, and relatively small perturbations in the data will drive the maxima to the boundary. We interpret this to indicate that, for such data sets, the information needed to separate the mean inclination and the precision parameter is permanently lost. To assess the reliability and accuracy of our method we generated a large number of random Fisher-distributed data sets and used seven methods to estimate the mean inclination and precision parameter. These comparisons are described by Levi and Arason at the 2006 AGU Fall meeting. The results of the various methods are very favourable to our new robust maximum likelihood method, which, on average, is the most reliable, and its mean inclination estimates are the least biased toward shallow values. Further information on our inclination-only analysis can be obtained from: http://www.vedur.is/~arason/paleomag
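
    The numerical-stability point translates directly into code: with the standard marginal Fisher density f(I) = (k/(2 sinh k)) cos I exp(k sin I sin I0) I_0(k cos I cos I0) (an assumption here, since the abstract does not restate it), the exponentially scaled Bessel function i0e lets every exponential term cancel analytically. A sketch with toy inclinations:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import i0e  # i0e(z) = exp(-|z|) * I_0(z)

    def neg_loglik(params, inc):
        i0m, kappa = params                    # mean inclination (rad), precision
        z = kappa * np.cos(inc) * np.cos(i0m)  # Bessel argument, z >= 0 here
        ll = (np.log(kappa) - np.log1p(-np.exp(-2 * kappa))  # log(k/(2 sinh k)) + k
              + np.log(np.cos(inc))
              + kappa * (np.cos(inc - i0m) - 1.0)  # all exponentials cancelled
              + np.log(i0e(z)))
        return -np.sum(ll)

    inc = np.deg2rad([55.0, 61, 58, 64, 49, 70, 57, 62, 53, 66])  # toy data
    fit = minimize(neg_loglik, x0=(np.mean(inc), 10.0), args=(inc,),
                   bounds=[(-np.pi / 2, np.pi / 2), (1e-3, 1e4)])
    print("mean inclination: %.1f deg, kappa: %.1f"
          % (np.rad2deg(fit.x[0]), fit.x[1]))
    ```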

  2. A Parameter Estimation Scheme for Multiscale Kalman Smoother (MKS) Algorithm Used in Precipitation Data Fusion

    NASA Technical Reports Server (NTRS)

    Wang, Shugong; Liang, Xu

    2013-01-01

    A new approach is presented in this paper to effectively obtain parameter estimates for the Multiscale Kalman Smoother (MKS) algorithm. This new approach has demonstrated promising potential for deriving better data products based on data of different spatial scales and precisions. Our new approach employs a multi-objective (MO) parameter estimation scheme (called the MO scheme hereafter), rather than the conventional maximum likelihood scheme (called the ML scheme), to estimate the MKS parameters. Unlike the ML scheme, the MO scheme is not simply built on strict statistical assumptions related to prediction errors and observation errors; rather, it directly associates the fused data of multiple scales with multiple objective functions in searching for the best parameter estimates for MKS through optimization. In the MO scheme, objective functions are defined to facilitate consistency among the fused data at multiple scales and the input data at their original scales in terms of spatial patterns and magnitudes. The new approach is evaluated through a Monte Carlo experiment and a series of comparison analyses using synthetic precipitation data. Our results show that the MKS-fused precipitation performs better with the MO scheme than with the ML scheme. In particular, improvements over the ML scheme are significant for the fused precipitation at fine spatial resolutions. This is mainly due to having more criteria and constraints involved in the MO scheme than in the ML scheme. The weakness of the original ML scheme, which blindly puts more weight onto the data associated with finer resolutions, is overcome in our new approach.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wahl, Daniel E.; Yocky, David A.; Jakowatz, Jr., Charles V.

    In previous research, two-pass repeat-geometry synthetic aperture radar (SAR) coherent change detection (CCD) predominantly utilized the sample degree of coherence as a measure of the temporal change occurring between two complex-valued image collects. Previous coherence-based CCD approaches tend to show temporal change when there is none in areas of the image that have a low clutter-to-noise power ratio. Instead of employing the sample coherence magnitude as a change metric, in this paper, we derive a new maximum-likelihood (ML) temporal change estimate, the complex reflectance change detection (CRCD) metric, to be used for SAR coherent temporal change detection. The new CRCD estimator is a surprisingly simple expression, easy to implement, and optimal in the ML sense. As a result, this new estimate produces improved results in the coherent pair collects that we have tested.
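
    The record does not reproduce the CRCD expression, so the sketch below shows only the classical baseline it is compared against: the windowed sample coherence magnitude |sum f g*| / sqrt(sum |f|² sum |g|²) of two co-registered complex images:

    ```python
    import numpy as np

    def sample_coherence(f, g, w=5):
        """Sliding-window sample coherence magnitude of two complex images."""
        pad = w // 2
        fp = np.pad(f, pad, mode="reflect")
        gp = np.pad(g, pad, mode="reflect")
        out = np.zeros(f.shape)
        for i in range(f.shape[0]):
            for j in range(f.shape[1]):
                fw, gw = fp[i:i + w, j:j + w], gp[i:i + w, j:j + w]
                num = np.abs(np.sum(fw * np.conj(gw)))
                den = np.sqrt(np.sum(np.abs(fw)**2) * np.sum(np.abs(gw)**2))
                out[i, j] = num / den
        return out

    rng = np.random.default_rng(7)
    f = rng.normal(size=(32, 32)) + 1j * rng.normal(size=(32, 32))
    g = 0.9 * f + 0.1 * (rng.normal(size=(32, 32)) + 1j * rng.normal(size=(32, 32)))
    print("mean coherence:", float(sample_coherence(f, g).mean()))
    ```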

  4. Investigation of dynamic SPECT measurements of the arterial input function in human subjects using simulation, phantom and human studies

    NASA Astrophysics Data System (ADS)

    Winant, Celeste D.; Aparici, Carina Mari; Zelnik, Yuval R.; Reutter, Bryan W.; Sitek, Arkadiusz; Bacharach, Stephen L.; Gullberg, Grant T.

    2012-01-01

    Computer simulations, a phantom study and a human study were performed to determine whether a slowly rotating single-photon computed emission tomography (SPECT) system could provide accurate arterial input functions for quantification of myocardial perfusion imaging using kinetic models. The errors induced by data inconsistency associated with imaging with slow camera rotation during tracer injection were evaluated with an approach called SPECT/P (dynamic SPECT from positron emission tomography (PET)) and SPECT/D (dynamic SPECT from database of SPECT phantom projections). SPECT/P simulated SPECT-like dynamic projections using reprojections of reconstructed dynamic 94Tc-methoxyisobutylisonitrile (94Tc-MIBI) PET images acquired in three human subjects (1 min infusion). This approach was used to evaluate the accuracy of estimating myocardial wash-in rate parameters K1 for rotation speeds providing 180° of projection data every 27 or 54 s. Blood input and myocardium tissue time-activity curves (TACs) were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K1. For the second method (SPECT/D), an anthropomorphic cardiac torso phantom was used to create real SPECT dynamic projection data of a tracer distribution derived from 94Tc-MIBI PET scans in the blood pool, myocardium, liver and background. This method introduced attenuation, collimation and scatter into the modeling of dynamic SPECT projections. Both approaches were used to evaluate the accuracy of estimating myocardial wash-in parameters for rotation speeds providing 180° of projection data every 27 and 54 s. Dynamic cardiac SPECT was also performed in a human subject at rest using a hybrid SPECT/CT scanner. Dynamic measurements of 99mTc-tetrofosmin in the myocardium were obtained using an infusion time of 2 min. Blood input, myocardium tissue and liver TACs were estimated using the same spatiotemporal splines. The spatiotemporal maximum-likelihood expectation-maximization (4D ML-EM) reconstructions gave more accurate reconstructions than did standard frame-by-frame static 3D ML-EM reconstructions. The SPECT/P results showed that 4D ML-EM reconstruction gave higher and more accurate estimates of K1 than did 3D ML-EM, yielding anywhere from a 44% underestimation to 24% overestimation for the three patients. The SPECT/D results showed that 4D ML-EM reconstruction gave an overestimation of 28% and 3D ML-EM gave an underestimation of 1% for K1. For the patient study the 4D ML-EM reconstruction provided continuous images as a function of time of the concentration in both ventricular cavities and myocardium during the 2 min infusion. It is demonstrated that a 2 min infusion with a two-headed SPECT system rotating 180° every 54 s can produce measurements of blood pool and myocardial TACs, though the SPECT simulation studies showed that one must sample at least every 30 s to capture a 1 min infusion input function.

  5. Estimating Function Approaches for Spatial Point Processes

    NASA Astrophysics Data System (ADS)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theories, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives that balance the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. The original second-order quasi-likelihood is barely feasible, however, due to the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter, H. Third, we studied the quasi-likelihood-type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to more general setups than the original quasi-likelihood method.

  6. Molecular Phylogenetics and Systematics of the Bivalve Family Ostreidae Based on rRNA Sequence-Structure Models and Multilocus Species Tree

    PubMed Central

    Salvi, Daniele; Macali, Armando; Mariottini, Paolo

    2014-01-01

    The bivalve family Ostreidae has a worldwide distribution and includes species of high economic importance. Phylogenetics and systematics of oysters based on morphology have proved difficult because of their high phenotypic plasticity. In this study we explore the phylogenetic information of the DNA sequence and secondary structure of the nuclear, fast-evolving ITS2 rRNA and the mitochondrial 16S rRNA genes from the Ostreidae, and we implement a multi-locus framework based on four loci for oyster phylogenetics and systematics. Sequence-structure rRNA models aid sequence alignment and improved the accuracy and nodal support of phylogenetic trees. In agreement with previous molecular studies, our phylogenetic results indicate that none of the currently recognized subfamilies, Crassostreinae, Ostreinae, and Lophinae, is monophyletic. Single gene trees based on maximum likelihood (ML) and Bayesian (BA) methods and on sequence-structure ML were congruent with multilocus trees based on concatenated (ML and BA) and coalescent-based (BA) approaches and consistently supported three main clades: (i) Crassostrea, (ii) Saccostrea, and (iii) an Ostreinae-Lophinae lineage. Therefore, the subfamily Crassostreinae (including Crassostrea), Saccostreinae subfam. nov. (including Saccostrea and tentatively Striostrea) and Ostreinae (including Ostreinae and Lophinae taxa) are recognized. Based on phylogenetic and biogeographical evidence, the Asian species of Crassostrea from the Pacific Ocean are assigned to Magallana gen. nov., whereas an integrative taxonomic revision is required for the genera Ostrea and Dendostrea. This study points out the suitability of the ITS2 marker for DNA barcoding of oysters and the relevance of using sequence-structure rRNA models and features of ITS2 folding in molecular phylogenetics and taxonomy. The multilocus approach allowed inferring a robust phylogeny of Ostreidae, providing a broad molecular perspective on their systematics. PMID:25250663

  7. Molecular phylogenetics and systematics of the bivalve family Ostreidae based on rRNA sequence-structure models and multilocus species tree.

    PubMed

    Salvi, Daniele; Macali, Armando; Mariottini, Paolo

    2014-01-01

    The bivalve family Ostreidae has a worldwide distribution and includes species of high economic importance. Phylogenetics and systematics of oysters based on morphology have proved difficult because of their high phenotypic plasticity. In this study we explore the phylogenetic information of the DNA sequence and secondary structure of the nuclear, fast-evolving ITS2 rRNA and the mitochondrial 16S rRNA genes from the Ostreidae, and we implement a multi-locus framework based on four loci for oyster phylogenetics and systematics. Sequence-structure rRNA models aid sequence alignment and improved the accuracy and nodal support of phylogenetic trees. In agreement with previous molecular studies, our phylogenetic results indicate that none of the currently recognized subfamilies, Crassostreinae, Ostreinae, and Lophinae, is monophyletic. Single gene trees based on maximum likelihood (ML) and Bayesian (BA) methods and on sequence-structure ML were congruent with multilocus trees based on concatenated (ML and BA) and coalescent-based (BA) approaches and consistently supported three main clades: (i) Crassostrea, (ii) Saccostrea, and (iii) an Ostreinae-Lophinae lineage. Therefore, the subfamily Crassostreinae (including Crassostrea), Saccostreinae subfam. nov. (including Saccostrea and tentatively Striostrea) and Ostreinae (including Ostreinae and Lophinae taxa) are recognized. Based on phylogenetic and biogeographical evidence, the Asian species of Crassostrea from the Pacific Ocean are assigned to Magallana gen. nov., whereas an integrative taxonomic revision is required for the genera Ostrea and Dendostrea. This study points out the suitability of the ITS2 marker for DNA barcoding of oysters and the relevance of using sequence-structure rRNA models and features of ITS2 folding in molecular phylogenetics and taxonomy. The multilocus approach allowed inferring a robust phylogeny of Ostreidae, providing a broad molecular perspective on their systematics.

  8. New prior sampling methods for nested sampling - Development and testing

    NASA Astrophysics Data System (ADS)

    Stokes, Barrie; Tuyl, Frank; Hudson, Irene

    2017-06-01

    Nested Sampling is a powerful algorithm for fitting models to data in the Bayesian setting, introduced by Skilling [1]. The nested sampling algorithm proceeds by carrying out a series of compressive steps, involving successively nested iso-likelihood boundaries, starting with the full prior distribution of the problem parameters. The "central problem" of nested sampling is to draw at each step a sample from the prior distribution whose likelihood is greater than the current likelihood threshold, i.e., a sample falling inside the current likelihood-restricted region. For both flat and informative priors this ultimately requires uniform sampling restricted to the likelihood-restricted region. We present two new methods of carrying out this sampling step, and illustrate their use with the lighthouse problem [2], a bivariate likelihood used by Gregory [3] and a trivariate Gaussian mixture likelihood. All the algorithm development and testing reported here has been done with Mathematica® [4].
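
    The "central problem" the abstract describes can be shown in a few lines: draw from the prior subject to L(theta) > L*, here by naive rejection from a flat prior on the unit square (real nested-sampling codes use smarter constrained samplers, and the Gaussian likelihood is a toy):

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def loglike(theta):  # toy bivariate Gaussian likelihood
        return -0.5 * np.sum((theta - 0.5) ** 2) / 0.01

    def sample_restricted_prior(logl_min, max_tries=100_000):
        for _ in range(max_tries):
            theta = rng.uniform(0.0, 1.0, size=2)  # draw from the flat prior
            if loglike(theta) > logl_min:          # keep it only if it lies
                return theta                       # inside the likelihood contour
        raise RuntimeError("rejection stalled; restricted region too small")

    print(sample_restricted_prior(logl_min=-2.0))
    ```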

  9. Synthesizing Regression Results: A Factored Likelihood Method

    ERIC Educational Resources Information Center

    Wu, Meng-Jia; Becker, Betsy Jane

    2013-01-01

    Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported…

  10. Soft context clustering for F0 modeling in HMM-based speech synthesis

    NASA Astrophysics Data System (ADS)

    Khorram, Soheil; Sameti, Hossein; King, Simon

    2015-12-01

    This paper proposes the use of a new binary decision tree, which we call a soft decision tree, to improve generalization performance compared to the conventional 'hard' decision tree method that is used to cluster context-dependent model parameters in statistical parametric speech synthesis. We apply the method to improve the modeling of fundamental frequency, which is an important factor in synthesizing natural-sounding high-quality speech. Conventionally, hard decision tree-clustered hidden Markov models (HMMs) are used, in which each model parameter is assigned to a single leaf node. However, this 'divide-and-conquer' approach leads to data sparsity, with the consequence that it suffers from poor generalization, meaning that it is unable to accurately predict parameters for models of unseen contexts: the hard decision tree is a weak function approximator. To alleviate this, we propose the soft decision tree, which is a binary decision tree with soft decisions at the internal nodes. In this soft clustering method, internal nodes select both their children with certain membership degrees; therefore, each node can be viewed as a fuzzy set with a context-dependent membership function. The soft decision tree improves model generalization and provides a superior function approximator because it is able to assign each context to several overlapped leaves. In order to use such a soft decision tree to predict the parameters of the HMM output probability distribution, we derive the smoothest (maximum entropy) distribution which captures all partial first-order moments and a global second-order moment of the training samples. Employing such a soft decision tree architecture with maximum entropy distributions, a novel speech synthesis system is trained using maximum likelihood (ML) parameter re-estimation and synthesis is achieved via maximum output probability parameter generation. In addition, a soft decision tree construction algorithm optimizing a log-likelihood measure is developed. Both subjective and objective evaluations were conducted and indicate a considerable improvement over the conventional method.
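
    The routing idea is easy to sketch: each internal node sends an input to both children with sigmoid membership degrees, and a leaf's weight is the product of the gate probabilities along its path. A depth-2 toy with hand-picked (untrained) parameters, standing in only for the gating mechanism, not the paper's HMM-based synthesis system:

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    nodes = {"root": (1.5, -0.2), "left": (2.0, 0.5), "right": (-1.0, 0.3)}
    leaves = {"ll": 0.0, "lr": 1.0, "rl": 2.0, "rr": 3.0}  # scalar leaf values

    def soft_tree_predict(x):
        g_root = sigmoid(nodes["root"][0] * x + nodes["root"][1])   # P(right|root)
        g_left = sigmoid(nodes["left"][0] * x + nodes["left"][1])
        g_right = sigmoid(nodes["right"][0] * x + nodes["right"][1])
        # each leaf's membership is the product of gates along its path
        m = {"ll": (1 - g_root) * (1 - g_left), "lr": (1 - g_root) * g_left,
             "rl": g_root * (1 - g_right),      "rr": g_root * g_right}
        return sum(m[k] * leaves[k] for k in leaves)

    print([round(soft_tree_predict(x), 3) for x in (-2.0, 0.0, 2.0)])
    ```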

  11. Shallow microearthquakes near Chongqing, China triggered by the Rayleigh waves of the 2015 M7.8 Gorkha, Nepal earthquake

    NASA Astrophysics Data System (ADS)

    Han, Libo; Peng, Zhigang; Johnson, Christopher W.; Pollitz, Fred F.; Li, Lu; Wang, Baoshan; Wu, Jing; Li, Qiang; Wei, Hongmei

    2017-12-01

    We present a case of remotely triggered seismicity in Southwest China following the 2015/04/25 M7.8 Gorkha, Nepal earthquake. A local magnitude ML3.8 event occurred near the Qijiang district south of Chongqing city approximately 12 min after the Gorkha mainshock. Within 30 km of this ML3.8 event there have been 62 earthquakes since 2009 and only 7 ML > 3 events, which corresponds to a likelihood of 0.3% of an ML > 3 event occurring on any given day by random chance. This observation motivates us to investigate the relationship between the ML3.8 event and the Gorkha mainshock. The ML3.8 event was listed in the China Earthquake Networks Center (CENC) catalog and occurred at shallow depth (∼3 km). By examining high-frequency waveforms, we identify a smaller local event (∼ML 2.5) ∼15 s before the ML3.8 event. Both events occurred during the first two cycles of the Rayleigh waves from the Gorkha mainshock. We perform seismic event detection based on an envelope function and waveform matching, using the two events as templates. Both analyses found a statistically significant rate change during the mainshock, suggesting that the events were indeed dynamically triggered by the Rayleigh waves. Both events occurred during the peak normal and dilatational stress changes (∼10-30 kPa), consistent with observations of dynamic triggering in other geothermal/volcanic regions. Although other recent events (e.g., the 2011 M9.1 Tohoku-Oki earthquake) produced similar peak ground velocities, the 2015 Gorkha mainshock was the only event that produced clear dynamic triggering in this region. The triggering site is close to hydraulic fracturing wells that began production in 2013-2014. Hence we suspect that fluid injections may increase the region's susceptibility to remote dynamic triggering.
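
    The quoted 0.3% is simple rate arithmetic; a back-of-envelope check, assuming the catalog window runs from 2009 through the 2015 mainshock (roughly 2300 days, an assumption on our part):

    ```python
    days = 2300   # assumed catalog span in days (2009 to the 2015 mainshock)
    n_ml3 = 7     # ML > 3 events near the site over that span
    print(f"P(ML > 3 on a given day) ~ {n_ml3 / days:.3%}")  # ~0.3%
    ```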

  12. Shallow microearthquakes near Chongqing, China triggered by the Rayleigh waves of the 2015 M7.8 Gorkha, Nepal earthquake

    USGS Publications Warehouse

    Han, Libo; Peng, Zhigang; Johnson, Christopher W.; Pollitz, Fred; Li, Lu; Wang, Baoshan; Wu, Jing; Li, Qiang; Wei, Hongmei

    2017-01-01

    We present a case of remotely triggered seismicity in Southwest China following the 2015/04/25 M7.8 Gorkha, Nepal earthquake. A local magnitude ML3.8 event occurred near the Qijiang district south of Chongqing city approximately 12 min after the Gorkha mainshock. Within 30 km of this ML3.8 event there have been 62 earthquakes since 2009 and only 7 ML > 3 events, which corresponds to a likelihood of 0.3% of an ML > 3 event occurring on any given day by random chance. This observation motivates us to investigate the relationship between the ML3.8 event and the Gorkha mainshock. The ML3.8 event is listed in the China Earthquake Networks Center (CENC) catalog and occurred at shallow depth (∼3 km). By examining high-frequency waveforms, we identify a smaller local event (∼ML 2.5) ∼15 s before the ML3.8 event. Both events occurred during the first two cycles of the Rayleigh waves from the Gorkha mainshock. We perform seismic event detection based on an envelope function and waveform matching, using the two events as templates. Both analyses found a statistically significant rate change during the mainshock, suggesting that the events were indeed dynamically triggered by the Rayleigh waves. Both events occurred during the peak normal and dilatational stress changes (∼10–30 kPa), consistent with observations of dynamic triggering in other geothermal/volcanic regions. Although other recent events (e.g., the 2011 M9.1 Tohoku-Oki earthquake) produced similar peak ground velocities, the 2015 Gorkha mainshock was the only event that produced clear dynamic triggering in this region. The triggering site is close to hydraulic fracturing wells that began production in 2013–2014. Hence we suspect that fluid injections may increase the region’s susceptibility to remote dynamic triggering.

  13. A Hybrid Semi-supervised Classification Scheme for Mining Multisource Geospatial Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vatsavai, Raju; Bhaduri, Budhendra L

    2011-01-01

    Supervised learning methods such as Maximum Likelihood (ML) are often used in land cover (thematic) classification of remote sensing imagery. The ML classifier relies exclusively on the spectral characteristics of thematic classes, whose statistical distributions (class conditional probability densities) often overlap. The spectral response distributions of thematic classes depend on many factors including elevation, soil types, and ecological zones. A second problem with statistical classifiers is the requirement of a large number of accurate training samples (10 to 30 per feature dimension), which are often costly and time consuming to acquire over large geographic regions. With the increasing availability of geospatial databases, it is possible to exploit the knowledge derived from these ancillary datasets to improve classification accuracies even when the class distributions are highly overlapping. Likewise, newer semi-supervised techniques can be adopted to improve the parameter estimates of the statistical model by utilizing a large number of easily available unlabeled training samples. Unfortunately, there is no convenient multivariate statistical model that can be employed for multisource geospatial databases. In this paper we present a hybrid semi-supervised learning algorithm that effectively exploits freely available unlabeled training samples from multispectral remote sensing images and also incorporates ancillary geospatial databases. We have conducted several experiments on real datasets, and our new hybrid approach shows a 25 to 35% improvement in overall classification accuracy over conventional classification schemes.
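
    A hedged sketch of the semi-supervised ingredient (not the authors' hybrid algorithm, which also folds in ancillary geospatial knowledge): Gaussian class parameters initialized from a few labeled pixels, then refined by EM over abundant unlabeled pixels:

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal as mvn

    rng = np.random.default_rng(9)
    # two spectral classes in 2 bands: 10 labeled + 500 unlabeled samples each
    centers = [np.array([0.0, 0.0]), np.array([1.5, 1.0])]
    Xl = np.vstack([rng.normal(c, 0.8, (10, 2)) for c in centers])
    yl = np.repeat([0, 1], 10)
    Xu = np.vstack([rng.normal(c, 0.8, (500, 2)) for c in centers])

    pri = np.array([0.5, 0.5])                     # initialize from labeled data
    mu = [Xl[yl == k].mean(0) for k in (0, 1)]
    cov = [np.cov(Xl[yl == k].T) + 1e-6 * np.eye(2) for k in (0, 1)]

    for _ in range(30):                            # EM over unlabeled samples
        dens = np.column_stack([pri[k] * mvn.pdf(Xu, mu[k], cov[k]) for k in (0, 1)])
        r = dens / dens.sum(1, keepdims=True)      # E-step responsibilities
        for k in (0, 1):                           # M-step weighted ML updates
            w = r[:, k]
            pri[k] = w.mean()
            mu[k] = (w[:, None] * Xu).sum(0) / w.sum()
            d = Xu - mu[k]
            cov[k] = (w[:, None] * d).T @ d / w.sum() + 1e-6 * np.eye(2)

    print("refined class means:", np.round(mu, 2))
    ```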

  14. Socioeconomic inequalities in childhood exposure to secondhand smoke before and after smoke-free legislation in three UK countries

    PubMed Central

    Moore, Graham F.; Currie, Dorothy; Gilmore, Gillian; Holliday, Jo C.; Moore, Laurence

    2012-01-01

    Background Secondhand smoke (SHS) exposure is higher among lower socioeconomic status (SES) children. Legislation restricting smoking in public places has been associated with reduced childhood SHS exposure and increased smoke-free homes. This paper examines socioeconomic patterning in these changes. Methods Repeated cross-sectional survey of 10 867 schoolchildren in 304 primary schools in Scotland, Wales and Northern Ireland. Children provided saliva for cotinine assay, completing questionnaires before and 12 months after legislation. Results SHS exposure was highest, and private smoking restrictions least frequently reported, among lower SES children. The proportion of saliva samples containing <0.1 ng/ml (i.e. undetectable) cotinine increased from 31.0 to 41.0%. Although there was no evidence, across the whole SES spectrum, of displacement of smoking into the home or of increased SHS exposure, socioeconomic inequality in the likelihood of samples containing detectable levels of cotinine increased. Among children from the poorest families, 96.9% of post-legislation samples contained detectable cotinine, compared with 38.2% among the most affluent. Socioeconomic gradients at higher exposure levels remained unchanged. Among children from the poorest families, one in three samples contained >3 ng/ml cotinine. Smoking restrictions in homes and cars increased, although socioeconomic patterning remained. Conclusions Urgent action is needed to reduce inequalities in SHS exposure. Such action should include emphasis on reducing smoking in cars and homes. PMID:22448041

  15. Hydrologic consistency as a basis for assessing complexity of monthly water balance models for the continental United States

    NASA Astrophysics Data System (ADS)

    Martinez, Guillermo F.; Gupta, Hoshin V.

    2011-12-01

    Methods to select parsimonious and hydrologically consistent model structures are useful for evaluating dominance of hydrologic processes and representativeness of data. While information criteria (appropriately constrained to obey underlying statistical assumptions) can provide a basis for evaluating appropriate model complexity, it is not sufficient to rely upon the principle of maximum likelihood (ML) alone. We suggest that one must also call upon a "principle of hydrologic consistency," meaning that selected ML structures and parameter estimates must be constrained (as well as possible) to reproduce desired hydrological characteristics of the processes under investigation. This argument is demonstrated in the context of evaluating the suitability of candidate model structures for lumped water balance modeling across the continental United States, using data from 307 snow-free catchments. The models are constrained to satisfy several tests of hydrologic consistency, a flow space transformation is used to ensure better consistency with underlying statistical assumptions, and information criteria are used to evaluate model complexity relative to the data. The results clearly demonstrate that the principle of consistency provides a sensible basis for guiding selection of model structures and indicate strong spatial persistence of certain model structures across the continental United States. Further work to untangle reasons for model structure predominance can help to relate conceptual model structures to physical characteristics of the catchments, facilitating the task of prediction in ungaged basins.

  16. A method for inferring the rate of evolution of homologous characters that can potentially improve phylogenetic inference, resolve deep divergence and correct systematic biases.

    PubMed

    Cummins, Carla A; McInerney, James O

    2011-12-01

    Current phylogenetic methods attempt to account for evolutionary rate variation across characters in a matrix. This is generally achieved by the use of sophisticated evolutionary models, combined with dense sampling of large numbers of characters. However, systematic biases and superimposed substitutions make this task very difficult. Model adequacy can sometimes be achieved at the cost of adding large numbers of free parameters, with each parameter being optimized according to some criterion, resulting in increased computation times and large variances in the model estimates. In this study, we develop a simple approach that estimates the relative evolutionary rate of each homologous character. The method that we describe uses the similarity between characters as a proxy for evolutionary rate. In this article, we work on the premise that if the character-state distribution of a homologous character is similar to many other characters, then this character is likely to be relatively slowly evolving. If the character-state distribution of a homologous character is not similar to many or any of the rest of the characters in a data set, then it is likely to be the result of rapid evolution. We show that in some test cases, at least, the premise can hold and the inferences are robust. Importantly, the method does not use a "starting tree" to make the inference and therefore is tree independent. We demonstrate that this approach can work as well as a maximum likelihood (ML) approach, though the ML method needs to have a known phylogeny, or at least a very good estimate of that phylogeny. We then demonstrate some uses for this method of analysis, including the improvement in phylogeny reconstruction for both deep-level and recent relationships and overcoming systematic biases such as base composition bias. Furthermore, we compare this approach to two well-established methods for reweighting or removing characters. These other methods are tree-based and we show that they can be systematically biased. We feel this method can be useful for phylogeny reconstruction, understanding evolutionary rate variation, and for understanding selection variation on different characters.
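
    The stated premise lends itself to a direct sketch: score each alignment column by its average similarity to all other columns, with low scorers flagged as candidate fast-evolving characters. The similarity measure below (adjusted Rand index between the taxon partitions two columns induce) is our assumption; the paper does not prescribe this exact metric:

    ```python
    import numpy as np
    from sklearn.metrics import adjusted_rand_score

    aln = np.array([list(s) for s in [
        "ACGTACGTAC",
        "ACGTACGTTC",
        "ACGAACGTAC",
        "TCGAACGATC",
    ]])  # toy alignment: 4 taxa x 10 characters

    n_chars = aln.shape[1]
    scores = np.zeros(n_chars)
    for i in range(n_chars):
        scores[i] = np.mean([adjusted_rand_score(aln[:, i], aln[:, j])
                             for j in range(n_chars) if j != i])

    # lower score = less like the rest of the matrix = inferred faster-evolving
    print(np.round(scores, 2))
    ```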

  17. Chemometric-assisted spectrophotometric methods and high performance liquid chromatography for simultaneous determination of seven β-blockers in their pharmaceutical products: A comparative study

    NASA Astrophysics Data System (ADS)

    Abdel Hameed, Eman A.; Abdel Salam, Randa A.; Hadad, Ghada M.

    2015-04-01

    Chemometric-assisted spectrophotometric methods and high performance liquid chromatography (HPLC) were developed for the simultaneous determination of the seven most commonly prescribed β-blockers (atenolol, sotalol, metoprolol, bisoprolol, propranolol, carvedilol and nebivolol). Principal component regression (PCR), partial least squares (PLS) and PLS with prior wavelength selection by a genetic algorithm (GA-PLS) were used for chemometric analysis of the spectral data of these drugs. The compositions of the mixtures used in the calibration set were varied to cover the linearity ranges 0.7-10 μg ml⁻¹ for AT, 1-15 μg ml⁻¹ for ST, 1-15 μg ml⁻¹ for MT, 0.3-5 μg ml⁻¹ for BS, 0.1-3 μg ml⁻¹ for PR, 0.1-3 μg ml⁻¹ for CV and 0.7-5 μg ml⁻¹ for NB. The analytical performances of these chemometric methods were characterized by relative prediction errors and were compared with each other. GA-PLS showed superiority over the other applied multivariate methods due to the wavelength selection. A new gradient HPLC method was developed using statistical experimental design. Optimum separation conditions were determined with the aid of a central composite design. The developed HPLC method was found to be linear in the range of 0.2-20 μg ml⁻¹ for AT, 0.2-20 μg ml⁻¹ for ST, 0.1-15 μg ml⁻¹ for MT, 0.1-15 μg ml⁻¹ for BS, 0.1-13 μg ml⁻¹ for PR, 0.1-13 μg ml⁻¹ for CV and 0.4-20 μg ml⁻¹ for NB. No significant difference was found between the results of the proposed GA-PLS and HPLC methods with respect to accuracy and precision. The proposed analytical methods did not show any interference from the excipients when applied to pharmaceutical products.

  18. Density-based empirical likelihood procedures for testing symmetry of data distributions and K-sample comparisons.

    PubMed

    Vexler, Albert; Tanajian, Hovig; Hutson, Alan D

    In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
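
    Method 1, the classical Monte Carlo p-value evaluation, can be sketched as follows (Python rather than Stata, purely for illustration; simulate_null_stat is a hypothetical callable returning the test statistic for one dataset generated under the null):

      import numpy as np

      def mc_p_value(observed_stat, simulate_null_stat, n_sim=9999):
          """Monte Carlo p-value: the proportion of null-simulated statistics at
          least as extreme as the observed one, with the +1 correction that
          keeps the resulting test valid."""
          null_stats = np.array([simulate_null_stat() for _ in range(n_sim)])
          return (1 + np.sum(null_stats >= observed_stat)) / (n_sim + 1)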

  19. Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2005-01-01

    Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…

  20. New method to incorporate Type B uncertainty into least-squares procedures in radionuclide metrology.

    PubMed

    Han, Jubong; Lee, K B; Lee, Jong-Man; Park, Tae Soon; Oh, J S; Oh, Pil-Jei

    2016-03-01

    We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which a conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional PDFs (probability density functions) that characterize the Type B uncertainties. The PDFs are taken to describe one's incomplete knowledge of correction factors, which are treated as nuisance parameters. We use the extended likelihood function to make point and interval estimates of parameters in basically the same way as with the conventional least-squares function. Since the nuisance parameters are not of interest and should be prevented from appearing in the final result, we eliminate them by using the profile likelihood. As an example, we present a case study for a linear regression analysis with a common component of Type B uncertainty. In this example we compare the analysis results obtained from our procedure with those from conventional methods. Copyright © 2015. Published by Elsevier Ltd.
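
    The construction can be sketched for an example of this kind, a straight-line fit with a common additive Type B component, as follows (Python; the data and uncertainty values are invented for illustration, and this is a minimal reading of the approach, not the paper's code). The extra quadratic term is the log of the PDF encoding knowledge of the nuisance correction factor c, and the profile function eliminates the nuisance parameters by minimization:

      import numpy as np
      from scipy.optimize import minimize

      # invented toy data: y = a + b*x plus a common additive correction c
      x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
      y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
      u = 0.1 * np.ones_like(y)   # Type A standard uncertainties of the y values
      u_b = 0.2                   # Type B standard uncertainty of the common offset c

      def neg_log_extended_likelihood(params):
          a, b, c = params
          resid = y - (a + b * x + c)
          # least-squares (likelihood) term plus the log-PDF penalty encoding
          # the Type B knowledge of the nuisance parameter c ~ N(0, u_b**2)
          return 0.5 * np.sum((resid / u) ** 2) + 0.5 * (c / u_b) ** 2

      def profile_nll_slope(b):
          """Profile likelihood for the slope: minimize over the nuisances a, c."""
          return minimize(lambda p: neg_log_extended_likelihood([p[0], b, p[1]]),
                          x0=[0.0, 0.0]).fun

      best = minimize(neg_log_extended_likelihood, x0=[0.0, 1.0, 0.0])
      a_hat, b_hat, c_hat = best.x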

  1. Diagnostic value of survivin for malignant pleural effusion: a clinical study and meta-analysis.

    PubMed

    Tian, Panwen; Shen, Yongchun; Wan, Chun; Yang, Ting; An, Jing; Yi, Qun; Chen, Lei; Wang, Tao; Wang, Ye; Wen, Fuqiang

    2014-01-01

    To investigate the diagnostic accuracy of survivin for malignant pleural effusion (MPE). Pleural effusion samples were collected from 40 MPE patients and 45 non-MPE patients. Pleural levels of survivin were measured by ELISA. A literature search was performed in PubMed and Embase to identify studies regarding the usefulness of survivin to diagnose MPE. Data were retrieved and the pooled sensitivity, specificity and other diagnostic indexes were calculated. The summary receiver operating characteristic (SROC) curve was used to determine the overall diagnostic accuracy. Pleural levels of survivin were higher in MPE patients than in non-MPE patients (844.17 ± 358.30 vs. 508.08 ± 169.58 pg/ml, P < 0.05). At a cut-off value of 683.2 pg/ml, the sensitivity and specificity were 57.50% and 88.89%, respectively. A total of six studies were included in the present meta-analysis; the overall diagnostic estimates were: sensitivity, 0.74 (95% CI: 0.59-0.85); specificity, 0.85 (95% CI: 0.79-0.89); positive likelihood ratio, 4.79 (95% CI: 3.48-6.61); negative likelihood ratio, 0.31 (95% CI: 0.19-0.50); and diagnostic odds ratio, 15.59 (95% CI: 7.69-31.61). The area under the SROC curve was 0.86 (95% CI: 0.82-0.89). Our study confirms that pleural survivin plays a role in the diagnosis of MPE. More large-scale studies should be performed to validate our findings.

  2. Three methods to construct predictive models using logistic regression and likelihood ratios to facilitate adjustment for pretest probability give similar results.

    PubMed

    Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les

    2008-01-01

    To compare three predictive models based on logistic regression to estimate adjusted likelihood ratios allowing for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models, and a statistical extension of methods and application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. The Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods in that it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes approach when it is important to adjust for dependency of test errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.
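
    One way to see how a fitted logistic model yields posttest probabilities adjusted for pretest probability is the intercept-shift device sketched below (Python with simulated data; this illustrates the general offset idea behind such models, not Albert's exact formulation, and all variable names are invented):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 2000
      d = rng.binomial(1, 0.3, n)                              # disease status
      t1 = rng.binomial(1, np.where(d == 1, 0.8, 0.2))         # test 1
      t2 = rng.binomial(1, np.where(d == 1, 0.6, 0.2) + 0.2 * t1)  # test 2, correlated with test 1

      X = sm.add_constant(np.column_stack([t1, t2, t1 * t2]))  # interaction captures dependence
      fit = sm.GLM(d, X, family=sm.families.Binomial()).fit()

      # posttest probability for a patient with a different pretest probability p0:
      # shift the fitted log-odds by the difference between the target pretest
      # log-odds and the log-odds of the training prevalence
      prev, p0 = d.mean(), 0.10
      lp = X @ fit.params
      posttest = 1 / (1 + np.exp(-(lp - np.log(prev / (1 - prev)) + np.log(p0 / (1 - p0)))))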

  3. Experimental investigation of extended Kalman Filter combined with carrier phase recovery for 16-QAM system

    NASA Astrophysics Data System (ADS)

    Shu, Tong; Li, Yan; Yu, Miao; Zhang, Yifan; Zhou, Honghang; Qiu, Jifang; Guo, Hongxiang; Hong, Xiaobin; Wu, Jian

    2018-02-01

    The performance of the extended Kalman filter combined with Viterbi-Viterbi phase estimation (VVPE-EKF) for joint phase noise mitigation and amplitude noise equalization is experimentally demonstrated. Experimental results show that, for 11.2 GBaud SP-16-QAM, the proposed VVPE-EKF achieves a 0.9 dB required-OSNR reduction at a bit error ratio (BER) of 3.8e-3 compared to VVPE alone, whereas maximum likelihood combined with VVPE (VVPE-ML) achieves only 0.3 dB. For the 28 GBaud SP-16-QAM signal, VVPE-EKF achieves a 3 dB required-OSNR reduction at BER=3.8e-3 (7% HD-FEC threshold) compared to VVPE, while VVPE-ML reduces the required OSNR by 1.7 dB. VVPE-EKF outperforms DD-EKF by 3.7 dB and 0.7 dB for the 11.2 GBaud and 28 GBaud systems, respectively.
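
    For reference, the basic fourth-power Viterbi-Viterbi estimator underlying the VVPE variants can be sketched as below (Python/NumPy; the block length and the QPSK-style fourth-power form are illustrative choices, and the 16-QAM schemes in the paper add constellation partitioning and cycle-slip handling that are omitted here):

      import numpy as np

      def viterbi_viterbi(r, block=64):
          """Blockwise fourth-power Viterbi-Viterbi carrier phase estimate
          (basic QPSK form; no phase unwrapping or cycle-slip handling)."""
          phase = np.zeros(len(r))
          for k in range(0, len(r), block):
              seg = r[k:k + block]
              # raising to the 4th power strips the 4-fold-symmetric data phase
              phase[k:k + block] = np.angle(np.sum(seg ** 4)) / 4.0
          return phase

      # usage: r_corrected = r * np.exp(-1j * viterbi_viterbi(r))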

  4. Five-year lung function observations and associations with a smoking ban among healthy miners at high altitude (4000 m).

    PubMed

    Vinnikov, Denis; Blanc, Paul D; Brimkulov, Nurlan; Redding-Jones, Rupert

    2013-12-01

    To assess the annual lung function decline associated with the reduction of secondhand smoke exposure in a high-altitude industrial workforce. We performed pulmonary function tests annually among 109 high-altitude gold-mine workers over 5 years of follow-up. The first 3 years carried a greater likelihood of exposure to secondhand smoke, before the initiation of extensive smoking restrictions that came into force in the last 2 years of observation. In repeated-measures modeling taking into account the time elapsed in relation to the smoking ban, there was a 115 ± 9 (standard error) mL per annum decline in lung function before the ban, but a 178 ± 20 (standard error) mL per annum increase afterward (P < 0.001, both slopes). Institution of a workplace smoking ban at high altitude may be beneficial in terms of lung function decline.

  5. Iterative Code-Aided ML Phase Estimation and Phase Ambiguity Resolution

    NASA Astrophysics Data System (ADS)

    Wymeersch, Henk; Moeneclaey, Marc

    2005-12-01

    As many coded systems operate at very low signal-to-noise ratios, synchronization becomes a very difficult task. In many cases, conventional algorithms will either require long training sequences or result in large BER degradations. By exploiting code properties, these problems can be avoided. In this contribution, we present several iterative maximum-likelihood (ML) algorithms for joint carrier phase estimation and ambiguity resolution. These algorithms operate on coded signals by accepting soft information from the MAP decoder. Issues of convergence and initialization are addressed in detail. Simulation results are presented for turbo codes, and are compared to performance results of conventional algorithms. Performance comparisons are carried out in terms of BER performance and mean square estimation error (MSEE). We show that the proposed algorithm reduces the MSEE and, more importantly, the BER degradation. Additionally, phase ambiguity resolution can be performed without resorting to a pilot sequence, thus improving the spectral efficiency.
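
    The soft-decision-aided ML phase update at the core of such iterative schemes reduces to a correlation of the received samples with the decoder's posterior-mean symbols; a one-line sketch (Python/NumPy, an illustrative generic form rather than the paper's exact algorithm):

      import numpy as np

      def code_aided_phase_estimate(r, soft_symbols):
          """ML-type phase estimate after one decoder iteration: correlate the
          received samples with the decoder's posterior-mean (soft) symbols
          and take the angle of the sum."""
          return np.angle(np.sum(r * np.conj(soft_symbols)))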

  6. An evaluation of portion size estimation aids: precision, ease of use and likelihood of future use.

    PubMed

    Faulkner, Gemma P; Livingstone, M Barbara E; Pourshahidi, L Kirsty; Spence, Michelle; Dean, Moira; O'Brien, Sinead; Gibney, Eileen R; Wallace, Julie Mw; McCaffrey, Tracy A; Kerr, Maeve A

    2016-09-01

    The present study aimed to evaluate the precision, ease of use and likelihood of future use of portion size estimation aids (PSEA). A range of PSEA were used to estimate the serving sizes of a range of commonly eaten foods and rated for ease of use and likelihood of future use. For each food, participants selected their preferred PSEA from a range of options including: quantities and measures; reference objects; measuring; and indicators on food packets. These PSEA were used to serve out various foods (e.g. liquid, amorphous, and composite dishes). Ease of use and likelihood of future use were noted. The foods were weighed to determine the precision of each PSEA. Participants were males and females aged 18-64 years (n=120). The quantities and measures were the most precise PSEA (lowest range of weights for estimated portion sizes). However, participants preferred household measures (e.g. a 200 ml disposable cup), deemed easy to use (median rating of 5), likely to be used again in the future (all scored either 4 or 5 on a scale from 1='not very likely' to 5='very likely to use again') and precise (narrow range of weights for estimated portion sizes). The majority indicated they would most likely use the PSEA when preparing a meal (94 %), particularly dinner (86 %), in the home (89 %; all P<0·001), for amorphous grain foods. Household measures may be precise, easy to use and acceptable aids for estimating the appropriate portion size of amorphous grain foods.

  7. Exploring the Influence of Topographic Correction and SWIR Spectral Information Inclusion on Burnt Scars Detection From High Resolution EO Imagery: A Case Study Using ASTER imagery

    NASA Astrophysics Data System (ADS)

    Said, Yahia A.; Petropoulos, George; Srivastava, Prashant K.

    2014-05-01

    Information on burned area estimates is of key importance in environmental and ecological studies as well as in fire management, including damage assessment and planning of post-fire recovery of affected areas. Earth Observation (EO) today provides the most efficient way of obtaining such information in a rapid, consistent and cost-effective manner. The present study explored the effect of topographic correction on burnt area delineation in conditions characteristic of a Mediterranean environment, using ASTER high resolution multispectral remotely sensed imagery. A further objective was to investigate the potential added value of including the shortwave infrared (SWIR) bands in improving the retrieval of burned area cartography from the ASTER data. In particular, the capabilities of the Maximum Likelihood (ML), Support Vector Machines (SVMs) and Object-based Image Analysis (OBIA) classification techniques were examined for the purposes of our study. The case study is a typical Mediterranean site in Greece where a fire event occurred during the summer of 2007 and for which post-fire ASTER imagery was acquired. Our results indicated that the combination of topographic correction (ortho-rectification) with the inclusion of the SWIR bands returned the most accurate burnt area mapping. In terms of image processing methods, OBIA showed the best results, with the least absolute difference from the validation polygon, and was found to be the most promising approach for burned area mapping, followed by SVM and ML. All in all, our study contributes to understanding the capability of high resolution imagery such as that from the ASTER sensor, and corroborates the usefulness of topographic correction in particular as an image processing step when delineating burnt areas from such data. It also provides further evidence that EO technology can offer an effective practical tool for assessing the extent of ecosystem destruction from wildfires, providing extremely useful information for co-ordinating efforts for the recovery of fire-affected ecosystems. Keywords: Remote Sensing, ASTER, Burned area mapping, Maximum Likelihood, Support Vector Machines, Object-based image analysis, Greece
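
    For readers unfamiliar with the ML classifier compared here, a minimal per-pixel Gaussian maximum-likelihood classification can be sketched as follows (Python; the class statistics would in practice be estimated from training polygons, and all names are assumptions for illustration):

      import numpy as np
      from scipy.stats import multivariate_normal

      def ml_classify(pixels, class_means, class_covs):
          """Classical maximum-likelihood classification: assign each pixel
          (a vector of band values) to the class whose multivariate Gaussian
          gives it the highest likelihood. pixels: (n, bands); class_means and
          class_covs: per-class statistics from training data."""
          scores = np.column_stack([
              multivariate_normal.logpdf(pixels, mean=m, cov=c)
              for m, c in zip(class_means, class_covs)
          ])
          return np.argmax(scores, axis=1)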

  8. Performance of maximum likelihood mixture models to estimate nursery habitat contributions to fish stocks: a case study on sea bream Sparus aurata

    PubMed Central

    Darnaude, Audrey M.

    2016-01-01

    Background Mixture models (MM) can be used to describe mixed stocks considering three sets of parameters: the total number of contributing sources, their chemical baseline signatures and their mixing proportions. When all nursery sources have been previously identified and sampled for juvenile fish to produce baseline nursery-signatures, mixing proportions are the only unknown set of parameters to be estimated from the mixed-stock data. Otherwise, the number of sources, as well as some or all nursery-signatures, may also need to be estimated from the mixed-stock data. Our goal was to assess bias and uncertainty in these MM parameters when estimated using unconditional maximum likelihood approaches (ML-MM), under several incomplete sampling and nursery-signature separation scenarios. Methods We used a comprehensive dataset containing otolith elemental signatures of 301 juvenile Sparus aurata, sampled in three contrasting years (2008, 2010, 2011) from four distinct nursery habitats (Mediterranean lagoons). Artificial nursery-source and mixed-stock datasets were produced considering five different sampling scenarios, in which 0-4 lagoons were excluded from the nursery-source dataset, and six nursery-signature separation scenarios, simulating data separated by 0.5, 1.5, 2.5, 3.5, 4.5 and 5.5 standard deviations among nursery-signature centroids. Bias (BI) and uncertainty (SE) were computed to assess reliability for each of the three sets of MM parameters. Results Both bias and uncertainty in mixing proportion estimates were low (BI ≤ 0.14, SE ≤ 0.06) when all nursery sources were sampled, but exhibited large variability among cohorts and increased with the number of non-sampled sources, up to BI = 0.24 and SE = 0.11. Bias and variability in baseline signature estimates also increased with the number of non-sampled sources, but these estimates tended to be less biased, and more uncertain, than mixing proportion estimates across all sampling scenarios (BI < 0.13, SE < 0.29). Increasing separation among nursery signatures improved the reliability of mixing proportion estimates, but led to non-linear responses in baseline signature parameters. Low uncertainty, but a consistent underestimation bias, affected the estimated number of nursery sources across all incomplete sampling scenarios. Discussion ML-MM produced reliable estimates of mixing proportions and nursery-signatures under an important range of incomplete sampling and nursery-signature separation scenarios. The method failed, however, in estimating the true number of nursery sources, reflecting a pervasive issue affecting mixture models within and beyond the ML framework. Large differences in bias and uncertainty among cohorts were linked to differences in the separation of chemical signatures among nursery habitats. Simulation approaches, such as those presented here, could be useful to evaluate the sensitivity of MM results to separation and variability in nursery-signatures for other species, habitats or cohorts. PMID:27761305
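
    When all nursery-signatures are known and held fixed, the mixing proportions can be estimated by a fixed-component EM loop like the sketch below (Python; a simplified conditional variant for illustration; the paper's unconditional ML-MM additionally estimates the signatures and the number of sources):

      import numpy as np
      from scipy.stats import multivariate_normal

      def em_mixing_proportions(X, means, covs, n_iter=200):
          """EM for the mixing proportions of a Gaussian mixture whose component
          ('nursery-signature') means and covariances are held fixed.
          X: mixed-stock observations, shape (n, dims)."""
          K = len(means)
          pi = np.full(K, 1.0 / K)
          dens = np.column_stack([
              multivariate_normal.pdf(X, mean=m, cov=c) for m, c in zip(means, covs)
          ])
          for _ in range(n_iter):
              resp = dens * pi
              resp /= resp.sum(axis=1, keepdims=True)  # E-step: membership probabilities
              pi = resp.mean(axis=0)                   # M-step: update proportions
          return pi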

  9. Methods for flexible sample-size design in clinical trials: Likelihood, weighted, dual test, and promising zone approaches.

    PubMed

    Shih, Weichung Joe; Li, Gang; Wang, Yining

    2016-03-01

    Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted critical value and other constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Fluorescein angiography versus optical coherence tomography for diagnosis of uveitic macular edema.

    PubMed

    Kempen, John H; Sugar, Elizabeth A; Jaffe, Glenn J; Acharya, Nisha R; Dunn, James P; Elner, Susan G; Lightman, Susan L; Thorne, Jennifer E; Vitale, Albert T; Altaweel, Michael M

    2013-09-01

    To evaluate agreement between fluorescein angiography (FA) and optical coherence tomography (OCT) results for diagnosis of macular edema in patients with uveitis. Multicenter cross-sectional study. Four hundred seventy-nine eyes with uveitis from 255 patients. The macular status of dilated eyes with intermediate uveitis, posterior uveitis, or panuveitis was assessed via Stratus-3 OCT and FA. To evaluate agreement between the diagnostic approaches, κ statistics were used. Macular thickening (MT; center point thickness, ≥ 240 μm per reading center grading of OCT images) and macular leakage (ML; central subfield fluorescein leakage, ≥ 0.44 disc areas per reading center grading of FA images), and agreement between these outcomes in diagnosing macular edema. Optical coherence tomography (90.4%) more frequently returned usable information regarding macular edema than FA (77%) or biomicroscopy (76%). Agreement in diagnosis of MT and ML (κ = 0.44) was moderate. Macular leakage was present in 40% of cases free of MT, whereas MT was present in 34% of cases without ML. Biomicroscopic evaluation for macular edema failed to detect 40% and 45% of cases of MT and ML, respectively, and diagnosed 17% and 17% of cases with macular edema that did not have MT or ML, respectively; these results may underestimate biomicroscopic errors (ophthalmologists were not explicitly masked to OCT and FA results). Among eyes free of ML, phakic eyes without cataract rarely (4%) had MT. No factors were found that effectively ruled out ML when MT was absent. Optical coherence tomography and FA offered only moderate agreement regarding macular edema status in uveitis cases, probably because what they measure (MT and ML) are related but nonidentical macular pathologic characteristics. Given its lower cost, greater safety, and greater likelihood of obtaining usable information, OCT may be the best initial test for evaluation of suspected macular edema. However, given that ML cannot be ruled out if MT is absent and vice versa, obtaining the second test after negative results on the first seems justified when detection of ML or MT would alter management. Given that biomicroscopic evaluation for macular edema erred frequently, ancillary testing for macular edema seems indicated when knowledge of ML or MT status would affect management. Proprietary or commercial disclosure may be found after the references. Copyright © 2013 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  11. Comparative study of three modified numerical spectrophotometric methods: An application on pharmaceutical ternary mixture of aspirin, atorvastatin and clopidogrel

    NASA Astrophysics Data System (ADS)

    Issa, Mahmoud Mohamed; Nejem, R.'afat Mahmoud; Shanab, Alaa Abu; Hegazy, Nahed Diab; Stefan-van Staden, Raluca-Ioana

    2014-07-01

    Three novel numerical methods were developed for the spectrophotometric multi-component analysis of capsules and synthetic mixtures of aspirin, atorvastatin and clopidogrel without any chemical separation. The subtraction method is based on the relationship between the difference in absorbance at four wavelengths and the corresponding concentration of analyte. In this method, the linear determination ranges were 0.8-40 μg mL⁻¹ aspirin, 0.8-30 μg mL⁻¹ atorvastatin and 0.5-30 μg mL⁻¹ clopidogrel. In the quotient method, 0.8-40 μg mL⁻¹ aspirin, 0.8-30 μg mL⁻¹ atorvastatin and 1.0-30 μg mL⁻¹ clopidogrel were determined from spectral data at the wavelength pairs that show the same ratio of absorbance for the other two species. The standard addition method was used for resolving a ternary mixture of 1.0-40 μg mL⁻¹ aspirin, 0.8-30 μg mL⁻¹ atorvastatin and 2.0-30 μg mL⁻¹ clopidogrel. The proposed methods were validated. The reproducibility and repeatability were found to be satisfactory, as evidenced by low values of relative standard deviation (<2%). Recovery was found to be in the range 99.6-100.8%. By adopting these methods, the time taken for analysis was reduced, as they involve very limited steps. The developed methods were applied for the simultaneous analysis of aspirin, atorvastatin and clopidogrel in capsule dosage forms, and the results were in good concordance with an alternative liquid chromatography method.

  12. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

    2016-02-05

    Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood, or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, thermodynamic integration, which has not previously been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions, by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. Overall, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
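
    The underlying identity, that the log marginal likelihood equals the integral over β in [0,1] of the expected log-likelihood under the power posterior proportional to prior × likelihood^β, can be sketched on a toy model as follows (Python; the model, step size and temperature schedule are illustrative choices, not the study's settings):

      import numpy as np

      rng = np.random.default_rng(42)

      # toy model (invented for illustration): data ~ N(theta, 1), prior theta ~ N(0, 10^2)
      data = rng.normal(1.0, 1.0, size=20)

      def log_like(th):
          return -0.5 * np.sum((data - th) ** 2) - 0.5 * len(data) * np.log(2 * np.pi)

      def log_prior(th):
          return -0.5 * (th / 10.0) ** 2 - 0.5 * np.log(2 * np.pi * 100.0)

      def power_posterior_samples(beta, n=4000, step=0.5):
          """Metropolis sampling from the tempered density prior * likelihood**beta."""
          th = 0.0
          lp = log_prior(th) + beta * log_like(th)
          out = []
          for _ in range(n):
              prop = th + step * rng.normal()
              lp_prop = log_prior(prop) + beta * log_like(prop)
              if np.log(rng.uniform()) < lp_prop - lp:
                  th, lp = prop, lp_prop
              out.append(th)
          return np.array(out[n // 2:])          # discard burn-in

      betas = np.linspace(0.0, 1.0, 11) ** 3     # power schedule, denser near the prior
      e_logl = np.array([np.mean([log_like(t) for t in power_posterior_samples(b)])
                         for b in betas])
      # trapezoidal rule along the beta path yields the log marginal likelihood
      log_ml = np.sum(np.diff(betas) * (e_logl[1:] + e_logl[:-1]) / 2.0)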

  13. Distributed Stress Sensing and Non-Destructive Tests Using Mechanoluminescence Materials

    NASA Astrophysics Data System (ADS)

    Rahimi, Mohammad Reza

    Rapid aging of infrastructure systems is currently pervasive in the US, and the anticipated cost of rehabilitating aging lifelines by 2020 will reach 3.6 trillion US dollars (ASCE 2013). Reliable condition or serviceability assessment is critically important in decision-making for economic and timely maintenance of infrastructure systems. Advanced sensors and nondestructive test (NDT) methods are the key technologies for structural health monitoring (SHM) applications that can provide information on the current state of structures. Many traditional sensors and NDT methods exist to detect defects in infrastructure, for example, strain gauges, ultrasound, radiography and other X-ray techniques. Considering that civil infrastructure is typically large-scale and exhibits complex behavior, estimating structural conditions by local sensing and NDT methods is a challenging task. Non-contact and distributed (or full-field) sensing and NDT methods are desirable, as they can provide rich information on the state of civil infrastructure. Materials with the ability to emit light, especially in the visible range, are known as luminescent materials. The mechanoluminescence (ML) phenomenon is light emission from luminescent materials in response to induced mechanical stress. ML materials offer new opportunities for SHM, as they can directly visualize stress and crack distributions on the surface of structures through ML light emission. Although substantial materials research on ML phenomena has been conducted, applications of ML sensors to full-field stress and crack visualization are still in their infancy. Moreover, practical application of ML sensors to SHM of civil infrastructure is difficult, since numerous challenging problems (e.g., environmental effects) arise in the field. In order to realize a practical SHM system employing ML sensors, more research is needed on, for example, the fundamental physics of the ML phenomenon, methods for quantitative stress measurement, calibration methods for ML sensors, sensitivity improvement, optimal manufacturing and design of ML sensors, environmental effects on the ML phenomenon (e.g., temperature), and image processing and analysis. In this research, the fundamental ML phenomena of the two most promising ML sensing materials were experimentally studied, and a methodology for full-field quantitative strain measurement was proposed for the first time, along with a standardized calibration method. The characteristics and behavior of ML composites and thin films coated on structures were studied under various material tests, including compression, tension, pure shear and bending. In addition, the sensitivity of ML emission to manufacturing parameters and experimental conditions was addressed in order to find the optimal design of the ML sensor. A phenomenological stress-optics transduction model for predicting the ML light intensity from a thin-film ML coating sensor subjected to in-plane stresses was proposed. A new full-field quantitative strain measurement methodology based on the ML thin-film sensor was developed in order to visualize and measure the strain field. The results from the ML sensor were compared with and verified against finite element simulation results. For NDT applications of ML sensors, experimental tests were conducted to visualize cracks on structural surfaces and detect damage in structural components.
In summary, this research proposes and realizes a new distributed stress sensor and NDT method using ML sensing materials. The proposed method is experimentally validated to be effective for stress measurement and crack visualization. Successful completion of this research provides a leap toward a commercial light-intensity-based optic sensor to be used as a new full-field stress measurement technology and NDT method.

  14. 14-3-3η Autoantibodies: Diagnostic Use in Early Rheumatoid Arthritis.

    PubMed

    Maksymowych, Walter P; Boire, Gilles; van Schaardenburg, Dirkjan; Wichuk, Stephanie; Turk, Samina; Boers, Maarten; Siminovitch, Katherine A; Bykerk, Vivian; Keystone, Ed; Tak, Paul Peter; van Kuijk, Arno W; Landewé, Robert; van der Heijde, Desiree; Murphy, Mairead; Marotta, Anthony

    2015-09-01

    To describe the expression and diagnostic use of 14-3-3η autoantibodies in early rheumatoid arthritis (RA). 14-3-3η autoantibody levels were measured using an electrochemiluminescent multiplexed assay in 500 subjects (114 disease-modifying antirheumatic drug-naive patients with early RA, 135 with established RA, 55 healthy, 70 autoimmune, and 126 other non-RA arthropathy controls). 14-3-3η protein levels were determined in an earlier analysis. Two-tailed Student t tests and Mann-Whitney U tests compared differences among groups. Receiver-operator characteristic (ROC) curves were generated and diagnostic performance was estimated by area under the curve (AUC), as well as specificity, sensitivity, and likelihood ratios (LR) for optimal cutoffs. Median serum 14-3-3η autoantibody concentrations were significantly higher (p < 0.0001) in patients with early RA (525 U/ml) when compared with healthy controls (235 U/ml), disease controls (274 U/ml), autoimmune disease controls (274 U/ml), patients with osteoarthritis (259 U/ml), and all controls (265 U/ml). ROC curve analysis comparing early RA with healthy controls demonstrated a significant (p < 0.0001) AUC of 0.90 (95% CI 0.85-0.95). At an optimal cutoff of ≥ 380 U/ml, the ROC curve yielded a sensitivity of 73%, a specificity of 91%, and a positive LR of 8.0. Adding 14-3-3η autoantibodies to 14-3-3η protein positivity enhanced the identification of patients with early RA from 59% to 90%; addition of 14-3-3η autoantibodies to anticitrullinated protein antibodies (ACPA) and/or rheumatoid factor (RF) increased identification from 72% to 92%. Seventy-two percent of RF- and ACPA-seronegative patients were positive for 14-3-3η autoantibodies. 14-3-3η autoantibodies, alone and in combination with the 14-3-3η protein, RF, and/or ACPA identified most patients with early RA.

  15. Approximated maximum likelihood estimation in multifractal random walks

    NASA Astrophysics Data System (ADS)

    Løvsletten, O.; Rypdal, M.

    2012-04-01

    We present an approximated maximum likelihood method for the multifractal random walk processes of [E. Bacry et al., Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the R computer language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.

  16. What influences the choice of assessment methods in health technology assessments? Statistical analysis of international health technology assessments from 1989 to 2002.

    PubMed

    Draborg, Eva; Andersen, Christian Kronborg

    2006-01-01

    Health technology assessment (HTA) has been used as input in decision making worldwide for more than 25 years. However, no uniform definition of HTA or agreement on assessment methods exists, leaving open the question of what influences the choice of assessment methods in HTAs. The objective of this study is to analyze statistically a possible relationship between the methods of assessment used in practical HTAs, the type of assessed technology, the type of assessors, and the year of publication. A sample of 433 HTAs published by eleven leading institutions or agencies in nine countries was reviewed and analyzed by multiple logistic regression. The study shows that outsourcing of HTA reports to external partners is associated with a higher likelihood of using assessment methods such as meta-analysis, surveys, economic evaluations, and randomized controlled trials, and with a lower likelihood of using assessment methods such as literature reviews and "other methods". The year of publication was statistically related to the inclusion of economic evaluations, with a decreasing likelihood over the period studied. When pharmaceuticals were the assessed type of technology, the likelihood of using economic evaluations, surveys, and "other methods" decreased. During the period from 1989 to 2002, no major developments in the assessment methods used in practical HTAs were shown statistically in a sample of 433 HTAs worldwide. Outsourcing to external assessors has a statistically significant influence on the choice of assessment methods.

  17. Identifying a parsimonious model for predicting academic achievement in undergraduate medical education: A confirmatory factor analysis

    PubMed Central

    Ali, Syeda Kauser; Baig, Lubna Ansari; Violato, Claudio; Zahid, Onaiza

    2017-01-01

    Objectives: This study was conducted to adduce evidence of validity for admissions tests and processes and to identify a parsimonious model that predicts students' academic achievement in medical college. Methods: Psychometric study of admission data and assessment scores for five years of medical studies at Aga Khan University Medical College, Pakistan, using confirmatory factor analysis (CFA) and structural equation modeling (SEM). The sample included 276 medical students admitted in 2003, 2004 and 2005. Results: The SEM supported the existence of covariance between verbal reasoning, science and clinical knowledge for predicting achievement in medical school, employing maximum likelihood (ML) estimation (n=112). Fit indices: χ²(21) = 59.70, p < .0001; CFI = .873; RMSEA = 0.129; SRMR = 0.093. Conclusions: This study shows that, in addition to biology and chemistry, which have traditionally been used as major criteria for admission to medical colleges in Pakistan, mathematics has proven to be a better predictor of higher achievement in medical college. PMID:29067063

  18. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; An Iterative Decoding Algorithm for Linear Block Codes Based on a Low-Weight Trellis Search

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    For long linear block codes, maximum likelihood decoding based on full code trellises would be very hard, if not impossible, to implement. In this case, we may wish to trade error performance for a reduction in decoding complexity. Suboptimum soft-decision decoding of a linear block code based on a low-weight subtrellis can be devised to provide an effective trade-off between error performance and decoding complexity. This chapter presents such a suboptimum decoding algorithm for linear block codes. The decoding algorithm is iterative in nature and based on an optimality test. It has the following important features: (1) a simple method to generate a sequence of candidate codewords, one at a time, for testing; (2) a sufficient condition for testing a candidate codeword for optimality; and (3) a low-weight subtrellis search for finding the most likely (ML) codeword.

  19. DNA Barcoding to Improve the Taxonomy of the Afrotropical Hoverflies (Insecta: Diptera: Syrphidae)

    PubMed Central

    Jordaens, Kurt; Goergen, Georg; Virgilio, Massimiliano; Backeljau, Thierry; Vokaer, Audrey; De Meyer, Marc

    2015-01-01

    The identification of Afrotropical hoverflies is very difficult because of limited recent taxonomic revisions and the lack of comprehensive identification keys. In order to assist in their identification, and to improve the taxonomy of this group, we constructed a reference dataset of 513 COI barcodes of 90 of the more common nominal species from Ghana, Togo, Benin and Nigeria (W Africa) and added ten publicly available COI barcodes from nine nominal Afrotropical species (total: 523 COI barcodes; 98 nominal species; 26 genera). The identification accuracy of this dataset was evaluated with three methods (K2P distance-based, Neighbor-Joining (NJ) / Maximum Likelihood (ML) analysis, and SpeciesIdentifier). The results of the three methods were highly congruent and showed high identification success. Nine species pairs showed a low (< 0.03) mean interspecific K2P distance that resulted in several incorrect identifications. A high (> 0.03) maximum intraspecific K2P distance was observed in eight species, and barcodes of these species did not always form single clusters in the NJ / ML analyses, which may indicate the occurrence of cryptic species. Optimal K2P thresholds to differentiate intra- from interspecific K2P divergence were highly different among the three subfamilies (Eristalinae: 0.037, Syrphinae: 0.06, Microdontinae: 0.007–0.02) and among the different genera, suggesting that optimal thresholds are better defined at the genus level. In addition to providing an alternative identification tool, our study indicates that DNA barcoding improves the taxonomy of Afrotropical hoverflies by selecting (groups of) taxa that deserve further taxonomic study, and by attributing the unknown sex to species for which only one of the sexes is known. PMID:26473612
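
    The K2P distances at the center of this analysis are straightforward to compute from a pair of aligned sequences; a compact sketch (Python, for illustration only):

      import numpy as np

      PURINES, PYRIMIDINES = {"A", "G"}, {"C", "T"}

      def k2p_distance(seq1, seq2):
          """Kimura two-parameter distance between two aligned sequences:
          d = -0.5*ln((1-2P-Q)*sqrt(1-2Q)), with P and Q the proportions of
          transitions and transversions. Gaps/ambiguous sites are skipped;
          the formula is undefined for extremely divergent pairs."""
          pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
                   if a in "ACGT" and b in "ACGT"]
          n = len(pairs)
          ts = sum(1 for a, b in pairs if a != b and
                   ({a, b} <= PURINES or {a, b} <= PYRIMIDINES))
          tv = sum(1 for a, b in pairs if a != b) - ts
          P, Q = ts / n, tv / n
          return -0.5 * np.log((1 - 2 * P - Q) * np.sqrt(1 - 2 * Q))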

  20. Time-of-flight PET image reconstruction using origin ensembles.

    PubMed

    Wülker, Christian; Sitek, Arkadiusz; Prevrhal, Sven

    2015-03-07

    The origin ensemble (OE) algorithm is a novel statistical method for minimum-mean-square-error (MMSE) reconstruction of emission tomography data. This method allows one to perform reconstruction entirely in the image domain, i.e. without the use of forward and backprojection operations. We have investigated the OE algorithm in the context of list-mode (LM) time-of-flight (TOF) PET reconstruction. In this paper, we provide a general introduction to MMSE reconstruction, and a statistically rigorous derivation of the OE algorithm. We show how to efficiently incorporate TOF information into the reconstruction process, and how to correct for random coincidences and scattered events. To examine the feasibility of LM-TOF MMSE reconstruction with the OE algorithm, we applied MMSE-OE and standard maximum-likelihood expectation-maximization (ML-EM) reconstruction to LM-TOF phantom data with a count number typically registered in clinical PET examinations. We analyzed the convergence behavior of the OE algorithm, and compared reconstruction time and image quality to that of the EM algorithm. In summary, during the reconstruction process, MMSE-OE contrast recovery (CRV) remained approximately the same, while background variability (BV) gradually decreased with an increasing number of OE iterations. The final MMSE-OE images exhibited lower BV and a slightly lower CRV than the corresponding ML-EM images. The reconstruction time of the OE algorithm was approximately 1.3 times longer. At the same time, the OE algorithm can inherently provide a comprehensive statistical characterization of the acquired data. This characterization can be utilized for further data processing, e.g. in kinetic analysis and image registration, making the OE algorithm a promising approach in a variety of applications.
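
    The ML-EM baseline against which OE is compared has a compact multiplicative update; a dense-matrix sketch (Python/NumPy; real scanners use sparse system models plus randoms/scatter corrections that are omitted here):

      import numpy as np

      def ml_em(A, y, n_iter=50):
          """Textbook ML-EM for emission tomography. A: system matrix
          (projection bins x voxels); y: measured counts per bin."""
          x = np.ones(A.shape[1])
          sens = A.sum(axis=0)                   # sensitivity: backprojection of ones
          for _ in range(n_iter):
              proj = A @ x                       # forward projection
              ratio = y / np.maximum(proj, 1e-12)
              x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative update
          return x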

  2. Univariate and bivariate likelihood-based meta-analysis methods performed comparably when marginal sensitivity and specificity were the targets of inference.

    PubMed

    Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H

    2017-03-01

    To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples (thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities) to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Composite Partial Likelihood Estimation Under Length-Biased Sampling, With Application to a Prevalent Cohort Study of Dementia

    PubMed Central

    Huang, Chiung-Yu; Qin, Jing

    2013-01-01

    The Canadian Study of Health and Aging (CSHA) employed a prevalent cohort design to study survival after onset of dementia, where patients with dementia were sampled and the onset time of dementia was determined retrospectively. The prevalent cohort sampling scheme favors individuals who survive longer. Thus, the observed survival times are subject to length bias. In recent years, there has been a rising interest in developing estimation procedures for prevalent cohort survival data that not only account for length bias but also actually exploit the incidence distribution of the disease to improve efficiency. This article considers semiparametric estimation of the Cox model for the time from dementia onset to death under a stationarity assumption with respect to the disease incidence. Under the stationarity condition, the semiparametric maximum likelihood estimation is expected to be fully efficient yet difficult to perform for statistical practitioners, as the likelihood depends on the baseline hazard function in a complicated way. Moreover, the asymptotic properties of the semiparametric maximum likelihood estimator are not well-studied. Motivated by the composite likelihood method (Besag 1974), we develop a composite partial likelihood method that retains the simplicity of the popular partial likelihood estimator and can be easily performed using standard statistical software. When applied to the CSHA data, the proposed method estimates a significant difference in survival between the vascular dementia group and the possible Alzheimer’s disease group, while the partial likelihood method for left-truncated and right-censored data yields a greater standard error and a 95% confidence interval covering 0, thus highlighting the practical value of employing a more efficient methodology. To check the assumption of stable disease for the CSHA data, we also present new graphical and numerical tests in the article. The R code used to obtain the maximum composite partial likelihood estimator for the CSHA data is available in the online Supplementary Material, posted on the journal web site. PMID:24000265

  4. On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood.

    PubMed

    Karabatsos, George

    2018-06-01

    This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon previous methods because it provides an omnibus test of the entire hierarchy of cancellation axioms, beyond double cancellation. It does so while accounting for the posterior uncertainty that is inherent in the empirical orderings implied by these axioms considered together. The new method is illustrated through a test of the cancellation axioms on a classic survey data set, and through the analysis of simulated data.
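
    The synthetic-likelihood evaluation at the heart of such algorithms can be sketched generically as follows (Python; `simulate` is a hypothetical user-supplied function returning a vector of summary statistics for one dataset simulated under theta, and this is the general Wood-style construction rather than the article's exact implementation):

      import numpy as np
      from scipy.stats import multivariate_normal

      def log_synthetic_likelihood(theta, observed_stats, simulate, n_sim=500):
          """Simulate summary statistics under theta, fit a multivariate normal
          to them, and evaluate the observed summaries under that fitted normal.
          observed_stats must be a vector of two or more summary statistics."""
          sims = np.array([simulate(theta) for _ in range(n_sim)])
          mu = sims.mean(axis=0)
          cov = np.cov(sims, rowvar=False)
          return multivariate_normal.logpdf(observed_stats, mean=mu, cov=cov)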

  5. [Approach to Spodoptera (Lepidoptera: Noctuidae) phylogeny based on the sequence of the cytochrome oxidase I (COI) mitochondrial gene].

    PubMed

    Saldamando, Clara Inés; Marquez, Edna Judith

    2012-09-01

    The genus Spodoptera includes 30 species of moths considered important pests worldwide, with a great representation in the Western Hemisphere. In general, Noctuidae species have morphological similarities that have caused some difficulties for assertive species identification by conventional methods. The purpose of this work was to generate an approach to the genus phylogeny from several species of the genus Spodoptera, with the species Bombyx mori as an outgroup, using molecular tools. For this, a total of 102 S. frugiperda larvae were obtained at random from corn, cotton, rice, grass and sorghum in Colombia, during late 2006 and early 2009. We took DNA samples from the posterior part of the larvae and analyzed a fragment of 451 base pairs of the mitochondrial gene cytochrome oxidase I (COI) to produce a maximum likelihood (ML) tree using 62 sequences (including 29 Colombian haplotypes). Our results showed great genetic differentiation (K2P distances) among S. frugiperda haplotypes from Colombia and the United States, a condition supported by the estimators obtained for haplotype diversity and polymorphism. The obtained ML tree clustered most of the species, with bootstrap values of 73-99% on the interior branches, although low values were also observed on some branches. In addition, this tree clustered two species of the Eastern Hemisphere (S. littoralis and S. litura) and eight species of the Western Hemisphere (S. androgea, S. dolichos, S. eridania, S. exigua, S. frugiperda, S. latifascia, S. ornithogalli and S. pulchella). In Colombia, S. frugiperda, S. ornithogalli and S. albula represent a group of species referred to as "the Spodoptera complex" of cotton crops, and our work demonstrated that sequencing a fragment of the COI gene allows researchers to differentiate the first two species; it can thus be used as an alternative to taxonomic keys based on morphology. Finally, the ML tree did not cluster S. frugiperda with S. ornithogalli, suggesting that the two species do not share the same most recent common ancestor even though they coexist in cotton. We suggest sequencing other genes (mitochondrial and nuclear) to increase our understanding of the evolution of this genus.

  6. Epidemiologic programs for computers and calculators. A microcomputer program for multiple logistic regression by unconditional and conditional maximum likelihood methods.

    PubMed

    Campos-Filho, N; Franco, E L

    1989-02-01

    A frequent procedure in matched case-control studies is to report results from the multivariate unmatched analyses if they do not differ substantially from the ones obtained after conditioning on the matching variables. Although conceptually simple, this rule requires that an extensive series of logistic regression models be evaluated by both the conditional and unconditional maximum likelihood methods. Most computer programs for logistic regression employ only one maximum likelihood method, which requires that the analyses be performed in separate steps. This paper describes a Pascal microcomputer (IBM PC) program that performs multiple logistic regression by both maximum likelihood estimation methods, which obviates the need for switching between programs to obtain relative risk estimates from both matched and unmatched analyses. The program calculates most standard statistics and allows factoring of categorical or continuous variables by two distinct methods of contrast. A built-in, descriptive statistics option allows the user to inspect the distribution of cases and controls across categories of any given variable.
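
    For the special case of 1:1 matching, the conditional ML fit reduces to an intercept-free logistic likelihood on within-pair covariate differences, which is easy to code directly (a Python sketch of that standard reduction, not the Pascal program described in the record):

      import numpy as np
      from scipy.optimize import minimize

      def conditional_ml_matched_pairs(x_case, x_ctrl):
          """Conditional ML for 1:1 matched case-control data: the matched-pair
          conditional likelihood is a logistic model, without intercept, on the
          within-pair covariate differences. Inputs: (n_pairs, n_covariates)."""
          d = x_case - x_ctrl
          def nll(beta):
              # negative log of prod_i sigmoid(beta' d_i), computed stably
              return np.sum(np.logaddexp(0.0, -d @ beta))
          fit = minimize(nll, x0=np.zeros(d.shape[1]))
          return fit.x   # estimated log odds ratios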

  7. Resonance scattering spectra of Micrococcus lysodeikticus and its application to assay of lysozyme activity.

    PubMed

    Jiang, Zhi-Liang; Huang, Guo-Xia

    2007-02-01

    Several methods, including turbidimetric and colorimetric methods, have been reported for the detection of lysozyme activity. However, there is no report of a resonance scattering spectral (RSS) assay based on the catalytic effect of lysozyme on the hydrolysis of Micrococcus lysodeikticus (ML) and its resonance scattering effect. ML has 5 resonance scattering peaks, at 360, 400, 420, 470, and 520 nm, with the strongest at 470 nm. The concentration of ML in the range of 2.0×10⁶-9.3×10⁸ cells/ml is proportional to the RS intensity at 470 nm (I(470 nm)). A new catalytic RSS method has been proposed for 0.24-40.0 U/ml (or 0.012-2.0 μg/ml) lysozyme activity, with a detection limit (3σ) of 0.014 U/ml (or 0.0007 μg/ml). Saliva samples were assayed by this method, and the results agreed with those of the turbidimetric method. The slope, intercept and correlation coefficient of the regression analysis of the 2 assays were 0.9665, -87.50, and 0.9973, respectively. The assay has high sensitivity and simplicity.

  8. One tree to link them all: a phylogenetic dataset for the European tetrapoda.

    PubMed

    Roquet, Cristina; Lavergne, Sébastien; Thuiller, Wilfried

    2014-08-08

    With the ever-increasing availability of phylogenetically informative data, the last decade has seen an upsurge of ecological studies incorporating information on evolutionary relationships among species. However, detailed species-level phylogenies, which are necessary for comprehensive large-scale eco-phylogenetic analyses, are still lacking for many large groups and regions. Here, we provide a dataset of 100 dated phylogenetic trees for all European tetrapods based on a mixture of supermatrix and supertree approaches. Phylogenetic inference was performed separately for each of the main Tetrapoda groups of Europe except mammals (i.e. amphibians, birds, squamates and turtles) by means of maximum likelihood (ML) analyses of supermatrices, applying a tree constraint at the family (amphibians and squamates) or order (birds and turtles) level based on consensus knowledge. For each group, we inferred 100 ML trees so as to provide a phylogenetic dataset that accounts for phylogenetic uncertainty, and assessed node support with bootstrap analyses. Each tree was dated using penalized likelihood and fossil calibration. The trees obtained were well supported by existing knowledge and previous phylogenetic studies. For mammals, we modified the most complete supertree dataset available in the literature to include a recent update of the Carnivora clade. As a final step, we merged the phylogenetic trees of all groups to obtain a set of 100 phylogenetic trees for all European Tetrapoda species for which data were available (91%). We provide this phylogenetic dataset (100 chronograms) for the purpose of comparative analyses, macro-ecological or community ecology studies aiming to incorporate phylogenetic information while accounting for phylogenetic uncertainty.

  9. Sedative load and salivary secretion and xerostomia in community-dwelling older people.

    PubMed

    Tiisanoja, Antti; Syrjälä, Anna-Maija; Komulainen, Kaija; Hartikainen, Sirpa; Taipale, Heidi; Knuuttila, Matti; Ylöstalo, Pekka

    2016-06-01

    The aim was to investigate how sedative load and the total number of drugs used are related to hyposalivation and xerostomia among 75-year-old or older dentate, non-smoking, community-dwelling people. The study population consisted of 152 older people from the Oral Health GeMS study. The data were collected by interviews and clinical examinations during 2004-2005. Sedative load, which measures the cumulative effect of taking multiple drugs with sedative properties, was calculated using the Sedative Load Model. Participants with a sedative load of either 1-2 or ≥3 had an increased likelihood of low stimulated salivary flow (<0.7 ml/min; OR: 2.4, CI: 0.6-8.6 and OR: 11, CI: 2.2-59, respectively) and low unstimulated salivary flow (<0.1 ml/min; OR: 2.7, CI: 1.0-7.4 and OR: 4.5, CI: 1.0-20, respectively) compared with participants without a sedative load. Participants with a sedative load ≥3 had an increased likelihood of xerostomia (OR: 2.5, CI: 0.5-12) compared with participants without a sedative load. The association between the total number of drugs and hyposalivation was weaker than that between sedative load and hyposalivation. Sedative load is strongly related to hyposalivation and, to a lesser extent, to xerostomia. The adverse effects of drugs on saliva secretion are specifically related to drugs with sedative properties. © 2014 John Wiley & Sons A/S and The Gerodontology Association. Published by John Wiley & Sons Ltd.

  10. A comparison of three approaches to non-stationary flood frequency analysis

    NASA Astrophysics Data System (ADS)

    Debele, S. E.; Strupczewski, W. G.; Bogdanowicz, E.

    2017-08-01

    Non-stationary flood frequency analysis (FFA) is applied to the statistical analysis of seasonal flow maxima from Polish and Norwegian catchments. Three non-stationary estimation methods, namely maximum likelihood (ML), two-stage (WLS/TS) and GAMLSS (generalized additive model for location, scale and shape), are compared in the context of capturing the effect of non-stationarity on the estimation of time-dependent moments and design quantiles. A multi-model approach is recommended to reduce the errors in quantile magnitudes due to model misspecification. The results of calculations based on observed seasonal daily flow maxima and computer simulation experiments showed that GAMLSS gave the best results with respect to the relative bias and root mean square error of the estimates of the trend in the standard deviation and of the constant shape parameter, while WLS/TS provided better accuracy in the estimates of the trend in the mean value. Of the three compared methods, WLS/TS is recommended for dealing with non-stationarity in short time series. Some practical aspects of the GAMLSS package application are also presented. A detailed discussion of general issues related to the consequences of climate change in FFA is presented in the second part of the article, entitled "Around and about an application of the GAMLSS package in non-stationary flood frequency analysis".
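
    The ML ingredient of such an analysis can be sketched as follows: a Gumbel model whose location parameter drifts linearly in time is fitted by numerically maximizing the log-likelihood, after which time-dependent design quantiles follow from the fitted parameters. The simulated trend and all numbers below are illustrative assumptions, not values from the study:

    ```python
    # Hedged sketch of non-stationary ML fitting: Gumbel maxima with a
    # linear trend in the location parameter.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import gumbel_r

    rng = np.random.default_rng(1)
    years = np.arange(50)
    flows = gumbel_r.rvs(loc=100 + 0.8 * years, scale=25, random_state=rng)

    def neg_loglik(params):
        mu0, mu1, log_scale = params          # location trend + log scale
        loc = mu0 + mu1 * years
        return -np.sum(gumbel_r.logpdf(flows, loc=loc, scale=np.exp(log_scale)))

    fit = minimize(neg_loglik, x0=[np.mean(flows), 0.0, np.log(np.std(flows))])
    mu0, mu1, log_scale = fit.x
    # Time-dependent design quantile, e.g. the 100-year flood in year t:
    # Q_T(t) = loc(t) - scale * ln(-ln(1 - 1/T)) for the Gumbel distribution.
    q100 = (mu0 + mu1 * years) - np.exp(log_scale) * np.log(-np.log(1 - 1 / 100))
    print(f"trend in location: {mu1:.2f} per year; "
          f"100-yr quantile in final year: {q100[-1]:.0f}")
    ```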

  11. A Direct Position-Determination Approach for Multiple Sources Based on Neural Network Computation.

    PubMed

    Chen, Xin; Wang, Ding; Yin, Jiexin; Wu, Ying

    2018-06-13

    The most widely used localization technology is the two-step method that localizes transmitters by measuring one or more specified positioning parameters. Direct position determination (DPD) is a promising technique that directly localizes transmitters from sensor outputs and can offer superior localization performance. However, existing DPD algorithms such as maximum likelihood (ML)-based and multiple signal classification (MUSIC)-based estimations are computationally expensive, making it difficult to satisfy real-time demands. To solve this problem, we propose the use of a modular neural network for multiple-source DPD. In this method, the area of interest is divided into multiple sub-areas. Multilayer perceptron (MLP) neural networks are employed to detect the presence of a source in a sub-area and filter sources in other sub-areas, and radial basis function (RBF) neural networks are utilized for position estimation. Simulation results show that a number of appropriately trained neural networks can be successfully used for DPD. The performance of the proposed MLP-MLP-RBF method is comparable to the performance of the conventional MUSIC-based DPD algorithm for various signal-to-noise ratios and signal power ratios. Furthermore, the MLP-MLP-RBF network is less computationally intensive than the classical DPD algorithm and is therefore an attractive choice for real-time applications.
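
    The position-estimation stage can be illustrated with a toy Gaussian RBF network: nonlinear basis functions centered on training samples feed a linear output layer fitted by least squares. The features, geometry and network size below are invented for illustration and are unrelated to the trained networks in the paper:

    ```python
    # Illustrative sketch (not the authors' trained networks): a tiny Gaussian
    # RBF network regressing a 2-D source position from a feature vector.
    import numpy as np

    rng = np.random.default_rng(2)
    positions = rng.uniform(0, 100, size=(500, 2))          # hypothetical sources
    features = np.hstack([np.sin(positions / 15.0), np.cos(positions / 7.0)])
    features += 0.05 * rng.normal(size=features.shape)      # sensor-like noise

    # Centers picked from the training data; a shared width from the
    # median center-to-sample distance.
    centers = features[rng.choice(len(features), 40, replace=False)]
    width = np.median(np.linalg.norm(features[:, None] - centers, axis=2))

    def rbf_design(x):
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        return np.exp(-(d / width) ** 2)

    # Linear output weights by least squares on the RBF activations.
    weights, *_ = np.linalg.lstsq(rbf_design(features), positions, rcond=None)
    estimate = rbf_design(features[:1]) @ weights
    print("true:", positions[0], "estimated:", estimate[0])
    ```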

  12. The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.

    ERIC Educational Resources Information Center

    Blackwood, Larry G.; Bradley, Edwin L.

    1989-01-01

    Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimations from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model is a convenient method of calculating UML estimates. (SLD)

  13. Systematic implementation of spectral CT with a photon counting detector for liquid security inspection

    NASA Astrophysics Data System (ADS)

    Xu, Xiaofei; Xing, Yuxiang; Wang, Sen; Zhang, Li

    2018-06-01

    X-ray liquid security inspection systems play an important role in homeland security, but conventional dual-energy CT (DECT) systems may show large deviations in extracting the atomic number and electron density of materials under various conditions. Photon counting detectors (PCDs) can discriminate incident photons of different energies, and the technology has matured considerably in recent years. In this work, we explore the performance of a multi-energy CT imaging system with a PCD for material discrimination in liquid security inspection. We used a maximum-likelihood (ML) decomposition method with scatter correction based on a cross-energy response model (CERM) for PCDs to improve the accuracy of atomic number and electron density imaging. An experimental study was carried out to examine the effectiveness and robustness of the proposed system. Our results show that the concentrations of different solutions in physical phantoms can be reconstructed accurately, which could improve material identification compared with currently available dual-energy liquid security inspection systems. The CERM-based decomposition and reconstruction method can easily be adapted to other applications such as medical diagnosis.
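
    The core of an ML decomposition step can be sketched as follows: given per-bin counts that follow Poisson statistics, the basis-material thicknesses are found by maximizing the Poisson log-likelihood. The spectra and attenuation values are made-up numbers, and the sketch omits the cross-energy response and scatter corrections that the paper models:

    ```python
    # Minimal sketch of maximum-likelihood basis-material decomposition for a
    # photon-counting detector with several energy bins.
    import numpy as np
    from scipy.optimize import minimize

    n0 = np.array([2e5, 1.5e5, 1e5, 5e4])          # incident counts per bin
    mu_water = np.array([0.25, 0.20, 0.18, 0.16])  # 1/cm, per bin (assumed)
    mu_bone = np.array([0.90, 0.55, 0.40, 0.30])   # 1/cm, per bin (assumed)

    true_t = np.array([8.0, 1.5])                  # cm of water, cm of bone
    lam_true = n0 * np.exp(-(true_t[0] * mu_water + true_t[1] * mu_bone))
    counts = np.random.default_rng(3).poisson(lam_true)

    def neg_loglik(t):
        lam = n0 * np.exp(-(t[0] * mu_water + t[1] * mu_bone))
        # Poisson log-likelihood up to a constant: n*log(lam) - lam
        return -np.sum(counts * np.log(lam) - lam)

    est = minimize(neg_loglik, x0=[1.0, 1.0], bounds=[(0, None), (0, None)]).x
    print("estimated thicknesses (water, bone):", est)
    ```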

  14. Indocyanine Green Guided Pelvic Lymph Node Dissection: An Efficient Technique to Classify the Lymph Node Status of Patients with Prostate Cancer Who Underwent Radical Prostatectomy.

    PubMed

    Ramírez-Backhaus, Miguel; Mira Moreno, Alejandra; Gómez Ferrer, Alvaro; Calatrava Fons, Ana; Casanova, Juan; Solsona Narbón, Eduardo; Ortiz Rodríguez, Isabel María; Rubio Briones, José

    2016-11-01

    We evaluated the effectiveness of indocyanine green guided pelvic lymph node dissection for the optimal staging of prostate cancer and analyzed whether the technique could replace extended pelvic lymph node dissection. A solution of 25 mg indocyanine green in 5 ml sterile water was transperineally injected. Pelvic lymph node dissection was started with the indocyanine green stained nodes followed by extended pelvic lymph node dissection. Primary outcome measures were sensitivity, specificity, predictive value and likelihood ratio of a negative test of indocyanine green guided pelvic lymph node dissection. A total of 84 patients with a median age of 63.55 years and a median prostate specific antigen of 8.48 ng/ml were included in the study. Of these patients 60.7% had intermediate risk disease and 25% had high or very high risk disease. A median of 7 indocyanine green stained nodes per patient was detected (range 2 to 18) with a median of 22 nodes excised during extended pelvic lymph node dissection. Lymph node metastasis was identified in 25 patients, 23 of whom had disease properly classified by indocyanine green guided pelvic lymph node dissection. The most frequent location of indocyanine green stained nodes was the proximal internal iliac artery followed by the fossa of Marcille. The negative predictive value was 96.7% and the likelihood ratio of a negative test was 8%. Overall 1,856 nodes were removed and 603 were stained indocyanine green. Pathological examination revealed 82 metastatic nodes, of which 60% were indocyanine green stained. The negative predictive value was 97.4% but the likelihood ratio of a negative test was 58.5%. Indocyanine green guided pelvic lymph node dissection correctly staged 97% of cases. However, according to our data it cannot replace extended pelvic lymph node dissection. Nevertheless, its high negative predictive value could allow us to avoid extended pelvic lymph node dissection if we had an accurate intraoperative lymph fluorescent analysis. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  15. Bivariate versus multivariate smart spectrophotometric calibration methods for the simultaneous determination of a quaternary mixture of mosapride, pantoprazole and their degradation products.

    PubMed

    Hegazy, M A; Yehia, A M; Moustafa, A A

    2013-05-01

    The ability of bivariate and multivariate spectrophotometric methods was demonstrated in the resolution of a quaternary mixture of mosapride, pantoprazole and their degradation products. The bivariate calibrations include the bivariate spectrophotometric method (BSM) and the H-point standard addition method (HPSAM), which were able to determine the two drugs simultaneously, but not in the presence of their degradation products. Simultaneous determinations could be performed in the concentration ranges of 5.0-50.0 μg/ml for mosapride and 10.0-40.0 μg/ml for pantoprazole by the bivariate spectrophotometric method, and in the concentration range of 5.0-45.0 μg/ml for both drugs by the H-point standard addition method. Moreover, the applied multivariate calibration methods, concentration residuals augmented classical least squares (CRACLS) and partial least squares (PLS), were able to determine mosapride, pantoprazole and their degradation products. The proposed multivariate methods were applied to 17 synthetic samples in the concentration ranges of 3.0-12.0 μg/ml mosapride, 8.0-32.0 μg/ml pantoprazole, 1.5-6.0 μg/ml mosapride degradation products and 2.0-8.0 μg/ml pantoprazole degradation products. The proposed bivariate and multivariate calibration methods were successfully applied to the determination of mosapride and pantoprazole in their pharmaceutical preparations.
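
    The multivariate calibration idea can be sketched with PLS alone (CRACLS is omitted): spectra are simulated as linear mixtures of hypothetical component bands, a PLS model is trained on known concentrations, and unknowns are predicted from their spectra. All spectra and concentration values below are invented:

    ```python
    # Hedged sketch of multivariate spectrophotometric calibration with
    # partial least squares on simulated four-component mixtures.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(4)
    wavelengths = np.linspace(200, 400, 120)
    # Four hypothetical component spectra (two drugs + two degradation products).
    peaks = [250, 290, 265, 310]
    S = np.array([np.exp(-((wavelengths - p) / 18.0) ** 2) for p in peaks])

    C_train = rng.uniform(0.5, 10.0, size=(30, 4))       # concentrations
    A_train = C_train @ S + 0.002 * rng.normal(size=(30, len(wavelengths)))

    pls = PLSRegression(n_components=4).fit(A_train, C_train)
    C_test = rng.uniform(0.5, 10.0, size=(5, 4))
    A_test = C_test @ S + 0.002 * rng.normal(size=(5, len(wavelengths)))
    print(np.round(pls.predict(A_test) - C_test, 3))     # residuals near zero
    ```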

  16. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models]

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  17. Flexible mini gamma camera reconstructions of extended sources using step and shoot and list mode.

    PubMed

    Gardiazabal, José; Matthies, Philipp; Vogel, Jakob; Frisch, Benjamin; Navab, Nassir; Ziegler, Sibylle; Lasser, Tobias

    2016-12-01

    Hand- and robot-guided mini gamma cameras have been introduced for the acquisition of single-photon emission computed tomography (SPECT) images. Less cumbersome than whole-body scanners, they allow for a fast acquisition of the radioactivity distribution, for example, to differentiate cancerous from hormonally hyperactive lesions inside the thyroid. This work compares acquisition protocols and reconstruction algorithms in an attempt to identify the most suitable approach for fast acquisition and efficient image reconstruction, suitable for localization of extended sources, such as lesions inside the thyroid. Our setup consists of a mini gamma camera with precise tracking information provided by a robotic arm, which also provides reproducible positioning for our experiments. Based on a realistic phantom of the thyroid including hot and cold nodules as well as background radioactivity, the authors compare "step and shoot" (SAS) and continuous data (CD) acquisition protocols in combination with two different statistical reconstruction methods: maximum-likelihood expectation-maximization (ML-EM) for time-integrated count values and list-mode expectation-maximization (LM-EM) for individually detected gamma rays. In addition, the authors simulate lower uptake values by statistically subsampling the experimental data in order to study the behavior of their approach without changing other aspects of the acquired data. All compared methods yield suitable results, resolving the hot nodules and the cold nodule from the background. However, the CD acquisition is twice as fast as the SAS acquisition, while yielding better coverage of the thyroid phantom, resulting in qualitatively more accurate reconstructions of the isthmus between the lobes. For CD acquisitions, the LM-EM reconstruction method is preferable, as it yields comparable image quality to ML-EM at significantly higher speeds, on average by an order of magnitude. This work identifies CD acquisition protocols combined with LM-EM reconstruction as a prime candidate for the wider introduction of SPECT imaging with flexible mini gamma cameras in the clinical practice.
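
    The ML-EM update at the heart of the comparison has a compact form: the current image is multiplied by the back-projected ratio of measured to expected counts, normalized by the detector sensitivity. The toy system matrix below stands in for the tracked camera model; the list-mode variant applies the same idea event by event:

    ```python
    # Minimal sketch of the ML-EM update for time-integrated count data.
    import numpy as np

    rng = np.random.default_rng(5)
    n_pixels, n_detectors = 64, 256
    A = rng.uniform(0, 1, size=(n_detectors, n_pixels))   # toy system matrix
    x_true = rng.uniform(0, 10, n_pixels)
    counts = rng.poisson(A @ x_true)

    x = np.ones(n_pixels)                                 # flat initial image
    sensitivity = A.sum(axis=0)                           # A^T 1
    for _ in range(100):
        expected = A @ x
        # Multiplicative update: back-project measured/expected count ratios.
        x *= (A.T @ (counts / np.maximum(expected, 1e-12))) / sensitivity

    print("relative error:",
          np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```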

  18. Novel joint cupping clinical maneuver for ultrasonographic detection of knee joint effusions.

    PubMed

    Uryasev, Oleg; Joseph, Oliver C; McNamara, John P; Dallas, Apostolos P

    2013-11-01

    Knee effusions occur due to traumatic and atraumatic causes. Clinical diagnosis currently relies on several provocative techniques to demonstrate knee joint effusions. Portable bedside ultrasonography (US) is becoming an adjunct to the diagnosis of effusions. We hypothesized that a US approach with a clinical joint cupping maneuver increases sensitivity in identifying effusions as compared to US alone. Using unembalmed cadaver knees, we injected fluid to create effusions of up to 10 mL. Each effusion volume was measured in a lateral transverse location with respect to the patella. For each effusion we applied a joint cupping maneuver from an inferior approach and re-measured the effusion. With increased volume of saline infusion, the mean depth of effusion on ultrasound imaging increased as well. Using a 2-mm cutoff, we visualized an effusion without the joint cupping maneuver at 2.5 mL and with the joint cupping technique at 1 mL. Mean effusion diameter increased on average 0.26 cm with the joint cupping maneuver as compared to without it. The effusion depth was statistically different at 2.5 and 7.5 mL (P < .05). A joint cupping technique combined with US is a valuable tool for assessing knee effusions, especially those at subclinical levels. Effusion measurements are complicated by uneven distribution of effusion fluid. A clinical joint cupping maneuver concentrates the fluid in one recess of the joint, increasing the likelihood of fluid detection using US. © 2013 Elsevier Inc. All rights reserved.

  19. Water: an essential but overlooked nutrient.

    PubMed

    Kleiner, S M

    1999-02-01

    Water is an essential nutrient required for life. To be well hydrated, the average sedentary adult man must consume at least 2,900 mL (12 c) fluid per day, and the average sedentary adult woman at least 2,200 mL (9 c) fluid per day, in the form of noncaffeinated, nonalcoholic beverages, soups, and foods. Solid foods contribute approximately 1,000 mL (4 c) water, with an additional 250 mL (1 c) coming from the water of oxidation. The Nationwide Food Consumption Surveys indicate that a portion of the population may be chronically mildly dehydrated. Several factors may increase the likelihood of chronic, mild dehydration, including a poor thirst mechanism, dissatisfaction with the taste of water, common consumption of the natural diuretics caffeine and alcohol, participation in exercise, and environmental conditions. Dehydration of as little as 2% loss of body weight results in impaired physiological and performance responses. New research indicates that fluid consumption in general and water consumption in particular can have an effect on the risk of urinary stone disease; cancers of the breast, colon, and urinary tract; childhood and adolescent obesity; mitral valve prolapse; salivary gland function; and overall health in the elderly. Dietitians should be encouraged to promote and monitor fluid and water intake among all of their clients and patients through education and to help them design a fluid intake plan. The influence of chronic mild dehydration on health and disease merits further research.

  20. Simultaneous determination of morphine, codeine and 6-acetyl morphine in human urine and blood samples using direct aqueous derivatisation: validation and application to real cases.

    PubMed

    Chericoni, S; Stefanelli, F; Iannella, V; Giusiani, M

    2014-02-15

    Opiates play a relevant role in forensic toxicology, and their assay in urine or blood is usually performed, for example, in workplace drug testing or in toxicological investigations of drug-impaired driving. The present work describes two new methods for detecting morphine, codeine and 6-monoacetyl morphine in human urine or blood using a single-step derivatisation in aqueous phase. Propyl chloroformate is used as the derivatising agent, followed by liquid-liquid extraction and gas chromatography-mass spectrometry to detect the derivatives. The methods have been validated for both hydrolysed and unhydrolysed urine. For hydrolysed urine, the LOD and LOQ were 2.5 ng/ml and 8.5 ng/ml for codeine, and 5.2 ng/ml and 15.1 ng/ml for morphine, respectively. For unhydrolysed urine, the LOD and LOQ were 3.0 ng/ml and 10.1 ng/ml for codeine, 2.7 ng/ml and 8.1 ng/ml for morphine, and 0.8 ng/ml and 1.5 ng/ml for 6-monoacetyl morphine, respectively. In blood, the LOD and LOQ were 0.44 ng/ml and 1.46 ng/ml for codeine, 0.29 ng/ml and 0.98 ng/ml for morphine, and 0.15 ng/ml and 0.51 ng/ml for 6-monoacetyl morphine, respectively. The validated methods have been applied to 50 urine samples and 40 blood samples (both positive and negative) and can be used in routine analyses. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Unified framework to evaluate panmixia and migration direction among multiple sampling locations.

    PubMed

    Beerli, Peter; Palczewski, Michal

    2010-05-01

    For many biological investigations, groups of individuals are genetically sampled from several geographic locations. These sampling locations often do not reflect the genetic population structure. We describe a framework using marginal likelihoods to compare and order structured population models, such as testing whether the sampling locations belong to the same randomly mating population or comparing unidirectional and multidirectional gene flow models. In the context of inferences employing Markov chain Monte Carlo methods, the accuracy of the marginal likelihoods depends heavily on the approximation method used to calculate the marginal likelihood. Two methods, modified thermodynamic integration and a stabilized harmonic mean estimator, are compared. With finite Markov chain Monte Carlo run lengths, the harmonic mean estimator may not be consistent. Thermodynamic integration, in contrast, delivers considerably better estimates of the marginal likelihood. The choice of prior distributions does not influence the order and choice of the better models when the marginal likelihood is estimated using thermodynamic integration, whereas with the harmonic mean estimator the influence of the prior is pronounced and the order of the models changes. The approximation of marginal likelihood using thermodynamic integration in MIGRATE allows the evaluation of complex population genetic models, not only of whether sampling locations belong to a single panmictic population, but also of competing complex structured population models.
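
    The contrast between the two estimators can be reproduced on a toy conjugate-normal model where the exact marginal likelihood is known in closed form. This is a schematic of the two ideas only, not the MIGRATE implementation; the temperature schedule and sample sizes are arbitrary choices:

    ```python
    # Toy comparison of thermodynamic integration vs. the harmonic mean
    # estimator of the log marginal likelihood on a conjugate normal model.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(6)
    x = rng.normal(1.0, 1.0, 50)                 # data; likelihood sd fixed at 1
    tau = 2.0                                    # prior sd of the unknown mean
    n, xsum = len(x), x.sum()

    def loglik(mu):
        return norm.logpdf(x[:, None], loc=mu, scale=1.0).sum(axis=0)

    def sample_power_posterior(beta, size=4000):
        prec = 1 / tau**2 + beta * n             # tempered posterior is normal
        return rng.normal(beta * xsum / prec, np.sqrt(1 / prec), size)

    # Thermodynamic integration: log m = integral over beta of E_beta[log L].
    betas = np.linspace(0, 1, 32) ** 3           # denser near the prior (beta=0)
    means = np.array([loglik(sample_power_posterior(b)).mean() for b in betas])
    log_ml_ti = np.sum(np.diff(betas) * (means[1:] + means[:-1]) / 2)

    # Harmonic mean estimator from posterior samples (known to be unstable).
    ll = loglik(sample_power_posterior(1.0))
    log_ml_hm = ll.min() - np.log(np.mean(np.exp(ll.min() - ll)))

    # Exact log marginal likelihood of the conjugate model, for reference.
    A = n + 1 / tau**2
    log_ml_exact = (-n / 2 * np.log(2 * np.pi) - 0.5 * np.log(tau**2 * A)
                    - 0.5 * (np.sum(x**2) - xsum**2 / A))
    print(log_ml_ti, log_ml_hm, log_ml_exact)
    ```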

  2. Predicting activities of daily living for cancer patients using an ontology-guided machine learning methodology.

    PubMed

    Min, Hua; Mobahi, Hedyeh; Irvin, Katherine; Avramovic, Sanja; Wojtusiak, Janusz

    2017-09-16

    Bio-ontologies are becoming increasingly important in knowledge representation and in the machine learning (ML) fields. This paper presents an ML approach that incorporates bio-ontologies and its application to the SEER-MHOS dataset to discover patterns of patient characteristics that impact the ability to perform activities of daily living (ADLs). Bio-ontologies are used to provide computable knowledge for ML methods to "understand" biomedical data. This retrospective study included 723 cancer patients from the SEER-MHOS dataset. Two ML methods were applied to create predictive models for ADL disabilities for the first year after a patient's cancer diagnosis. The first method is a standard rule learning algorithm; the second is that same algorithm additionally equipped with methods for reasoning with ontologies. The models showed that a patient's race, ethnicity, smoking preference, treatment plan and tumor characteristics including histology, staging, cancer site, and morphology were predictors for ADL performance levels one year after cancer diagnosis. The ontology-guided ML method was more accurate at predicting ADL performance levels (P < 0.1) than methods without ontologies. This study demonstrated that bio-ontologies can be harnessed to provide medical knowledge for ML algorithms. The presented method demonstrates that encoding specific types of hierarchical relationships to guide rule learning is possible, and can be extended to other types of semantic relationships present in biomedical ontologies. The ontology-guided ML method achieved better performance than the method without ontologies. The presented method can also be used to promote the effectiveness and efficiency of ML in healthcare, in which use of background knowledge and consistency with existing clinical expertise is critical.

  3. Spectrofluorimetric determination of fluoroquinolones in pharmaceutical preparations.

    PubMed

    Ulu, Sevgi Tatar

    2009-02-01

    A simple, rapid and highly sensitive spectrofluorimetric method is presented for the determination of four fluoroquinolone (FQ) drugs, ciprofloxacin, enoxacin, norfloxacin and moxifloxacin, in pharmaceutical preparations. The proposed method is based on the derivatization of FQ with 4-chloro-7-nitrobenzofurazan (NBD-Cl) in borate buffer of pH 9.0 to yield a yellow product. The optimum experimental conditions have been studied carefully. Beer's law is obeyed over the concentration ranges of 23.5-500 ng/mL for ciprofloxacin, 28.5-700 ng/mL for enoxacin, 29.5-800 ng/mL for norfloxacin and 33.5-1000 ng/mL for moxifloxacin. The detection limits were found to be 7.0 ng/mL for ciprofloxacin, 8.5 ng/mL for enoxacin, 9.2 ng/mL for norfloxacin and 9.98 ng/mL for moxifloxacin. Intra-day and inter-day relative standard deviation and relative mean error values at three different concentrations were determined. The low relative standard deviation values indicate good precision, and the high recovery values indicate accuracy of the proposed method. The method is highly sensitive and specific. The results obtained are in good agreement with those obtained by the official and reference method. The results presented in this report show that the applied spectrofluorimetric method is acceptable for the determination of the four FQs in pharmaceutical preparations. Common excipients used as additives in pharmaceutical preparations do not interfere with the proposed method.

  4. Development and validation of spectrophotometric methods for estimating amisulpride in pharmaceutical preparations.

    PubMed

    Sharma, Sangita; Neog, Madhurjya; Prajapati, Vipul; Patel, Hiren; Dabhi, Dipti

    2010-01-01

    Five simple, sensitive, accurate and rapid visible spectrophotometric methods (A, B, C, D and E) have been developed for estimating amisulpride in pharmaceutical preparations. These are based on the diazotization of amisulpride with sodium nitrite and hydrochloric acid, followed by coupling with N-(1-naphthyl)ethylenediamine dihydrochloride (Method A), diphenylamine (Method B), beta-naphthol in an alkaline medium (Method C), resorcinol in an alkaline medium (Method D) and chromotropic acid in an alkaline medium (Method E) to form a colored chromogen. The absorption maxima (λmax) are at 523 nm for Method A, 382 and 490 nm for Method B, 527 nm for Method C, 521 nm for Method D and 486 nm for Method E. Beer's law was obeyed in the concentration ranges of 2.5-12.5 μg/mL in Method A, 5-25 and 10-50 μg/mL in Method B, 4-20 μg/mL in Method C, 2.5-12.5 μg/mL in Method D and 5-15 μg/mL in Method E. The results obtained by the proposed methods are in good agreement with labeled amounts when marketed pharmaceutical preparations were analyzed.

  5. Method for determination of levoglucosan in snow and ice at trace concentration levels using ultra-performance liquid chromatography coupled with triple quadrupole mass spectrometry.

    PubMed

    You, Chao; Song, Lili; Xu, Baiqing; Gao, Shaopeng

    2016-02-01

    A method is developed for the determination of levoglucosan at trace concentration levels in the complex matrices of snow and ice samples. This method uses an injection mixture comprising acetonitrile and melted sample at a ratio of 50/50 (v/v). Samples are analyzed using an ultra-performance liquid chromatography system coupled with triple quadrupole tandem mass spectrometry (UPLC-MS/MS). Levoglucosan is analyzed on a BEH Amide column (2.1 mm × 100 mm, 1.7 μm), and a Z-spray electrospray ionization source is used for levoglucosan ionization. A polyethersulfone filter is selected for filtering out insoluble particles because of its minimal impact on levoglucosan. The matrix effect is evaluated using a standard addition method. During the method validation, the limit of detection (LOD), linearity, recovery, repeatability and reproducibility were evaluated using the standard addition method. The LOD of this method is 0.11 ng/mL. Recoveries vary from 91.2% at 0.82 ng/mL to 99.3% at 4.14 ng/mL. Repeatability ranges from 17.9% at a concentration of 0.82 ng/mL to 2.8% at 4.14 ng/mL. Reproducibility ranges from 15.1% at a concentration of 0.82 ng/mL to 1.9% at 4.14 ng/mL. This method can be implemented using less than 0.50 mL of sample, enabling analysis of samples from low- and middle-latitude regions such as the Tibetan Plateau. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. The Factor Structure of the Spiritual Well-Being Scale in Veterans Experienced Chemical Weapon Exposure.

    PubMed

    Sharif Nia, Hamid; Pahlevan Sharif, Saeed; Boyle, Christopher; Yaghoobzadeh, Ameneh; Tahmasbi, Bahram; Rassool, G Hussein; Taebei, Mozhgan; Soleimani, Mohammad Ali

    2018-04-01

    This study aimed to determine the factor structure of spiritual well-being among a sample of Iranian veterans. In this methodological study, 211 male veterans of the Iran-Iraq war completed the Paloutzian and Ellison spiritual well-being scale. Maximum likelihood (ML) estimation with oblique rotation was used to assess the domain structure of spiritual well-being. The construct validity of the scale was assessed using confirmatory factor analysis (CFA), convergent validity, and discriminant validity. Reliability was evaluated with Cronbach's alpha, theta (θ), and McDonald's omega (Ω) coefficients, the intra-class correlation coefficient (ICC), and construct reliability (CR). Results of ML and CFA suggested three factors, which were labeled "relationship with God," "belief in fate and destiny," and "life optimism." The ICC, coefficients of internal consistency, and CR were >.7 for the factors of the scale. Convergent validity and discriminant validity did not fulfill the requirements. The Persian version of the spiritual well-being scale demonstrated suitable validity and reliability among veterans of the Iran-Iraq war.

  7. Maximum likelihood decoding analysis of Accumulate-Repeat-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    Repeat-Accumulate (RA) codes are the simplest turbo-like codes that achieve good performance. However, they cannot compete with turbo codes or low-density parity-check (LDPC) codes as far as performance is concerned. Accumulate-Repeat-Accumulate (ARA) codes, a subclass of LDPC codes, are obtained by adding a precoder in front of punctured RA codes, where an accumulator is chosen as the precoder. These codes are not only very simple but also achieve excellent performance with iterative decoding. In this paper, the performance of these codes under maximum-likelihood (ML) decoding is analyzed and compared to random codes by means of very tight bounds. The weight distribution of some simple ARA codes is obtained, and through the tightest existing bounds we show that the ML SNR threshold of ARA codes approaches the performance of random codes very closely. We also show that the use of a precoder improves the SNR threshold, while the interleaving gain remains unchanged with respect to the punctured RA code.
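
    For orientation, the simplest bound of this family is the union bound on the ML word-error rate computed from a code's weight distribution over BPSK/AWGN, P_e <= sum_d A_d Q(sqrt(2 d R Eb/N0)); the paper employs much tighter bounds, and the weight spectrum below is invented:

    ```python
    # Schematic union bound on ML decoding word-error rate from a weight
    # distribution; the spectrum A_d here is a toy stand-in.
    import numpy as np
    from scipy.stats import norm

    rate = 0.5
    weights = np.arange(4, 40)                  # Hamming weights d
    A_d = np.exp(0.35 * weights)                # invented multiplicities

    for ebno_db in (2.0, 3.0, 4.0):
        ebno = 10 ** (ebno_db / 10)
        # norm.sf is the Gaussian Q-function.
        pe = np.sum(A_d * norm.sf(np.sqrt(2 * weights * rate * ebno)))
        print(f"Eb/N0 = {ebno_db} dB: union bound P_e <= {pe:.3e}")
    ```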

  8. Likelihood-based methods for evaluating principal surrogacy in augmented vaccine trials.

    PubMed

    Liu, Wei; Zhang, Bo; Zhang, Hui; Zhang, Zhiwei

    2017-04-01

    There is growing interest in assessing immune biomarkers, which are quick to measure and potentially predictive of long-term efficacy, as surrogate endpoints in randomized, placebo-controlled vaccine trials. This can be done under a principal stratification approach, with principal strata defined using a subject's potential immune responses to vaccine and placebo (the latter may be assumed to be zero). In this context, principal surrogacy refers to the extent to which vaccine efficacy varies across principal strata. Because a placebo recipient's potential immune response to vaccine is unobserved in a standard vaccine trial, augmented vaccine trials have been proposed to produce the information needed to evaluate principal surrogacy. This article reviews existing methods based on an estimated likelihood and a pseudo-score (PS) and proposes two new methods based on a semiparametric likelihood (SL) and a pseudo-likelihood (PL) for analyzing augmented vaccine trials. Unlike the PS method, the SL method does not require a model for missingness, which can be advantageous when immune response data are missing by happenstance. The SL method is shown to be asymptotically efficient, and it performs similarly to the PS and PL methods in simulation experiments. The PL method appears to have a computational advantage over the PS and SL methods.

  9. Handwriting individualization using distance and rarity

    NASA Astrophysics Data System (ADS)

    Tang, Yi; Srihari, Sargur; Srinivasan, Harish

    2012-01-01

    Forensic individualization is the task of associating observed evidence with a specific source. The likelihood ratio (LR) is a quantitative measure that expresses the degree of uncertainty in individualization, where the numerator represents the likelihood that the evidence corresponds to the known and the denominator the likelihood that it does not correspond to the known. Since the number of parameters needed to compute the LR grows exponentially with the number of feature measurements, a commonly used simplification is the use of likelihoods based on distance (or similarity) given the two alternative hypotheses. This paper proposes an intermediate method which decomposes the LR as the product of two factors, one based on distance and the other on rarity. It was evaluated using a data set of handwriting samples, by determining whether two writing samples were written by the same or different writer(s). The accuracy of the distance-and-rarity method, as measured by error rates, is significantly better than that of the distance method.
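
    A hedged sketch of the two-factor decomposition: the distance factor compares the likelihood of the observed inter-sample distance under same-writer and different-writer models (estimated here by kernel density), and the rarity factor up-weights evidence whose characteristics are uncommon in the population. All distributions and numbers below are simulated stand-ins:

    ```python
    # Sketch of a distance-and-rarity likelihood ratio; not the paper's
    # exact formulation.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(7)
    d_same = rng.gamma(2.0, 0.5, 2000)   # distances between same-writer pairs
    d_diff = rng.gamma(6.0, 0.7, 2000)   # distances between different writers
    kde_same, kde_diff = gaussian_kde(d_same), gaussian_kde(d_diff)

    def likelihood_ratio(distance, rarity):
        # rarity = probability of the evidence's features in the population;
        # rarer features (smaller value) strengthen the identification.
        return (kde_same(distance)[0] / kde_diff(distance)[0]) / rarity

    print(likelihood_ratio(distance=1.2, rarity=0.05))   # common-ish features
    print(likelihood_ratio(distance=1.2, rarity=0.001))  # rare features
    ```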

  10. The evolutionary history of holometabolous insects inferred from transcriptome-based phylogeny and comprehensive morphological data.

    PubMed

    Peters, Ralph S; Meusemann, Karen; Petersen, Malte; Mayer, Christoph; Wilbrandt, Jeanne; Ziesmann, Tanja; Donath, Alexander; Kjer, Karl M; Aspöck, Ulrike; Aspöck, Horst; Aberer, Andre; Stamatakis, Alexandros; Friedrich, Frank; Hünefeld, Frank; Niehuis, Oliver; Beutel, Rolf G; Misof, Bernhard

    2014-03-20

    Despite considerable progress in systematics, a comprehensive scenario of the evolution of phenotypic characters in the mega-diverse Holometabola based on a solid phylogenetic hypothesis was still missing. We addressed this issue by de novo sequencing transcriptome libraries of representatives of all orders of holometabolan insects (13 species in total) and by using a previously published extensive morphological dataset. We tested competing phylogenetic hypotheses by analyzing various specifically designed sets of amino acid sequence data, using maximum likelihood (ML) based tree inference and Four-cluster Likelihood Mapping (FcLM). By maximum parsimony-based mapping of the morphological data on the phylogenetic relationships we traced evolutionary transformations at the phenotypic level and reconstructed the groundplan of Holometabola and of selected subgroups. In our analysis of the amino acid sequence data of 1,343 single-copy orthologous genes, Hymenoptera are placed as sister group to all remaining holometabolan orders, i.e., to a clade Aparaglossata, comprising two monophyletic subunits Mecopterida (Amphiesmenoptera + Antliophora) and Neuropteroidea (Neuropterida + Coleopterida). The monophyly of Coleopterida (Coleoptera and Strepsiptera) remains ambiguous in the analyses of the transcriptome data, but appears likely based on the morphological data. Highly supported relationships within Neuropterida and Antliophora are Raphidioptera + (Neuroptera + monophyletic Megaloptera), and Diptera + (Siphonaptera + Mecoptera). ML tree inference and FcLM yielded largely congruent results. However, FcLM, which was applied here for the first time to large phylogenomic supermatrices, displayed additional signal in the datasets that was not identified in the ML trees. Our phylogenetic results imply that an orthognathous larva belongs to the groundplan of Holometabola, with compound eyes and well-developed thoracic legs, externally feeding on plants or fungi. Ancestral larvae of Aparaglossata were prognathous, equipped with single larval eyes (stemmata), and possibly agile and predacious. Ancestral holometabolan adults likely resembled in their morphology the groundplan of adult neopteran insects. Within Aparaglossata, the adult's flight apparatus and ovipositor underwent strong modifications. We show that the combination of well-resolved phylogenies obtained by phylogenomic analyses and well-documented extensive morphological datasets is an appropriate basis for reconstructing complex morphological transformations and for the inference of evolutionary histories.

  11. A gas chromatographic method for the determination of bicarbonate and dissolved gases

    USDA-ARS?s Scientific Manuscript database

    A gas chromatographic method for the rapid determination of aqueous carbon dioxide and its speciation into solvated carbon dioxide and bicarbonate is presented. One-half mL samples are injected through a rubber septum into 20-mL vials that are filled with 9.5 mL of 0.1 N HCl. A one mL portion of the...

  12. Diagnostic value of tumor markers for lung adenocarcinoma-associated malignant pleural effusion: a validation study and meta-analysis.

    PubMed

    Feng, Mei; Zhu, Jing; Liang, Liqun; Zeng, Ni; Wu, Yanqiu; Wan, Chun; Shen, Yongchun; Wen, Fuqiang

    2017-04-01

    Pleural effusion is one of the most common complications of lung adenocarcinoma and is diagnostically challenging. This study aimed to investigate the diagnostic performance of carcinoembryonic antigen (CEA), cytokeratin fragment (CYFRA) 21-1, and cancer antigen (CA) 19-9 for lung adenocarcinoma-associated malignant pleural effusion (MPE) through a validation study and meta-analysis. Pleural effusion samples were collected from 81 lung adenocarcinoma-associated MPEs and 96 benign pleural effusions. CEA, CYFRA 21-1, and CA19-9 were measured by electrochemiluminescence immunoassay. The capacity of tumor markers was assessed with receiver operating characteristic curve analyses and the area under the curve (AUC) was calculated. Standard methods for meta-analysis of diagnostic studies were used to summarize the diagnostic performance of CEA, CYFRA 21-1, and CA19-9 for lung adenocarcinoma-associated MPE. The pleural levels of CEA, CYFRA 21-1, and CA19-9 were significantly increased in lung adenocarcinoma-associated MPE compared to benign pleural effusion. The cut-off points for CEA, CYFRA 21-1, and CA19-9 were optimally set at 4.55 ng/ml, 43.10 μg/ml, and 12.89 U/ml, and corresponding AUCs were 0.93, 0.85, and 0.81, respectively. The combination of CEA, CYFRA 21-1, and CA19-9 increased the sensitivity to 95.06%, with an AUC of 0.95. Eight studies were included in this meta-analysis. CEA showed the best diagnostic performance with pooled sensitivity, specificity, positive/negative likelihood ratio, and diagnostic odds ratio of 0.75, 0.96, 16.01, 0.23, and 81.49, respectively. The AUC was 0.93. CEA, CYFRA 21-1, and CA19-9 play a role in the diagnosis of lung adenocarcinoma-associated MPE. The combination of these tumor markers increases the diagnostic accuracy.
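
    The cut-off selection underlying such results can be sketched with one simulated marker: build the ROC curve from marker levels in benign and malignant groups, compute the AUC, and choose the threshold maximizing Youden's J (sensitivity + specificity - 1). The distributions below are invented and only mimic the group sizes of the validation study:

    ```python
    # Hedged sketch of ROC-based cut-off selection for a single tumor marker.
    import numpy as np
    from sklearn.metrics import roc_curve, auc

    rng = np.random.default_rng(8)
    benign = rng.lognormal(mean=0.5, sigma=0.6, size=96)     # marker, ng/ml
    malignant = rng.lognormal(mean=2.2, sigma=0.9, size=81)

    levels = np.concatenate([benign, malignant])
    labels = np.concatenate([np.zeros(96), np.ones(81)])
    fpr, tpr, thresholds = roc_curve(labels, levels)

    best = np.argmax(tpr - fpr)                              # Youden's J
    print(f"cut-off = {thresholds[best]:.2f} ng/ml, "
          f"sens = {tpr[best]:.2f}, spec = {1 - fpr[best]:.2f}, "
          f"AUC = {auc(fpr, tpr):.2f}")
    ```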

  13. Population pharmacokinetics and dosing regimen design of milrinone in preterm infants

    PubMed Central

    Paradisis, Mary; Jiang, Xuemin; McLachlan, Andrew J; Evans, Nick; Kluckow, Martin; Osborn, David

    2007-01-01

    Aims To define the pharmacokinetics of milrinone in very preterm infants and determine an optimal dose regimen to prevent low systemic blood flow in the first 12 h after birth. Methods A prospective open‐labelled, dose‐escalation pharmacokinetic study was undertaken in two stages. In stage one, infants received milrinone at 0.25 μg/kg/min (n = 8) and 0.5 μg/kg/min (n = 11) infused from 3 to 24 h of age. Infants contributed 4–5 blood samples for concentration–time data which were analysed using a population modelling approach. A simulation study was used to explore the optimal dosing regimen to achieve target milrinone concentrations (180–300 ng/ml). This milrinone regimen was evaluated in stage two (n = 10). Results Infants (n = 29) born before 29 weeks gestation were enrolled. Milrinone pharmacokinetics were described using a one‐compartment model with first‐order elimination rate, with a population mean clearance (CV%) of 35 ml/h (24%) and volume of distribution of 512 ml (21%) and estimated half‐life of 10 h. The 0.25 and 0.5 μg/kg/min dosage regimens did not achieve optimal milrinone concentration‐time profiles to prevent early low systemic blood flow. Simulation studies predicted a loading infusion (0.75 μg/kg/min for 3 h) followed by maintenance infusion (0.2 μg/kg/min until 18 h of age) would provide an optimal milrinone concentration profile. This was confirmed in stage two of the study. Conclusion Population pharmacokinetic modelling in the preterm infant has established an optimal dose regimen for milrinone that increases the likelihood of achieving therapeutic aims and highlights the importance of pharmacokinetic studies in neonatal clinical pharmacology. PMID:16690639
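
    The reported one-compartment estimates make the final regimen easy to simulate by superposition of constant-rate infusions; the sketch below uses the published clearance and volume, with an assumed 1-kg body weight and an infusion starting at 3 h of age, and is an illustration of the model rather than the population analysis itself:

    ```python
    # Illustrative one-compartment simulation of the loading + maintenance
    # regimen; CL = 35 ml/h and V = 512 ml are the reported estimates,
    # the 1-kg weight is an assumption.
    import numpy as np

    CL, V = 35.0, 512.0                    # ml/h, ml
    k = CL / V                             # first-order elimination, 1/h (~10 h half-life)
    t = np.linspace(0, 24, 481)            # hours after start of infusion

    def infusion(t, rate_ug_h, t_on, t_off):
        # Concentration (ug/ml) from a constant-rate infusion on [t_on, t_off],
        # by superposition of a positive and a delayed negative infusion.
        rise = lambda dt: (rate_ug_h / CL) * (1 - np.exp(-k * np.clip(dt, 0, None)))
        return rise(t - t_on) - rise(t - t_off)

    weight = 1.0                                    # kg (assumption)
    c = infusion(t, 0.75 * weight * 60, 0, 3)       # loading: 0.75 ug/kg/min, 3 h
    c += infusion(t, 0.20 * weight * 60, 3, 15)     # maintenance until 18 h of age
    c_ng_ml = 1000 * c

    window = (t >= 3) & (t <= 15)
    print(f"range over maintenance window: {c_ng_ml[window].min():.0f}-"
          f"{c_ng_ml[window].max():.0f} ng/ml (target 180-300)")
    ```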

  14. Prevalence of Propionibacterium acnes in Intervertebral Discs of Patients Undergoing Lumbar Microdiscectomy: A Prospective Cross-Sectional Study.

    PubMed

    Capoor, Manu N; Ruzicka, Filip; Machackova, Tana; Jancalek, Radim; Smrcka, Martin; Schmitz, Jonathan E; Hermanova, Marketa; Sana, Jiri; Michu, Elleni; Baird, John C; Ahmed, Fahad S; Maca, Karel; Lipina, Radim; Alamin, Todd F; Coscia, Michael F; Stonemetz, Jerry L; Witham, Timothy; Ehrlich, Garth D; Gokaslan, Ziya L; Mavrommatis, Konstantinos; Birkenmaier, Christof; Fischetti, Vincent A; Slaby, Ondrej

    2016-01-01

    The relationship between intervertebral disc degeneration and chronic infection by Propionibacterium acnes is controversial, with contradictory evidence available in the literature. Previous studies investigating these relationships were under-powered and fraught with methodological differences; moreover, they did not take into consideration P. acnes' ability to form biofilms or attempt to quantitate the bioburden with regard to determining bacterial counts/genome equivalents as criteria to differentiate true infection from contamination. The aim of this prospective cross-sectional study was to determine the prevalence of P. acnes in patients undergoing lumbar microdiscectomy. The sample consisted of 290 adult patients undergoing lumbar microdiscectomy for symptomatic lumbar disc herniation. An intraoperative biopsy and pre-operative clinical data were obtained in all cases. One biopsy fragment was homogenized and used for quantitative anaerobic culture, and a second was frozen and used for real-time PCR-based quantification of P. acnes genomes. P. acnes was identified in 115 cases (40%), coagulase-negative staphylococci in 31 cases (11%) and alpha-hemolytic streptococci in 8 cases (3%). P. acnes counts ranged from 100 to 9000 CFU/ml with a median of 400 CFU/ml. The prevalence of intervertebral discs with abundant P. acnes (≥1×10^3 CFU/ml) was 11% (39 cases). There was a significant correlation between the bacterial counts obtained by culture and the number of P. acnes genomes detected by real-time PCR (r = 0.4363, p < 0.0001). In this large series of patients, the prevalence of discs with abundant P. acnes was 11%. We believe disc tissue homogenization releases P. acnes from biofilms so that the bacteria can be cultured, reducing the rate of false-negative cultures. Further, the quantification results, which revealed a significant bioburden by both culture and real-time PCR, minimize the likelihood that the observed findings are due to contamination and support the hypothesis that P. acnes acts as a pathogen in these cases of degenerative disc disease.

  15. Which adherence measure - self-report, clinician recorded or pharmacy refill - is best able to predict detectable viral load in a public ART programme without routine plasma viral load monitoring?

    PubMed

    Mekuria, Legese A; Prins, Jan M; Yalew, Alemayehu W; Sprangers, Mirjam A G; Nieuwkerk, Pythia T

    2016-07-01

    Combination antiretroviral therapy (cART) suppresses viral replication to an undetectable level if a sufficiently high level of adherence is achieved. We investigated which adherence measurement best distinguishes between patients with and without detectable viral load in a public ART programme without routine plasma viral load monitoring. We randomly selected 870 patients who started cART between May 2009 and April 2012 in 10 healthcare facilities in Addis Ababa, Ethiopia. Six hundred and sixty-four (76.3%) patients who were retained in HIV care and were receiving cART for at least 6 months were included, and 642 had their plasma HIV-1 RNA concentration measured. Patients' adherence to cART was assessed according to self-report, clinician recorded and pharmacy refill measures. A multivariate logistic regression model was fitted to identify the predictors of detectable viremia. Model accuracy was evaluated by computing the area under the receiver operating characteristic (ROC) curve. A total of 9.2% and 5.5% of the 642 patients had a detectable viral load of ≥40 and ≥400 RNA copies/ml, respectively. In the multivariate analyses, younger age, lower CD4 cell count at cART initiation, being illiterate and widowed, and each of the adherence measures were significantly and independently predictive of having ≥400 RNA copies/ml. The ROC curve showed that these variables altogether had a likelihood of more than 80% to distinguish patients with a plasma viral load of ≥400 RNA copies/ml from those without. Adherence to cART was remarkably high. Self-report, clinician recorded and pharmacy refill non-adherence were all significantly predictive of detectable viremia. The choice for one of these methods to detect non-adherence and predict a detectable viral load can therefore be based on what is most practical in a particular setting. © 2016 John Wiley & Sons Ltd.

  16. Standardized volume rendering for magnetic resonance angiography measurements in the abdominal aorta.

    PubMed

    Persson, A; Brismar, T B; Lundström, C; Dahlström, N; Othberg, F; Smedby, O

    2006-03-01

    To compare three methods for standardizing volume rendering technique (VRT) protocols by studying aortic diameter measurements in magnetic resonance angiography (MRA) datasets. Datasets from 20 patients previously examined with gadolinium-enhanced MRA and with digital subtraction angiography (DSA) for abdominal aortic aneurysm were retrospectively evaluated by three independent readers. The MRA datasets were viewed using VRT with three different standardized transfer functions: the percentile method (Pc-VRT), the maximum-likelihood method (ML-VRT), and the partial range histogram method (PRH-VRT). The aortic diameters obtained with these three methods were compared with freely chosen VRT parameters (F-VRT) and with maximum intensity projection (MIP) concerning inter-reader variability and agreement with the reference method DSA. F-VRT parameters and PRH-VRT gave significantly higher diameter values than DSA, whereas Pc-VRT gave significantly lower values than DSA. The highest interobserver variability was found for F-VRT parameters and MIP, and the lowest for Pc-VRT and PRH-VRT. All standardized VRT methods were significantly superior to both MIP and F-VRT in this respect. The agreement with DSA was best for PRH-VRT, which was the only method with a mean error below 1 mm and which also had the narrowest limits of agreement (95% of cases between 2.1 mm below and 3.1 mm above DSA). All the standardized VRT methods compare favorably with MIP and VRT with freely selected parameters as regards interobserver variability. The partial range histogram method, although systematically overestimating vessel diameters, gives results closest to those of DSA.

  17. Soluble CD30 in patients with antibody-mediated rejection of the kidney allograft.

    PubMed

    Slavcev, Antonij; Honsova, Eva; Lodererova, Alena; Pavlova, Yelena; Sajdlova, Helena; Vitko, Stefan; Skibova, Jelena; Striz, Ilja; Viklicky, Ondrej

    2007-07-01

    The aim of our retrospective study was to evaluate the clinical significance of measurement of the soluble CD30 (sCD30) molecule for the prediction of antibody-mediated (humoral) rejection (HR). Sixty-two kidney transplant recipients (thirty-one C4d-positive and thirty-one C4d-negative patients) were included in the study. Soluble CD30 levels were evaluated before transplantation and during periods of graft function deterioration. The median concentrations of the sCD30 molecule were identical in C4d-positive and C4d-negative patients before and after transplantation (65.5 vs. 65.0 and 28.2 vs. 36.0 U/ml, respectively). C4d+ patients who developed DSA de novo tended to have higher sCD30 levels before transplantation (80.7±53.6 U/ml, n=8) compared with C4d-negative patients (65.0±33.4 U/ml, n=15). Soluble CD30 levels were classified as positive or negative (≥100 U/ml and <100 U/ml, respectively), and the sensitivity, specificity and accuracy of sCD30 estimation with regard to finding C4d deposits in peritubular capillaries were determined. The sensitivity of sCD30+ testing was generally below 40%, while the specificity of the test, i.e. the likelihood that if sCD30 testing is negative, C4d deposits would be absent, was 82%. C4d+ patients who developed DSA de novo were evaluated separately; the specificity of sCD30 testing for the incidence of HR in this cohort was 86%. We could not confirm in our study that high sCD30 levels (≥100 U/ml) are predictive of the incidence of HR. Negative sCD30 values might, however, be helpful for identifying patients at low risk of developing DSA and antibody-mediated rejection.

  18. Influence function for robust phylogenetic reconstructions.

    PubMed

    Bar-Hen, Avner; Mariadassou, Mahendra; Poursat, Marie-Anne; Vandenkoornhuyse, Philippe

    2008-05-01

    Based on the computation of the influence function, a tool to measure the impact of each piece of sampled data on the statistical inference of a parameter, we propose to analyze the support of the maximum-likelihood (ML) tree for each site. We provide a new tool for filtering data sets (nucleotides, amino acids, and others) in the context of ML phylogenetic reconstructions. Because different sites support different phylogenic topologies in different ways, outlier sites, that is, sites with a very negative influence value, are important: they can drastically change the topology resulting from the statistical inference. Therefore, these outlier sites must be clearly identified and their effects accounted for before drawing biological conclusions from the inferred tree. A matrix containing 158 fungal terminals all belonging to Chytridiomycota, Zygomycota, and Glomeromycota is analyzed. We show that removing the strongest outlier from the analysis strikingly modifies the ML topology, with a loss of as many as 20% of the internal nodes. As a result, estimating the topology on the filtered data set results in a topology with enhanced bootstrap support. From this analysis, the polyphyletic status of the fungal phyla Chytridiomycota and Zygomycota is reinforced, suggesting the necessity of revisiting the systematics of these fungal groups. We show the ability of influence function to produce new evolution hypotheses.

  19. Does the choice of nucleotide substitution models matter topologically?

    PubMed

    Hoff, Michael; Orf, Stefan; Riehm, Benedikt; Darriba, Diego; Stamatakis, Alexandros

    2016-03-24

    In the context of a master-level programming practical at the computer science department of the Karlsruhe Institute of Technology, we developed and make available an open-source code for testing all 203 possible nucleotide substitution models in the maximum likelihood (ML) setting under the common Akaike, corrected Akaike, and Bayesian information criteria. We address the question of whether model selection matters topologically, that is, whether conducting ML inferences under the optimal model, instead of a standard General Time Reversible model, yields different tree topologies. We also assess to which degree models selected and trees inferred under the three standard criteria (AIC, AICc, BIC) differ. Finally, we assess whether the definition of the sample size (#sites versus #sites × #taxa) yields different models and, as a consequence, different tree topologies. We find that all three factors (by order of impact: nucleotide model selection, information criterion used, sample size definition) can yield topologically substantially different final tree topologies (topological difference exceeding 10%) for approximately 5% of the tree inferences conducted on the 39 empirical datasets used in our study. We find that using the best-fit nucleotide substitution model may change the final ML tree topology compared with an inference under a default GTR model. The effect is less pronounced when comparing distinct information criteria. Nonetheless, in some cases we did obtain substantial topological differences.
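
    The three selection criteria are simple functions of a model's maximized log-likelihood, the number of free parameters k, and the sample size n; the sample-size ambiguity the paper examines enters through n in AICc and BIC. The log-likelihood values below are invented:

    ```python
    # The three information criteria used in the study, from first principles.
    import math

    def aic(loglik, k):
        return 2 * k - 2 * loglik

    def aicc(loglik, k, n):
        # Small-sample correction; n is where the sample-size definition matters.
        return aic(loglik, k) + 2 * k * (k + 1) / (n - k - 1)

    def bic(loglik, k, n):
        return k * math.log(n) - 2 * loglik

    # Toy comparison of two models with invented log-likelihoods.
    print(aic(-12345.6, 8), aicc(-12345.6, 8, 1500), bic(-12345.6, 8, 1500))
    print(aic(-12350.1, 5), aicc(-12350.1, 5, 1500), bic(-12350.1, 5, 1500))
    ```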

  20. The influence of fatness on the likelihood of early-winter pregnancy in muskoxen (Ovibos moschatus).

    PubMed

    Adamczewski, J Z; Fargey, P J; Laarveld, B; Gunn, A; Flood, P F

    1998-09-01

    Among wild ruminants, muskoxen have an exceptional ability to fatten, but their pregnancy rates are variable and often low. To test whether the likelihood of pregnancy in muskoxen is associated with exceptionally good body condition, we used logistic regression analysis with data from 32 pregnant and 18 nonpregnant muskoxen ≥1.5 yr of age shot in November (1989 to 1992) on Victoria Island in Arctic Canada. We assayed their serum for insulin-like growth factor-1 (IGF-1). All fatness and mass measures were positively related to the likelihood of pregnancy (P < 0.001), with the strongest associations for estimated total fat mass (80% of outcomes predicted correctly) and kidney fat mass (77%), and weaker models for body mass. Pregnancy was less likely to occur in lactating females than in nonlactating ones (P = 0.03). Although IGF-1 concentrations were higher (P = 0.001) in nonlactating females than in lactating ones (28.7 ± 1.7 vs. 22.5 ng/ml), no association with pregnancy was detected (P = 0.57). Fatness associated with a 50% probability of pregnancy in muskoxen (22% of ingesta-free body mass or 32 kg fat in females > 3.5 yr old) is much higher than in caribou and somewhat higher than in cattle, and this may partly account for the low calving rates often observed in this species.

  1. Assessing compatibility of direct detection data: halo-independent global likelihood analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.

    2016-10-18

    We present two different halo-independent methods to assess the compatibility of several direct dark matter detection data sets for a given dark matter model using a global likelihood consisting of at least one extended likelihood and an arbitrary number of Gaussian or Poisson likelihoods. In the first method we find the global best fit halo function (we prove that it is a unique piecewise constant function with a number of down steps smaller than or equal to a maximum number that we compute) and construct a two-sided pointwise confidence band at any desired confidence level, which can then be compared with those derived from the extended likelihood alone to assess the joint compatibility of the data. In the second method we define a "constrained parameter goodness-of-fit" test statistic, whose p-value we then use to define a "plausibility region" (e.g. where p≥10%). For any halo function not entirely contained within the plausibility region, the level of compatibility of the data is very low (e.g. p<10%). We illustrate these methods by applying them to CDMS-II-Si and SuperCDMS data, assuming dark matter particles with elastic spin-independent isospin-conserving interactions or exothermic spin-independent isospin-violating interactions.

  2. Five methods of breast volume measurement: a comparative study of measurements of specimen volume in 30 mastectomy cases.

    PubMed

    Kayar, Ragip; Civelek, Serdar; Cobanoglu, Murat; Gungor, Osman; Catal, Hidayet; Emiroglu, Mustafa

    2011-03-27

    To compare breast volume measurement techniques in terms of accuracy, convenience, and cost. Breast volumes of 30 patients who were scheduled to undergo total mastectomy surgery were measured preoperatively by using five different methods (mammography, anatomic [anthropometric], thermoplastic casting, the Archimedes procedure, and the Grossman-Roudner device). Specimen volume after total mastectomy was measured in each patient with the water displacement method (Archimedes). The results were compared statistically with the values obtained by the five different methods. The mean mastectomy specimen volume was 623.5 (range 150-1490) mL. The breast volume values were established to be 615.7 mL (r = 0.997) with the mammographic method, 645.4 mL (r = 0.975) with the anthropometric method, 565.8 mL (r = 0.934) with the Grossman-Roudner device, 583.2 mL (r = 0.989) with the Archimedes procedure, and 544.7 mL (r = 0.94) with the casting technique. Examination of r values revealed that the most accurate method was mammography for all volume ranges, followed by the Archimedes method. The present study demonstrated that the most accurate method of breast volume measurement is mammography, followed by the Archimedes method. However, when patient comfort, ease of application, and cost were taken into consideration, the Grossman-Roudner device and anatomic measurement were relatively less expensive, and easier methods with an acceptable degree of accuracy.

  3. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

    The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum Chi-square estimation procedure produces unbiased…

  4. A Penalized Likelihood Framework For High-Dimensional Phylogenetic Comparative Methods And An Application To New-World Monkeys Brain Evolution.

    PubMed

    Clavel, Julien; Aristide, Leandro; Morlon, Hélène

    2018-06-19

    Working with high-dimensional phylogenetic comparative datasets is challenging because likelihood-based multivariate methods suffer from poor statistical performance as the number of traits p approaches the number of species n, and because computational complications occur when p exceeds n. Alternative phylogenetic comparative methods have recently been proposed to deal with the large-p, small-n scenario, but their use and performance are limited. Here we develop a penalized likelihood framework to deal with high-dimensional comparative datasets. We propose various penalizations and methods for selecting the intensity of the penalties. We apply this general framework to the estimation of parameters (the evolutionary trait covariance matrix and parameters of the evolutionary model) and to model comparison for the high-dimensional multivariate Brownian (BM), Early-burst (EB), Ornstein-Uhlenbeck (OU), and Pagel's lambda models. We show using simulations that our penalized likelihood approach dramatically improves the estimation of evolutionary trait covariance matrices and model parameters when p approaches n, and allows for their accurate estimation when p equals or exceeds n. In addition, we show that penalized likelihood models can be efficiently compared using the Generalized Information Criterion (GIC). We implement these methods, as well as the related estimation of ancestral states and the computation of phylogenetic PCA, in the R packages RPANDA and mvMORPH. Finally, we illustrate the utility of the proposed framework by evaluating evolutionary model fit, analyzing integration patterns, and reconstructing evolutionary trajectories for a high-dimensional 3-D dataset of brain shape in the New World monkeys. We find clear support for an Early-burst model, suggesting an early diversification of brain morphology during the ecological radiation of the clade. Penalized likelihood offers an efficient way to deal with high-dimensional multivariate comparative data.
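
    One concrete penalization in the spirit of the framework above is linear (ridge-like) shrinkage of the trait covariance matrix, which stays positive definite even when the number of traits p exceeds the number of species n. A hedged sketch (the fixed tuning value gamma is illustrative; in practice it would be chosen by the penalized likelihood or cross-validation the abstract describes, and the phylogenetic structure is ignored here for brevity):

        import numpy as np

        rng = np.random.default_rng(8)
        n, p = 20, 50                               # fewer species than traits
        traits = rng.normal(size=(n, p))

        S = np.cov(traits.T, bias=True)             # sample covariance, singular when p > n
        gamma = 0.3                                 # shrinkage intensity (illustrative)
        S_pen = (1 - gamma) * S + gamma * np.diag(np.diag(S))   # shrink toward the diagonal

        print("rank of S:", np.linalg.matrix_rank(S))
        print("penalized estimate positive definite:",
              bool(np.all(np.linalg.eigvalsh(S_pen) > 0)))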

  5. Diagnostic value of soluble B7-H4 and carcinoembryonic antigen in distinguishing malignant from benign pleural effusion.

    PubMed

    Jing, Xiaogang; Wei, Fei; Li, Jing; Dai, Lingling; Wang, Xi; Jia, Liuqun; Wang, Huan; An, Lin; Yang, Yuanjian; Zhang, Guojun; Cheng, Zhe

    2018-03-01

    To explore the diagnostic value of joint detection of soluble B7-H4 (sB7-H4) and carcinoembryonic antigen (CEA) in identifying malignant pleural effusion (MPE) from benign pleural effusion (BPE). A total of 97 patients with pleural effusion specimens were enrolled from The First Affiliated Hospital of Zhengzhou University between June 2014 and December 2015. All cases were categorized into a malignant pleural effusion group (n = 55) and a benign pleural effusion group (n = 42) according to etiologies. Enzyme-linked immunosorbent assay was applied to examine the levels of sB7-H4 in pleural effusion, and CEA concentrations were detected by electro-chemiluminescence immunoassays. Receiver operating characteristic (ROC) curves were established to assess the diagnostic value of sB7-H4 and CEA in pleural effusion. The correlation between sB7-H4 and CEA levels was analyzed by Pearson's product-moment correlation. The concentrations of sB7-H4 and CEA in MPE were significantly higher than those in BPE ([60.08 ± 35.04] vs. [27.26 ± 9.55] ng/ml, P = .000; [41.49 ± 37.16] vs. [2.41 ± 0.94] ng/ml, P = .000). The area under the ROC curve (AUC) was 0.884 for sB7-H4 and 0.954 for CEA. ROC curve analysis yielded cutoff values of 36.5 ng/ml for sB7-H4 and 4.18 ng/ml for CEA, with corresponding sensitivity (81.82%, 87.28%), specificity (90.48%, 95.24%), accuracy (85.57%, 90.72%), positive predictive value (PPV) (91.84%, 96.0%), negative predictive value (NPV) (79.17%, 85.11%), positive likelihood ratio (PLR) (8.614, 18.327), and negative likelihood ratio (NLR) (0.201, 0.134). When sB7-H4 and CEA were combined, detection achieved a higher sensitivity of 90.91% and specificity of 97.62%. Furthermore, correlation analysis showed that the level of sB7-H4 was correlated with the CEA level (r = .770, P = .000). sB7-H4 is a potentially valuable tumor marker for the differentiation between BPE and MPE. The combined detection of sB7-H4 and CEA could improve the diagnostic sensitivity and specificity for MPE. © 2017 John Wiley & Sons Ltd.
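
    The cutoff-plus-operating-characteristics workflow above can be reproduced generically: sweep candidate cutoffs, compute sensitivity and specificity at each, pick the cutoff maximizing the Youden index, and obtain the AUC as a rank statistic. A sketch with synthetic marker levels (the group means and sizes are rough stand-ins for the reported figures, not the study's data):

        import numpy as np

        rng = np.random.default_rng(1)
        mpe = rng.normal(60, 35, 55).clip(min=1)    # marker levels, malignant group (synthetic)
        bpe = rng.normal(27, 10, 42).clip(min=1)    # marker levels, benign group (synthetic)

        cutoffs = np.unique(np.concatenate([mpe, bpe]))
        sens = np.array([(mpe >= c).mean() for c in cutoffs])
        spec = np.array([(bpe < c).mean() for c in cutoffs])
        best = cutoffs[(sens + spec - 1).argmax()]  # Youden-optimal cutoff

        # AUC as the Mann-Whitney probability that a malignant case outscores a benign one
        auc = (mpe[:, None] > bpe[None, :]).mean()
        print(f"best cutoff = {best:.1f} ng/ml, AUC = {auc:.3f}")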

  6. Development and Validation of HPLC Method for Determination of Crocetin, a constituent of Saffron, in Human Serum Samples

    PubMed Central

    Mohammadpour, Amir Hooshang; Ramezani, Mohammad; Tavakoli Anaraki, Nasim; Malaekeh-Nikouei, Bizhan; Amel Farzad, Sara; Hosseinzadeh, Hossein

    2013-01-01

    Objective(s): The present study reports the development and validation of a sensitive and rapid extraction method alongside a high-performance liquid chromatographic method for the determination of crocetin in human serum. Materials and Methods: The HPLC method was carried out by using a C18 reversed-phase column and a mobile phase composed of methanol/water/acetic acid (85:14.5:0.5 v/v/v) at a flow rate of 0.8 ml/min. The UV detector was set at 423 nm and 13-cis retinoic acid was used as the internal standard. Serum samples were pretreated with solid-phase extraction using Bond Elut C18 (200 mg) cartridges or with direct precipitation using acetonitrile. Results: The calibration curves were linear over the range of 0.05-1.25 µg/ml for the direct precipitation method and 0.5-5 µg/ml for solid-phase extraction. The mean recoveries of crocetin over a concentration range of 0.05-5 µg/ml serum for the direct precipitation method and 0.5-5 µg/ml for solid-phase extraction were above 70% and 60%, respectively. The intraday coefficients of variation were 0.37-2.6% for the direct precipitation method and 0.64-5.43% for solid-phase extraction. The interday coefficients of variation were 1.69-6.03% for the direct precipitation method and 5.13-12.74% for solid-phase extraction. The lower limit of quantification for crocetin was 0.05 µg/ml for the direct precipitation method and 0.5 µg/ml for solid-phase extraction. Conclusion: The validated direct precipitation method for HPLC satisfied all of the criteria necessary for a bioanalytical method and could reliably quantitate crocetin in human serum for future clinical pharmacokinetic studies. PMID:23638292
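
    As an aside on how such calibration figures are commonly derived: fitting a straight line to the standards gives the slope and intercept, and one widespread (ICH-style) convention sets LOD = 3.3*sigma/S and LOQ = 10*sigma/S, with sigma the residual standard deviation and S the slope. The sketch below uses invented standard concentrations and responses; note the study above instead reports its lower limit as the lowest validated calibration level, a different and equally common convention:

        import numpy as np

        conc = np.array([0.05, 0.25, 0.50, 0.75, 1.00, 1.25])        # ug/ml (hypothetical)
        resp = np.array([0.021, 0.103, 0.198, 0.307, 0.402, 0.509])  # detector response (hypothetical)

        slope, intercept = np.polyfit(conc, resp, 1)   # calibration line
        resid = resp - (slope * conc + intercept)
        sigma = resid.std(ddof=2)                      # residual SD, 2 fitted parameters

        print(f"slope={slope:.4f}, intercept={intercept:.4f}")
        print(f"LOD={3.3 * sigma / slope:.3f} ug/ml, LOQ={10 * sigma / slope:.3f} ug/ml")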

  7. On the existence of maximum likelihood estimates for presence-only data

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.

    2015-01-01

    It is important to identify conditions for which maximum likelihood estimates are unlikely to be identifiable from presence-only data. In data sets where the maximum likelihood estimates do not exist, penalized likelihood and Bayesian methods will produce coefficient estimates, but these are sensitive to the choice of estimation procedure and prior or penalty term. When sample size is small or it is thought that habitat preferences are strong, we propose a suite of estimation procedures researchers can consider using.

  8. Likelihood-based modification of experimental crystal structure electron density maps

    DOEpatents

    Terwilliger, Thomas C [Santa Fe, NM]

    2005-04-16

    A maximum-likelihood method improves an electron density map of an experimental crystal structure. A likelihood of a set of structure factors {F_h} is formed for the experimental crystal structure as (1) the likelihood of having obtained an observed set of structure factors {F_h^OBS} if structure factor set {F_h} was correct, and (2) the likelihood that an electron density map resulting from {F_h} is consistent with selected prior knowledge about the experimental crystal structure. The set of structure factors {F_h} is then adjusted to maximize the likelihood of {F_h} for the experimental crystal structure. An improved electron density map is constructed with the maximized structure factors.

  9. Population Synthesis of Radio and Gamma-ray Pulsars using the Maximum Likelihood Approach

    NASA Astrophysics Data System (ADS)

    Billman, Caleb; Gonthier, P. L.; Harding, A. K.

    2012-01-01

    We present the results of a pulsar population synthesis of normal pulsars from the Galactic disk using a maximum likelihood method. We seek to maximize the likelihood of a set of parameters in a Monte Carlo population statistics code to better understand their uncertainties and the confidence region of the model's parameter space. The maximum likelihood method allows for the use of more applicable Poisson statistics in the comparison of distributions of small numbers of detected gamma-ray and radio pulsars. Our code simulates pulsars at birth using Monte Carlo techniques and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and gamma-ray emission characteristics. We select measured distributions of radio pulsars from the Parkes Multibeam survey and Fermi gamma-ray pulsars to perform a likelihood analysis of the assumed model parameters such as initial period and magnetic field, and radio luminosity. We present the results of a grid search of the parameter space as well as a search for the maximum likelihood using a Markov Chain Monte Carlo method. We express our gratitude for the generous support of the Michigan Space Grant Consortium, of the National Science Foundation (REU and RUI), the NASA Astrophysics Theory and Fundamental Program and the NASA Fermi Guest Investigator Program.

  10. Coalescent-based species tree inference from gene tree topologies under incomplete lineage sorting by maximum likelihood.

    PubMed

    Wu, Yufeng

    2012-03-01

    Incomplete lineage sorting can cause incongruence between the phylogenetic history of genes (the gene tree) and that of the species (the species tree), which can complicate the inference of phylogenies. In this article, I present a new coalescent-based algorithm for species tree inference with maximum likelihood. I first describe an improved method for computing the probability of a gene tree topology given a species tree, which is much faster than an existing algorithm by Degnan and Salter (2005). Based on this method, I develop a practical algorithm that takes a set of gene tree topologies and infers species trees with maximum likelihood. This algorithm searches for the best species tree by starting from initial species trees and performing heuristic search to obtain better trees with higher likelihood. This algorithm, called STELLS (which stands for Species Tree InfErence with Likelihood for Lineage Sorting), has been implemented in a program that is downloadable from the author's web page. The simulation results show that the STELLS algorithm is more accurate than an existing maximum likelihood method for many datasets, especially when there is noise in gene trees. I also show that the STELLS algorithm is efficient and can be applied to real biological datasets. © 2011 The Author. Evolution© 2011 The Society for the Study of Evolution.

  11. Modeling of 2D diffusion processes based on microscopy data: parameter estimation and practical identifiability analysis.

    PubMed

    Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J

    2013-01-01

    Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Since recently, the spatial gradients caused by diffusion can be assessed in-vitro and in-vivo using microscopy based imaging techniques. The resulting time-series of two dimensional, high-resolutions images in combination with mechanistic models enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties of the model parameters. We introduce a likelihood function for image-based measurements with log-normal distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example for haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data as well as the proposed identifiability analysis approach is widely applicable to diffusion processes. The profile likelihood based method provides more rigorous uncertainty bounds in contrast to local approximation methods.
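
    The profile likelihood idea is simple to state: fix the parameter of interest, maximize the likelihood over all remaining parameters, and repeat across a grid; an approximate 95% interval is where the profile stays within chi2(1)/2 = 1.92 of its maximum. A toy sketch with a Gaussian model standing in for the PDE model (mu is the parameter of interest, sigma the nuisance parameter; all numbers synthetic):

        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(2)
        data = rng.normal(3.0, 1.5, 100)

        def neg_loglik(mu, sigma):
            return 0.5 * np.sum(((data - mu) / sigma) ** 2) + data.size * np.log(sigma)

        def profile(mu):                      # maximize over the nuisance parameter
            res = minimize_scalar(lambda s: neg_loglik(mu, s),
                                  bounds=(1e-3, 10), method="bounded")
            return -res.fun

        mus = np.linspace(2.0, 4.0, 41)
        prof = np.array([profile(m) for m in mus])
        ci = mus[prof >= prof.max() - 1.92]   # chi-square cutoff for one parameter
        print(f"profile-likelihood 95% CI for mu: [{ci.min():.2f}, {ci.max():.2f}]")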

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    The purpose of the computer program is to generate system matrices that model the data acquisition process in dynamic single photon emission computed tomography (SPECT). The application is the reconstruction of dynamic data from projection measurements that provide the time evolution of activity uptake and washout in an organ of interest. Measurements of the time activity in the blood and organ tissue provide time-activity curves (TACs) that are used to estimate kinetic parameters. The program provides a correct model of the in vivo spatial and temporal distribution of radioactivity in organs. The model accounts for the attenuation of the internally emitted radioactivity, accounts for the varying point response of the collimators, and correctly models the time variation of the activity in the organs. One important application of the software is measuring the arterial input function (AIF) in a dynamic SPECT study where the data are acquired from a slow camera rotation. Measurement of the AIF is essential to deriving quantitative estimates of regional myocardial blood flow using kinetic models. A study was performed to evaluate whether a slowly rotating SPECT system could provide accurate AIFs for myocardial perfusion imaging (MPI). Methods: Dynamic cardiac SPECT was first performed in human subjects at rest using a Philips Precedence SPECT/CT scanner. Dynamic measurements of Tc-99m-tetrofosmin in the myocardium were obtained using an infusion time of 2 minutes. Blood input, myocardium tissue, and liver TACs were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K1. Results: The spatiotemporal 4D ML-EM reconstructions gave more accurate reconstructions than did standard frame-by-frame 3D ML-EM reconstructions. From additional computer simulations and phantom studies, it was determined that a 1 minute infusion with a SPECT system rotation speed providing 180 degrees of projection data every 54 s can produce measurements of blood pool and myocardial TACs. This has important application in the calculation of coronary flow reserve using rest/stress dynamic cardiac SPECT. The system matrices are used in maximum likelihood and maximum a posteriori formulations in estimation theory, where through iterative algorithms (conjugate gradient, expectation maximization, or maximum a posteriori probability algorithms) the solution is determined that maximizes a likelihood or a posteriori probability function.
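
    The maximum-likelihood reconstruction referred to at the end is typically the ML-EM algorithm, whose multiplicative update is x <- (x / A^T 1) * A^T(y / Ax) for system matrix A and Poisson data y. A small self-contained sketch with a random stand-in system matrix (a real SPECT matrix would encode the attenuation and collimator response described above):

        import numpy as np

        rng = np.random.default_rng(3)
        A = rng.random((50, 20))             # 50 detector bins x 20 voxels (stand-in)
        x_true = rng.random(20) * 10
        y = rng.poisson(A @ x_true)          # Poisson projection data

        x = np.ones(20)                      # uniform initial image
        sens = A.sum(axis=0)                 # sensitivity image A^T 1
        for _ in range(100):
            proj = A @ x
            ratio = np.divide(y, proj, out=np.zeros_like(proj), where=proj > 0)
            x *= (A.T @ ratio) / sens        # ML-EM multiplicative update

        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))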

  13. Phylogeny and divergence of the pinnipeds (Carnivora: Mammalia) assessed using a multigene dataset

    PubMed Central

    Higdon, Jeff W; Bininda-Emonds, Olaf RP; Beck, Robin MD; Ferguson, Steven H

    2007-01-01

    Background Phylogenetic comparative methods are often improved by complete phylogenies with meaningful branch lengths (e.g., divergence dates). This study presents a dated molecular supertree for all 34 world pinniped species derived from a weighted matrix representation with parsimony (MRP) supertree analysis of 50 gene trees, each determined under a maximum likelihood (ML) framework. Divergence times were determined by mapping the same sequence data (plus two additional genes) on to the supertree topology and calibrating the ML branch lengths against a range of fossil calibrations. We assessed the sensitivity of our supertree topology in two ways: 1) a second supertree with all mtDNA genes combined into a single source tree, and 2) likelihood-based supermatrix analyses. Divergence dates were also calculated using a Bayesian relaxed molecular clock with rate autocorrelation to test the sensitivity of our supertree results further. Results The resulting phylogenies all agreed broadly with recent molecular studies, in particular supporting the monophyly of Phocidae, Otariidae, and the two phocid subfamilies, as well as an Odobenidae + Otariidae sister relationship; areas of disagreement were limited to four more poorly supported regions. Neither the supertree nor supermatrix analyses supported the monophyly of the two traditional otariid subfamilies, supporting suggestions for the need for taxonomic revision in this group. Phocid relationships were similar to other recent studies and deeper branches were generally well-resolved. Halichoerus grypus was nested within a paraphyletic Pusa, although relationships within Phocina tend to be poorly supported. Divergence date estimates for the supertree were in good agreement with other studies and the available fossil record; however, the Bayesian relaxed molecular clock divergence date estimates were significantly older. Conclusion Our results join other recent studies and highlight the need for a re-evaluation of pinniped taxonomy, especially as regards the subfamilial classification of otariids and the generic nomenclature of Phocina. Even with the recent publication of new sequence data, the available genetic sequence information for several species, particularly those in Arctocephalus, remains very limited, especially for nuclear markers. However, resolution of parts of the tree will probably remain difficult, even with additional data, due to apparent rapid radiations. Our study addresses the lack of a recent pinniped phylogeny that includes all species and robust divergence dates for all nodes, and will therefore prove indispensable to comparative and macroevolutionary studies of this group of carnivores. PMID:17996107

  14. Effect of radiance-to-reflectance transformation and atmosphere removal on maximum likelihood classification accuracy of high-dimensional remote sensing data

    NASA Technical Reports Server (NTRS)

    Hoffbeck, Joseph P.; Landgrebe, David A.

    1994-01-01

    Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
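
    The invariance claim is easy to verify numerically: transforming all data by z = Mx + b changes each class covariance to MCM^T and each mean to Mm + b, leaving Mahalanobis distances unchanged and shifting the log-determinant term by the same constant for every class, so the arg-max class assignment is untouched. A sketch of that check (equal priors assumed; all numbers synthetic):

        import numpy as np

        rng = np.random.default_rng(4)
        X1 = rng.multivariate_normal([0, 0], [[2, 0.5], [0.5, 1]], 100)
        X2 = rng.multivariate_normal([2, 1], [[1, -0.3], [-0.3, 1.5]], 100)
        pts = rng.normal(1, 1, (50, 2))

        def ml_labels(A, B, pts):
            scores = []
            for S in (A, B):                      # Gaussian ML discriminant per class
                m, C = S.mean(0), np.cov(S.T)
                d = pts - m
                Ci = np.linalg.inv(C)
                scores.append(-0.5 * np.einsum('ij,jk,ik->i', d, Ci, d)
                              - 0.5 * np.log(np.linalg.det(C)))
            return np.argmax(scores, axis=0)

        M = np.array([[3.0, 1.0], [0.5, 2.0]])    # non-singular linear part
        b = np.array([10.0, -5.0])                # offset
        t = lambda Z: Z @ M.T + b                 # affine transform

        print("labels identical after affine transform:",
              np.array_equal(ml_labels(X1, X2, pts), ml_labels(t(X1), t(X2), t(pts))))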

  15. Reconstructing the evolutionary history of the Lorisidae using morphological, molecular, and geological data.

    PubMed

    Masters, J C; Anthony, N M; de Wit, M J; Mitchell, A

    2005-08-01

    Major aspects of lorisid phylogeny and systematics remain unresolved, despite several studies (involving morphology, histology, karyology, immunology, and DNA sequencing) aimed at elucidating them. Our study is the first to investigate the evolution of this enigmatic group using molecular and morphological data for all four well-established genera: Arctocebus, Loris, Nycticebus, and Perodicticus. Data sets consisting of 386 bp of 12S rRNA, 535 bp of 16S rRNA, and 36 craniodental characters were analyzed separately and in combination, using maximum parsimony and maximum likelihood. Outgroups, consisting of two galagid taxa (Otolemur and Galagoides) and a lemuroid (Microcebus), were also varied. The morphological data set yielded a paraphyletic lorisid clade with the robust Nycticebus and Perodicticus grouped as sister taxa, and the galagids allied with Arctocebus. All molecular analyses (maximum parsimony [MP] or maximum likelihood [ML]) that included Microcebus as an outgroup rendered a paraphyletic lorisid clade, with one exception: the 12S + 16S data set analyzed with ML. The position of the galagids in these paraphyletic topologies was inconsistent, however, and bootstrap values were low. Exclusion of Microcebus generated a monophyletic Lorisidae with Asian and African subclades; bootstrap values for all three clades in the total evidence tree were over 90%. We estimated mean genetic distances for lemuroids vs. lorisoids, lorisids vs. galagids, and Asian vs. African lorisids as a guide to relative divergence times. We present information regarding a temporary land bridge that linked the two now widely separated regions inhabited by lorisids, which may explain their distribution. Finally, we make taxonomic recommendations based on our results. (c) 2005 Wiley-Liss, Inc.

  16. Plastid phylogenomics of the cool-season grass subfamily: clarification of relationships among early-diverging tribes

    PubMed Central

    Saarela, Jeffery M.; Wysocki, William P.; Barrett, Craig F.; Soreng, Robert J.; Davis, Jerrold I.; Clark, Lynn G.; Kelchner, Scot A.; Pires, J. Chris; Edger, Patrick P.; Mayfield, Dustin R.; Duvall, Melvin R.

    2015-01-01

    Whole plastid genomes are being sequenced rapidly from across the green plant tree of life, and phylogenetic analyses of these are increasing resolution and support for relationships that have varied among or been unresolved in earlier single- and multi-gene studies. Pooideae, the cool-season grass lineage, is the largest of the 12 grass subfamilies and includes important temperate cereals, turf grasses and forage species. Although numerous studies of the phylogeny of the subfamily have been undertaken, relationships among some ‘early-diverging’ tribes conflict among studies, and some relationships among subtribes of Poeae have not yet been resolved. To address these issues, we newly sequenced 25 whole plastomes, which showed rearrangements typical of Poaceae. These plastomes represent 9 tribes and 11 subtribes of Pooideae, and were analysed with 20 existing plastomes for the subfamily. Maximum likelihood (ML), maximum parsimony (MP) and Bayesian inference (BI) robustly resolve most deep relationships in the subfamily. Complete plastome data provide increased nodal support compared with protein-coding data alone at nodes that are not maximally supported. Following the divergence of Brachyelytrum, Phaenospermateae, Brylkinieae–Meliceae and Ampelodesmeae–Stipeae are the successive sister groups of the rest of the subfamily. Ampelodesmeae are nested within Stipeae in the plastome trees, consistent with its hybrid origin between a phaenospermatoid and a stipoid grass (the maternal parent). The core Pooideae are strongly supported and include Brachypodieae, a Bromeae–Triticeae clade and Poeae. Within Poeae, a novel sister group relationship between Phalaridinae and Torreyochloinae is found, and the relative branching order of this clade and Aveninae, with respect to an Agrostidinae–Brizinae clade, are discordant between MP and ML/BI trees. Maximum likelihood and Bayesian analyses strongly support Airinae and Holcinae as the successive sister groups of a Dactylidinae–Loliinae clade. PMID:25940204

  17. Development and Validation of HPLC Method for Determination of Crocetin, a constituent of Saffron, in Human Serum Samples.

    PubMed

    Mohammadpour, Amir Hooshang; Ramezani, Mohammad; Tavakoli Anaraki, Nasim; Malaekeh-Nikouei, Bizhan; Amel Farzad, Sara; Hosseinzadeh, Hossein

    2013-01-01

    The present study reports the development and validation of a sensitive and rapid extraction method alongside a high-performance liquid chromatographic method for the determination of crocetin in human serum. The HPLC method was carried out by using a C18 reversed-phase column and a mobile phase composed of methanol/water/acetic acid (85:14.5:0.5 v/v/v) at a flow rate of 0.8 ml/min. The UV detector was set at 423 nm and 13-cis retinoic acid was used as the internal standard. Serum samples were pretreated with solid-phase extraction using Bond Elut C18 (200 mg) cartridges or with direct precipitation using acetonitrile. The calibration curves were linear over the range of 0.05-1.25 µg/ml for the direct precipitation method and 0.5-5 µg/ml for solid-phase extraction. The mean recoveries of crocetin over a concentration range of 0.05-5 µg/ml serum for the direct precipitation method and 0.5-5 µg/ml for solid-phase extraction were above 70% and 60%, respectively. The intraday coefficients of variation were 0.37-2.6% for the direct precipitation method and 0.64-5.43% for solid-phase extraction. The interday coefficients of variation were 1.69-6.03% for the direct precipitation method and 5.13-12.74% for solid-phase extraction. The lower limit of quantification for crocetin was 0.05 µg/ml for the direct precipitation method and 0.5 µg/ml for solid-phase extraction. The validated direct precipitation method for HPLC satisfied all of the criteria necessary for a bioanalytical method and could reliably quantitate crocetin in human serum for future clinical pharmacokinetic studies.

  18. Spectrophotometric Methods for the Determination of Sitagliptin and Vildagliptin in Bulk and Dosage Forms

    PubMed Central

    El-Bagary, Ramzia I.; Elkady, Ehab F.; Ayoub, Bassam M.

    2011-01-01

    Simple, accurate and precise spectrophotometric methods have been developed for the determination of sitagliptin and vildagliptin in bulk and dosage forms. The proposed methods are based on the charge transfer complexes of sitagliptin phosphate and vildagliptin with 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ), 7,7,8,8-tetracyanoquinodimethane (TCNQ) and tetrachloro-1,4-benzoquinone (p-chloranil). All the variables were studied to optimize the reactions conditions. For sitagliptin, Beer’s law was obeyed in the concentration ranges of 50-300 μg/ml, 20-120 μg/ml and 100-900 μg/ml with DDQ, TCNQ and p-chloranil, respectively. For vildagliptin, Beer’s law was obeyed in the concentration ranges of 50-300 μg/ml, 10-85 μg/ml and 50-350 μg/ml with DDQ, TCNQ and p-chloranil, respectively. The developed methods were validated and proved to be specific and accurate for the quality control of the cited drugs in pharmaceutical dosage forms. PMID:23675221

  19. Bias correction of risk estimates in vaccine safety studies with rare adverse events using a self-controlled case series design.

    PubMed

    Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley

    2013-12-15

    The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate 2 bias correction approaches, the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation, with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.

  20. Simulation-Based Evaluation of Hybridization Network Reconstruction Methods in the Presence of Incomplete Lineage Sorting

    PubMed Central

    Kamneva, Olga K; Rosenberg, Noah A

    2017-01-01

    Hybridization events generate reticulate species relationships, giving rise to species networks rather than species trees. We report a comparative study of consensus, maximum parsimony, and maximum likelihood methods of species network reconstruction using gene trees simulated assuming a known species history. We evaluate the role of the divergence time between species involved in a hybridization event, the relative contributions of the hybridizing species, and the error in gene tree estimation. When gene tree discordance is mostly due to hybridization and not due to incomplete lineage sorting (ILS), most of the methods can detect even highly skewed hybridization events between highly divergent species. For recent divergences between hybridizing species, when the influence of ILS is sufficiently high, likelihood methods outperform parsimony and consensus methods, which erroneously identify extra hybridizations. The more sophisticated likelihood methods, however, are affected by gene tree errors to a greater extent than are consensus and parsimony. PMID:28469378

  1. Approximate likelihood calculation on a phylogeny for Bayesian estimation of divergence times.

    PubMed

    dos Reis, Mario; Yang, Ziheng

    2011-07-01

    The molecular clock provides a powerful way to estimate species divergence times. If information on some species divergence times is available from the fossil or geological record, it can be used to calibrate a phylogeny and estimate divergence times for all nodes in the tree. The Bayesian method provides a natural framework to incorporate different sources of information concerning divergence times, such as information in the fossil and molecular data. Current models of sequence evolution are intractable in a Bayesian setting, and Markov chain Monte Carlo (MCMC) is used to generate the posterior distribution of divergence times and evolutionary rates. This method is computationally expensive, as it involves the repeated calculation of the likelihood function. Here, we explore the use of Taylor expansion to approximate the likelihood during MCMC iteration. The approximation is much faster than conventional likelihood calculation. However, the approximation is expected to be poor when the proposed parameters are far from the likelihood peak. We explore the use of parameter transforms (square root, logarithm, and arcsine) to improve the approximation to the likelihood curve. We found that the new methods, particularly the arcsine-based transform, provided very good approximations under relaxed clock models and also under the global clock model when the global clock is not seriously violated. The approximation is poorer for analysis under the global clock when the global clock is seriously wrong and should thus not be used. The results suggest that the approximate method may be useful for Bayesian dating analysis using large data sets.
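
    The gain from transforming before expanding can be seen in a toy binomial example: the log-likelihood for a proportion is closer to quadratic on the arcsine scale t = arcsin(sqrt(p)), where the Fisher information is the constant 4n, so a second-order Taylor expansion tracks the exact curve further from the peak. A sketch (all numbers invented):

        import numpy as np

        k, n = 30, 100                         # toy data: 30 successes in 100 trials
        phat = k / n

        def loglik(p):
            return k * np.log(p) + (n - k) * np.log(1 - p)

        # curvature of the log-likelihood at the MLE on the raw scale
        d2_raw = -k / phat**2 - (n - k) / (1 - phat) ** 2
        that = np.arcsin(np.sqrt(phat))        # MLE on the arcsine scale

        p = 0.45                               # evaluate well away from the MLE (0.30)
        approx_raw = loglik(phat) + 0.5 * d2_raw * (p - phat) ** 2
        approx_arc = loglik(phat) - 0.5 * 4 * n * (np.arcsin(np.sqrt(p)) - that) ** 2
        print(f"exact={loglik(p):.2f}  raw Taylor={approx_raw:.2f}  arcsine Taylor={approx_arc:.2f}")

    On this toy example the arcsine-based expansion lands closer to the exact value than the raw-scale expansion, which is the behavior the paper exploits.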

  2. Spectrophotometric and spectrofluorimetric determination of indacaterol maleate in pure form and pharmaceutical preparations: application to content uniformity.

    PubMed

    El-Ashry, S M; El-Wasseef, D R; El-Sherbiny, D T; Salem, Y A

    2015-09-01

    Two simple, rapid, sensitive and precise spectrophotometric and spectrofluorimetric methods were developed for the determination of indacaterol maleate in bulk powder and capsules. Both methods were based on the direct measurement of the drug in methanol. In the spectrophotometric method (Method I) the absorbance was measured at 259 nm. The absorbance-concentration plot was rectilinear over the range 1.0-10.0 µg mL(-1) with a lower detection limit (LOD) of 0.078 µg mL(-1) and a lower quantification limit (LOQ) of 0.238 µg mL(-1). In the spectrofluorimetric method (Method II) the native fluorescence was measured at 358 nm after excitation at 258 nm. The fluorescence-concentration plot was rectilinear over the range of 1.0-40.0 ng mL(-1) with an LOD of 0.075 ng mL(-1) and an LOQ of 0.226 ng mL(-1). The proposed methods were successfully applied to the determination of indacaterol maleate in capsules with average percent recoveries ± RSD% of 99.94 ± 0.96 for Method I and 99.97 ± 0.81 for Method II. In addition, the proposed methods were extended to a content uniformity test according to the United States Pharmacopoeia (USP) guidelines and were accurate and precise for the capsules studied, with acceptance values of 3.98 for Method I and 2.616 for Method II. Copyright © 2015 John Wiley & Sons, Ltd.

  3. Computation of nonparametric convex hazard estimators via profile methods.

    PubMed

    Jankowski, Hanna K; Wellner, Jon A

    2009-05-01

    This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females.
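
    Once the profile likelihood is known to be quasi-concave in the antimode, any bracketing search will find its maximizer; the sketch below uses a ternary search, a close cousin of the paper's bisection, on a synthetic quasi-concave profile standing in for the partially maximized likelihood:

        # Ternary search for the maximizer of a quasi-concave function.
        def profile(a):
            return -abs(a - 2.7) ** 1.5        # synthetic profile, peak at 2.7

        lo, hi = 0.0, 10.0
        while hi - lo > 1e-6:
            m1 = lo + (hi - lo) / 3
            m2 = hi - (hi - lo) / 3
            if profile(m1) < profile(m2):      # discard the third that cannot hold the peak
                lo = m1
            else:
                hi = m2

        print(f"antimode maximizing the profile likelihood: {(lo + hi) / 2:.4f}")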

  4. A general probabilistic model for group independent component analysis and its estimation methods

    PubMed Central

    Guo, Ying

    2012-01-01

    Independent component analysis (ICA) has become an important tool for analyzing data from functional magnetic resonance imaging (fMRI) studies. ICA has been successfully applied to single-subject fMRI data. The extension of ICA to group inferences in neuroimaging studies, however, is challenging due to the unavailability of a pre-specified group design matrix and the uncertainty in between-subjects variability in fMRI data. We present a general probabilistic ICA (PICA) model that can accommodate varying group structures of multi-subject spatio-temporal processes. An advantage of the proposed model is that it can flexibly model various types of group structures in different underlying neural source signals and under different experimental conditions in fMRI studies. A maximum likelihood method is used for estimating this general group ICA model. We propose two EM algorithms to obtain the ML estimates. The first is an exact EM algorithm, which provides an exact E-step and an explicit noniterative M-step. The second is a variational approximation EM algorithm, which is computationally more efficient than the exact EM. In simulation studies, we first compare the performance of the proposed general group PICA model and the existing probabilistic group ICA approach. We then compare the two proposed EM algorithms and show that the variational approximation EM achieves comparable accuracy to the exact EM with significantly less computation time. An fMRI data example is used to illustrate application of the proposed methods. PMID:21517789

  5. Semiautomatic three-dimensional CT ventricular volumetry in patients with congenital heart disease: agreement between two methods with different user interaction.

    PubMed

    Goo, Hyun Woo; Park, Sang-Hyub

    2015-12-01

    To assess agreement between two semi-automatic, three-dimensional (3D) computed tomography (CT) ventricular volumetry methods with different user interactions in patients with congenital heart disease. In 30 patients with congenital heart disease (median age 8 years, range 5 days-33 years; 20 men), dual-source, multi-section, electrocardiography-synchronized cardiac CT was obtained at the end-systolic (n = 22) and/or end-diastolic (n = 28) phase. Nineteen left ventricle end-systolic (LV ESV), 28 left ventricle end-diastolic (LV EDV), 22 right ventricle end-systolic (RV ESV), and 28 right ventricle end-diastolic volumes (RV EDV) were successfully calculated using two semi-automatic, 3D segmentation methods with different user interactions (high in method 1, low in method 2). The calculated ventricular volumes of the two methods were compared and correlated. A P value <0.05 was considered statistically significant. LV ESV (35.95 ± 23.49 ml), LV EDV (88.76 ± 61.83 ml), and RV ESV (46.87 ± 47.39 ml) measured by method 2 were slightly but significantly smaller than those measured by method 1 (41.25 ± 26.94 ml, 92.20 ± 62.69 ml, 53.61 ± 50.08 ml for LV ESV, LV EDV, and RV ESV, respectively; P ≤ 0.02). In contrast, no statistically significant difference in RV EDV (122.57 ± 88.57 ml in method 1, 123.83 ± 89.89 ml in method 2; P = 0.36) was found between the two methods. All ventricular volumes showed very high correlation (R = 0.978, 0.993, 0.985, 0.997 for LV ESV, LV EDV, RV ESV, and RV EDV, respectively; P < 0.001) between the two methods. In patients with congenital heart disease, 3D CT ventricular volumetry shows good agreement and high correlation between the two methods, but method 2 tends to slightly underestimate LV ESV, LV EDV, and RV ESV.

  6. Quasi-Maximum Likelihood Estimation of Structural Equation Models with Multiple Interaction and Quadratic Effects

    ERIC Educational Resources Information Center

    Klein, Andreas G.; Muthen, Bengt O.

    2007-01-01

    In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…

  7. Fisher's method of scoring in statistical image reconstruction: comparison of Jacobi and Gauss-Seidel iterative schemes.

    PubMed

    Hudson, H M; Ma, J; Green, P

    1994-01-01

    Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
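
    Fisher scoring replaces the observed Hessian of Newton's method with the expected (Fisher) information, giving the update theta <- theta + I(theta)^{-1} U(theta) for score vector U. A generic sketch on a Poisson log-linear model, where the two coincide (toy data; a tomography application would use the projection model instead):

        import numpy as np

        rng = np.random.default_rng(5)
        X = np.column_stack([np.ones(200), rng.normal(size=200)])
        theta_true = np.array([0.5, 0.8])
        y = rng.poisson(np.exp(X @ theta_true))

        theta = np.zeros(2)
        for _ in range(25):
            mu = np.exp(X @ theta)
            U = X.T @ (y - mu)                 # score vector
            I = X.T @ (X * mu[:, None])        # Fisher information
            theta += np.linalg.solve(I, U)     # scoring update

        print("estimate:", np.round(theta, 3), "truth:", theta_true)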

  8. [Determination of aluminium in flour foods with photometric method].

    PubMed

    Ma, Lan; Zhao, Xin; Zhou, Shuang; Yang, Dajin

    2012-05-01

    To establish a photometric method for the determination of aluminium in flour foods. After samples were treated with microwave digestion and wet digestion, aluminium in staple flour foods was determined photometrically. The calibration was linear over the range of 0.25-5.0 microg/ml aluminium (r = 0.9998), with a limit of detection (LOD) of 2.3 ng/ml and a limit of quantitation (LOQ) of 7 ng/ml. This method of determining aluminium in flour foods is simple, rapid and reliable.

  9. Microelectrode Recordings Validate the Clinical Visualization of Subthalamic-Nucleus Based on 7T Magnetic Resonance Imaging and Machine Learning for Deep Brain Stimulation Surgery.

    PubMed

    Shamir, Reuben R; Duchin, Yuval; Kim, Jinyoung; Patriat, Remi; Marmor, Odeya; Bergman, Hagai; Vitek, Jerrold L; Sapiro, Guillermo; Bick, Atira; Eliahou, Ruth; Eitan, Renana; Israel, Zvi; Harel, Noam

    2018-05-24

    Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is a proven and effective therapy for the management of the motor symptoms of Parkinson's disease (PD). While accurate positioning of the stimulating electrode is critical for the success of this therapy, precise identification of the STN based on imaging can be challenging. We developed a method to accurately visualize the STN on a standard clinical magnetic resonance imaging (MRI) scan. The method incorporates a database of 7-Tesla (T) MRIs of PD patients together with machine-learning methods (hereafter 7 T-ML). To validate the clinical application accuracy of the 7 T-ML method, we compared it with identification of the STN based on intraoperative microelectrode recordings. Sixteen PD patients who underwent microelectrode-recording-guided STN DBS were included in this study (30 implanted leads and electrode trajectories). The length of the STN along the electrode trajectory and the position of its contacts (dorsal to, inside, or ventral to the STN) were compared between microelectrode recordings and the 7 T-ML method computed from the patient's clinical 3T MRI. All 30 electrode trajectories that intersected the STN based on microelectrode recordings also intersected it when visualized with the 7 T-ML method. STN trajectory average length was 6.2 ± 0.7 mm based on microelectrode recordings and 5.8 ± 0.9 mm for the 7 T-ML method. We observed 93% agreement regarding contact location between the microelectrode recordings and the 7 T-ML method. The 7 T-ML method is highly consistent with microelectrode-recording data. This method provides a reliable and accurate patient-specific prediction for targeting the STN.

  10. Soft-Decision Decoding of Binary Linear Block Codes Based on an Iterative Search Algorithm

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Moorthy, H. T.

    1997-01-01

    This correspondence presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords one at a time using a set of test error patterns that are constructed based on the reliability information of the received symbols. When a candidate codeword is generated, it is tested based on an optimality condition. If it satisfies the optimality condition, then it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and search continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from the optimal maximum-likelihood decoding, with a significant reduction in decoding complexity compared with Viterbi decoding based on the full trellis diagram of the codes.

  11. Approximate likelihood approaches for detecting the influence of primordial gravitational waves in cosmic microwave background polarization

    NASA Astrophysics Data System (ADS)

    Pan, Zhen; Anderes, Ethan; Knox, Lloyd

    2018-05-01

    One of the major targets for next-generation cosmic microwave background (CMB) experiments is the detection of the primordial B-mode signal. Planning is under way for Stage-IV experiments that are projected to have instrumental noise small enough to make lensing and foregrounds the dominant source of uncertainty for estimating the tensor-to-scalar ratio r from polarization maps. This makes delensing a crucial part of future CMB polarization science. In this paper we present a likelihood method for estimating the tensor-to-scalar ratio r from CMB polarization observations, which combines the benefits of a full-scale likelihood approach with the tractability of the quadratic delensing technique. This method is a pixel-space, all-order likelihood analysis of the quadratically delensed B modes, and it essentially builds upon the quadratic delenser by taking into account all-order lensing and pixel-space anomalies. Its tractability relies on a crucial factorization of the pixel-space covariance matrix of the polarization observations, which allows one to compute the full Gaussian approximate likelihood profile, as a function of r, at the same computational cost as a single likelihood evaluation.

  12. Comparison of five methods for extraction of Legionella pneumophila from respiratory specimens.

    PubMed

    Wilson, Deborah; Yen-Lieberman, Belinda; Reischl, Udo; Warshawsky, Ilka; Procop, Gary W

    2004-12-01

    The efficiencies of five commercially available nucleic acid extraction methods were evaluated for the recovery of a standardized inoculum of Legionella pneumophila in respiratory specimens (sputum and bronchoalveolar lavage [BAL] specimens). The concentrations of Legionella DNA recovered from sputa with the automated MagNA Pure (526,200 CFU/ml) and NucliSens (171,800 CFU/ml) extractors were greater than those recovered with the manual methods (i.e., Roche High Pure kit [133,900 CFU/ml], QIAamp DNA Mini kit [46,380 CFU/ml], and ViralXpress kit [13,635 CFU/ml]). The rank order was the same for extracts from BAL specimens, except that for this specimen type the QIAamp DNA Mini kit recovered more than the Roche High Pure kit.

  13. Management of computed tomography-detected pneumothorax in patients with blunt trauma: experience from a community-based hospital

    PubMed Central

    Hefny, Ashraf F; Kunhivalappil, Fathima T; Matev, Nikolay; Avila, Norman A; Bashir, Masoud O; Abu-Zidan, Fikri M

    2018-01-01

    INTRODUCTION Diagnoses of pneumothorax, especially occult pneumothorax, have increased as the use of computed tomography (CT) for imaging trauma patients becomes near-routine. However, the need for chest tube insertion remains controversial. We aimed to study the management of pneumothorax detected on CT among patients with blunt trauma, including the decision for tube thoracostomy, in a community-based hospital. METHODS Chest CT scans of patients with blunt trauma treated at Al Rahba Hospital, Abu Dhabi, United Arab Emirates, from October 2010 to October 2014 were retrospectively studied. Variables studied included demography, mechanism of injury, endotracheal intubation, pneumothorax volume, chest tube insertion, Injury Severity Score, hospital length of stay and mortality. RESULTS CT was performed in 703 patients with blunt trauma. Overall, pneumothorax was detected on CT for 74 (10.5%) patients. Among the 65 patients for whom pneumothorax was detected before chest tube insertion, 25 (38.5%) needed chest tube insertion, while 40 (61.5%) did not. Backward stepwise likelihood regression showed that independent factors that significantly predicted chest tube insertion were endotracheal intubation (p = 0.01), non-United Arab Emirates nationality (p = 0.01) and pneumothorax volume (p = 0.03). The receiver operating characteristic curve showed that the best pneumothorax volume that predicted chest tube insertion was 30 mL. CONCLUSION Chest tube was inserted in less than half of the patients with blunt trauma for whom pneumothorax was detected on CT. Pneumothorax volume should be considered in decision-making regarding chest tube insertion. Conservative treatment may be sufficient for pneumothorax of volume < 30 mL. PMID:28741012

  14. Adjusted adaptive Lasso for covariate model-building in nonlinear mixed-effect pharmacokinetic models.

    PubMed

    Haem, Elham; Harling, Kajsa; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Karlsson, Mats O

    2017-02-01

    One important aim in population pharmacokinetics (PK) and pharmacodynamics is the identification and quantification of the relationships between the parameters and covariates. Lasso has been suggested as a technique for simultaneous estimation and covariate selection. In linear regression, it has been shown that Lasso does not possess the oracle property, that is, it does not asymptotically perform as though the true underlying model had been given in advance. Adaptive Lasso (ALasso) with appropriate initial weights is claimed to possess the oracle property; however, it can lead to poor predictive performance when there is multicollinearity between covariates. This simulation study implemented a new version of ALasso, called adjusted ALasso (AALasso), which takes the ratio of the standard error of the maximum likelihood (ML) estimator to the ML coefficient as the initial weight in ALasso, to deal with multicollinearity in non-linear mixed-effect models. The performance of AALasso was compared with that of ALasso and Lasso. PK data were simulated in four set-ups from a one-compartment bolus input model. Covariates were created by sampling from a multivariate standard normal distribution with no, low (0.2), moderate (0.5) or high (0.7) correlation. The true covariates influenced only clearance, at different magnitudes. AALasso, ALasso and Lasso were compared in terms of mean absolute prediction error and error of the estimated covariate coefficient. The results show that AALasso performed better in small data sets, even in those in which a high correlation existed between covariates. This makes AALasso a promising method for covariate selection in nonlinear mixed-effect models.
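
    The adaptive Lasso reduces to a plain Lasso after folding the weights into the design: scale each column by 1/w_j, fit, and unscale the coefficients. The sketch below uses the classical ALasso weights w_j = 1/|beta_OLS,j| in a linear model; the AALasso described above would instead use the ratio SE(beta)/|beta| from an ML fit of the mixed-effect model (the data and penalty level here are invented):

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(6)
        n, p = 200, 8
        X = rng.normal(size=(n, p))
        beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0, 1.0, 0.0, 0.0])
        y = X @ beta_true + rng.normal(size=n)

        beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
        w = 1.0 / np.abs(beta_ols)             # adaptive weights from the initial fit
        fit = Lasso(alpha=0.05).fit(X / w, y)  # weighted L1 via a rescaled design
        beta_alasso = fit.coef_ / w            # map coefficients back to the original scale

        print(np.round(beta_alasso, 2))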

  15. Phylodynamic Analysis Revealed That Epidemic of CRF07_BC Strain in Men Who Have Sex with Men Drove Its Second Spreading Wave in China.

    PubMed

    Zhang, Min; Jia, Dijing; Li, Hanping; Gui, Tao; Jia, Lei; Wang, Xiaolin; Li, Tianyi; Liu, Yongjian; Bao, Zuoyi; Liu, Siyang; Zhuang, Daomin; Li, Jingyun; Li, Lin

    2017-10-01

    CRF07_BC was originally formed in Yunnan province of China in the 1980s and spread quickly among injecting drug users (IDUs). In recent years, it has been introduced into men who have sex with men (MSM) and has become the most dominant strain in China. In this study, we performed a comprehensive phylodynamic analysis of CRF07_BC sequences from China. All CRF07_BC sequences identified in China were retrieved from a database, and additional sequences obtained in our laboratory were added to make the dataset more representative. A maximum-likelihood (ML) tree was constructed with PhyML3.0. A maximum clade credibility (MCC) tree and the effective population size were estimated using the Markov chain Monte Carlo sampling method with Beast software. A total of 610 CRF07_BC sequences covering 1,473 bp of the gag gene (positions 817 to 2,289 according to the HXB2 numbering) were included in the dataset. Three epidemic clusters were identified; two clusters comprised sequences from IDUs, while one cluster mainly contained sequences from MSM. The time of the most recent common ancestor of the cluster composed of sequences from MSM was estimated to be in 2000. Two rapid spreading waves of the effective population size of CRF07_BC infections were identified in the skyline plot. The second wave coincided with the expansion of the MSM cluster. The results indicated that the control of CRF07_BC infections in MSM would help to decrease its epidemic in China.

  16. Formal Verification of Complex Systems based on SysML Functional Requirements

    DTIC Science & Technology

    2014-12-23

    The proposed approach combines a SysML modeling approach to document and structure safety requirements for the design of complex engineered systems with formal verification methods and tools that support the integration of safety into the design solution.

  17. Mitochondrial DNA variation and phylogenetic relationships among five tuna species based on sequencing of D-loop region.

    PubMed

    Kumar, Girish; Kocour, Martin; Kunal, Swaraj Priyaranjan

    2016-05-01

    In order to assess the DNA sequence variation and phylogenetic relationships among five tuna species (Auxis thazard, Euthynnus affinis, Katsuwonus pelamis, Thunnus tonggol, and T. albacares), representing all four tuna genera, partial sequences of the mitochondrial DNA (mtDNA) D-loop region were analyzed. The estimate of intra-specific sequence variation in the studied species was low, ranging from 0.027 to 0.080 [Kimura's two-parameter distance (K2P)], whereas values of inter-specific variation ranged from 0.049 to 0.491. The longtail tuna (T. tonggol) and yellowfin tuna (T. albacares) were found to share a close relationship (K2P = 0.049), while skipjack tuna (K. pelamis) was the most divergent of the studied species. Phylogenetic analysis using Maximum-Likelihood (ML) and Neighbor-Joining (NJ) methods supported the monophyletic origin of Thunnus species. Similarly, the phylogeny of Auxis and Euthynnus species substantiated their monophyly. However, the results showed a distinct origin of K. pelamis from the genus Thunnus as well as from Auxis and Euthynnus. Thus, the mtDNA D-loop region sequence data support the polyphyletic origin of tuna species.
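
    For reference, the K2P distance used above is d = -(1/2) ln(1 - 2P - Q) - (1/4) ln(1 - 2Q), where P and Q are the proportions of transition and transversion differences between two aligned sequences. A small sketch (the two sequences are made up):

        import math

        PURINES = {"A", "G"}

        def k2p(seq1, seq2):
            pairs = [(a, b) for a, b in zip(seq1, seq2) if a != "-" and b != "-"]
            n = len(pairs)
            ts = sum(a != b and (a in PURINES) == (b in PURINES) for a, b in pairs)  # transitions
            tv = sum(a != b and (a in PURINES) != (b in PURINES) for a, b in pairs)  # transversions
            P, Q = ts / n, tv / n
            return -0.5 * math.log(1 - 2 * P - Q) - 0.25 * math.log(1 - 2 * Q)

        print(round(k2p("ACGTACGTAC", "ACGTATGTGC"), 4))   # two transitions -> 0.2554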

  18. Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.

    ERIC Educational Resources Information Center

    Butler, Ronald W.

    The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…
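
    The likelihood at the heart of such methods comes out of the filter itself: each step's innovation is Gaussian with a variance the filter already computes, so the log-likelihood of the noise variances accumulates during the forward pass and can be maximized. A sketch for the scalar local-level model x_t = x_{t-1} + w_t, y_t = x_t + v_t (all values synthetic):

        import numpy as np

        def kalman_loglik(y, q, r):
            x, P, ll = 0.0, 1e6, 0.0              # diffuse initial state
            for obs in y:
                P = P + q                          # predict
                S = P + r                          # innovation variance
                e = obs - x                        # prediction error
                ll += -0.5 * (np.log(2 * np.pi * S) + e * e / S)
                K = P / S                          # Kalman gain
                x, P = x + K * e, (1 - K) * P      # update
            return ll

        rng = np.random.default_rng(7)
        truth = np.cumsum(rng.normal(0, 0.5, 200))     # random-walk state
        y = truth + rng.normal(0, 1.0, 200)            # noisy observations
        print(kalman_loglik(y, q=0.25, r=1.0))         # maximize over (q, r) to estimate the variances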

  19. Impact of Violation of the Missing-at-Random Assumption on Full-Information Maximum Likelihood Method in Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.; Guo, Fanmin

    2014-01-01

    The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…

  20. Updated logistic regression equations for the calculation of post-fire debris-flow likelihood in the western United States

    USGS Publications Warehouse

    Staley, Dennis M.; Negri, Jacquelyn A.; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.

    2016-06-30

    Wildfire can significantly alter the hydrologic response of a watershed to the extent that even modest rainstorms can generate dangerous flash floods and debris flows. To reduce public exposure to hazard, the U.S. Geological Survey produces post-fire debris-flow hazard assessments for select fires in the western United States. We use publicly available geospatial data describing basin morphology, burn severity, soil properties, and rainfall characteristics to estimate the statistical likelihood that debris flows will occur in response to a storm of a given rainfall intensity. Using an empirical database and refined geospatial analysis methods, we defined new equations for the prediction of debris-flow likelihood using logistic regression methods. We showed that the new logistic regression model outperformed previous models used to predict debris-flow likelihood.
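
    The modeling step can be illustrated with a hedged sketch (synthetic data and hypothetical predictor names, not the USGS dataset or coefficients): fit a logistic regression to binary debris-flow occurrence and read off predicted likelihoods for new storm/basin combinations:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 500
        # columns: burn severity, soil property, rainfall intensity (synthetic)
        X = rng.normal(size=(n, 3))
        logit = -1.0 + 0.8 * X[:, 0] + 1.5 * X[:, 2]
        y = rng.random(n) < 1 / (1 + np.exp(-logit))   # simulated occurrences

        model = LogisticRegression().fit(X, y)
        print(model.coef_, model.intercept_)
        # predicted debris-flow likelihood for the first basin/storm
        print(model.predict_proba(X[:1])[:, 1])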

  1. Integration within the Felsenstein equation for improved Markov chain Monte Carlo methods in population genetics

    PubMed Central

    Hey, Jody; Nielsen, Rasmus

    2007-01-01

    In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231
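
    A heavily simplified sketch of the sampling machinery (toy target, not the authors' genealogy sampler): after the analytic integrations, what remains is a low-dimensional posterior that a random-walk Metropolis chain can explore:

        import numpy as np

        def metropolis(logpost, x0, steps=10000, scale=0.5, seed=1):
            rng = np.random.default_rng(seed)
            x, lp = x0, logpost(x0)
            samples = []
            for _ in range(steps):
                prop = x + scale * rng.normal()
                lp_prop = logpost(prop)
                if np.log(rng.random()) < lp_prop - lp:   # accept/reject
                    x, lp = prop, lp_prop
                samples.append(x)
            return np.array(samples)

        # stand-in for the genealogy-integrated log-posterior of one parameter
        samples = metropolis(lambda t: -0.5 * t ** 2, x0=0.0)
        print(samples.mean(), samples.std())

    Treating the resulting density approximation as a likelihood, as the paper describes, then permits likelihood-ratio tests of nested demographic models.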

  2. Challenges in Species Tree Estimation Under the Multispecies Coalescent Model

    PubMed Central

    Xu, Bo; Yang, Ziheng

    2016-01-01

    The multispecies coalescent (MSC) model has emerged as a powerful framework for inferring species phylogenies while accounting for ancestral polymorphism and gene tree-species tree conflict. A number of methods have been developed in the past few years to estimate the species tree under the MSC. The full likelihood methods (including maximum likelihood and Bayesian inference) average over the unknown gene trees and accommodate their uncertainties properly but involve intensive computation. The approximate or summary coalescent methods are computationally fast and are applicable to genomic datasets with thousands of loci, but do not make an efficient use of information in the multilocus data. Most of them take the two-step approach of reconstructing the gene trees for multiple loci by phylogenetic methods and then treating the estimated gene trees as observed data, without accounting for their uncertainties appropriately. In this article we review the statistical nature of the species tree estimation problem under the MSC, and explore the conceptual issues and challenges of species tree estimation by focusing mainly on simple cases of three or four closely related species. We use mathematical analysis and computer simulation to demonstrate that large differences in statistical performance may exist between the two classes of methods. We illustrate that several counterintuitive behaviors may occur with the summary methods but they are due to inefficient use of information in the data by summary methods and vanish when the data are analyzed using full-likelihood methods. These include (i) unidentifiability of parameters in the model, (ii) inconsistency in the so-called anomaly zone, (iii) singularity on the likelihood surface, and (iv) deterioration of performance upon addition of more data. We discuss the challenges and strategies of species tree inference for distantly related species when the molecular clock is violated, and highlight the need for improving the computational efficiency and model realism of the likelihood methods as well as the statistical efficiency of the summary methods. PMID:27927902

  3. Parameter estimation of history-dependent leaky integrate-and-fire neurons using maximum-likelihood methods

    PubMed Central

    Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst

    2012-01-01

    When a neuronal spike train is observed, what can we say about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then to choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate and fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that its unique global minimum can thus be found by gradient descent techniques. The global minimum property requires independence of spike time intervals. Lack of history dependence is, however, an important constraint that is not fulfilled in many biological neurons which are known to generate a rich repertoire of spiking behaviors that are incompatible with history independence. Therefore, we expanded the integrate and fire model by including one additional variable, a variable threshold (Mihalas & Niebur, 2009) allowing for history-dependent firing patterns. This neuronal model produces a large number of spiking behaviors while still being linear. Linearity is important as it maintains the distribution of the random variables and still allows for maximum likelihood methods to be used. In this study we show that, although convexity of the negative log-likelihood is not guaranteed for this model, the minimum of the negative log-likelihood function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) frequently reaches the global minimum. PMID:21851282
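
    The general recipe can be illustrated with a deliberately simpler stand-in model (a gamma interspike-interval distribution rather than the Mihalas-Niebur threshold dynamics): write the negative log-likelihood of the observed intervals as a function of the parameters and hand it to a derivative-free minimizer:

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import gamma

        rng = np.random.default_rng(2)
        isis = gamma.rvs(a=3.0, scale=0.02, size=400, random_state=rng)

        def nll(params):
            a, scale = np.exp(params)   # log-parametrize to enforce positivity
            return -gamma.logpdf(isis, a=a, scale=scale).sum()

        res = minimize(nll, x0=np.log([1.0, 0.1]), method="Nelder-Mead")
        print(np.exp(res.x))            # recovered shape and scale

    In the paper's setting the negative log-likelihood need not be convex, which is why treating the noise level as a free parameter and using a robust minimizer (their r-algorithm) matters.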

  4. A likelihood ratio test for evolutionary rate shifts and functional divergence among proteins

    PubMed Central

    Knudsen, Bjarne; Miyamoto, Michael M.

    2001-01-01

    Changes in protein function can lead to changes in the selection acting on specific residues. This can often be detected as evolutionary rate changes at the sites in question. A maximum-likelihood method for detecting evolutionary rate shifts at specific protein positions is presented. The method determines significance values of the rate differences to give a sound statistical foundation for the conclusions drawn from the analyses. A statistical test for detecting slowly evolving sites is also described. The methods are applied to a set of Myc proteins for the identification of both conserved sites and those with changing evolutionary rates. Those positions with conserved and changing rates are related to the structures and functions of their proteins. The results are compared with an earlier Bayesian method, thereby highlighting the advantages of the new likelihood ratio tests. PMID:11734650
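
    The decision rule behind such a test is standard and easy to state in code (a generic sketch with made-up log-likelihood values): twice the log-likelihood difference between nested models is compared with a chi-squared distribution whose degrees of freedom equal the number of extra parameters:

        from scipy.stats import chi2

        # hypothetical fits: equal rates (null) vs. a rate shift (alternative)
        ll_null, ll_alt, extra_params = -1234.6, -1229.1, 1
        lr_stat = 2 * (ll_alt - ll_null)
        p_value = chi2.sf(lr_stat, df=extra_params)
        print(lr_stat, p_value)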

  5. Development and validation of simple spectrophotometric and chemometric methods for simultaneous determination of empagliflozin and metformin: Applied to recently approved pharmaceutical formulation

    NASA Astrophysics Data System (ADS)

    Ayoub, Bassam M.

    2016-11-01

    A new univariate spectrophotometric method and a multivariate chemometric approach were developed and compared for the simultaneous determination of empagliflozin and metformin, manipulating their zero-order absorption spectra, with application to their pharmaceutical preparation. A sample-enrichment technique was used to increase the concentration of empagliflozin after extraction from tablets, allowing its simultaneous determination with metformin without prior separation. Validation parameters according to ICH guidelines were satisfactory over the concentration range of 2-12 μg mL(-1) for both drugs using the simultaneous equation method, with LOD values of 0.20 μg mL(-1) and 0.19 μg mL(-1) and LOQ values of 0.59 μg mL(-1) and 0.58 μg mL(-1) for empagliflozin and metformin, respectively. The optimum results for the chemometric approach using the partial least squares method (PLS-2) were obtained over the concentration range of 2-10 μg mL(-1). The optimized, validated methods are suitable for quality control laboratories, enabling fast and economic determination of the recently approved pharmaceutical combination Synjardy® tablets.

  6. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean method suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used in which multiple MCMC runs are conducted with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case with four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
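
    The thermodynamic (path-sampling) identity is log m(y) = integral from 0 to 1 of E_beta[log L(theta)] d beta, where the expectation is under the power posterior proportional to p(theta) L(theta)^beta. A self-contained toy sketch (normal prior, normal likelihood, so the power posterior can be sampled exactly instead of by MCMC; not the groundwater models of the abstract):

        import numpy as np
        from scipy.integrate import trapezoid

        rng = np.random.default_rng(3)
        y = rng.normal(1.0, 1.0, size=50)
        n, s = len(y), y.sum()

        def mean_loglik(beta, draws=20000):
            # power posterior of theta is N(beta*s/(1+beta*n), 1/(1+beta*n))
            prec = 1.0 + beta * n
            theta = rng.normal(beta * s / prec, np.sqrt(1.0 / prec), size=draws)
            ll = (-0.5 * n * np.log(2 * np.pi)
                  - 0.5 * ((y[None, :] - theta[:, None]) ** 2).sum(axis=1))
            return ll.mean()

        betas = np.linspace(0.0, 1.0, 21)
        log_ml = trapezoid([mean_loglik(b) for b in betas], betas)

        # analytic log marginal likelihood for this conjugate toy model
        exact = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(n + 1)
                 - 0.5 * (np.square(y).sum() - s ** 2 / (n + 1)))
        print(log_ml, exact)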

  7. Derivative spectrophotometric method for simultaneous determination of zofenopril and fluvastatin in mixtures and pharmaceutical dosage forms.

    PubMed

    Stolarczyk, Mariusz; Maślanka, Anna; Apola, Anna; Rybak, Wojciech; Krzek, Jan

    2015-09-05

    A fast, accurate and precise method for the determination of zofenopril and fluvastatin was developed using spectrophotometry of the first (D1), second (D2), and third (D3) order derivatives in two-component mixtures and in pharmaceutical preparations. It was shown that the developed method allows for the determination of the tested components in a direct manner, despite the apparent interference of the absorption spectra in the UV range. For quantitative determinations, the "zero-crossing" method was chosen; the appropriate wavelengths for zofenopril were D1 λ=270.85 nm, D2 λ=286.38 nm, and D3 λ=253.90 nm. Fluvastatin was determined at D1 λ=339.03 nm, D2 λ=252.57 nm, and D3 λ=258.50 nm. The method was characterized by high sensitivity and accuracy: for zofenopril the LOD was in the range of 0.19-0.87 μg mL(-1) and for fluvastatin 0.51-1.18 μg mL(-1), depending on the derivative order, while the LOQ was 0.57-2.64 μg mL(-1) for zofenopril and 1.56-3.57 μg mL(-1) for fluvastatin. The recovery of individual components was within the range of 100±5%. For zofenopril, the linearity range was estimated between 7.65 μg mL(-1) and 22.94 μg mL(-1), and for fluvastatin between 5.60 μg mL(-1) and 28.00 μg mL(-1). Copyright © 2015 Elsevier B.V. All rights reserved.
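
    The "zero-crossing" idea is simple to demonstrate numerically (a schematic sketch with a fabricated spectrum, not the paper's data): differentiate the absorption spectrum with respect to wavelength and quantify one analyte at wavelengths where the derivative of the interfering analyte crosses zero:

        import numpy as np

        wavelengths = np.linspace(230, 360, 651)                   # nm
        spectrum = np.exp(-0.5 * ((wavelengths - 280) / 15) ** 2)  # fake band

        d1 = np.gradient(spectrum, wavelengths)   # first-order derivative (D1)
        d2 = np.gradient(d1, wavelengths)         # second-order derivative (D2)

        # wavelengths where the D1 signal changes sign (zero-crossings)
        zc = wavelengths[:-1][np.diff(np.sign(d1)) != 0]
        print(zc)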

  8. Serum soluble CD30 in early arthritis: a sign of inflammation but not a predictor of outcome.

    PubMed

    Savolainen, E; Matinlauri, I; Kautiainen, H; Luosujärvi, R; Kaipiainen-Seppänen, O

    2008-01-01

    To evaluate serum soluble CD30 (sCD30) levels in an early arthritis series and assess their ability to predict the outcome in patients with rheumatoid arthritis (RA) and undifferentiated arthritis (UA) at one-year follow-up. Serum sCD30 levels were measured by ELISA from 92 adult patients with RA and UA at baseline and from 60 adult controls. The patients were followed up for one year in the Kuopio 2000 Arthritis Survey. Receiver operating characteristic (ROC) curves were constructed to determine cut-off points of sCD30 in RA and UA that distinguish the inflammatory disease from controls. Sensitivity, specificity and positive likelihood ratio, with their 95% CIs, were calculated for sCD30 levels in RA and UA. Median serum sCD30 levels were higher in RA, 25.1 (IQ range 16.3-38.6) IU/ml (p<0.001), and in UA, 23.4 (15.4-35.6) IU/ml (p<0.001), than in controls, 15.1 (10.7-20.8) IU/ml. No differences were recorded between RA and UA (p=0.840). Serum sCD30 levels at baseline did not predict remission at one-year follow-up. Serum sCD30 levels were higher in RA and UA than in controls at baseline, but they did not predict remission at one-year follow-up in this series.
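
    For reference, the diagnostic quantities reported above are computed from a single 2 x 2 table at each ROC cut-off; a minimal sketch with invented counts (not the study's data):

        # counts at one hypothetical sCD30 cut-off
        tp, fn, fp, tn = 60, 32, 9, 51
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        lr_pos = sens / (1 - spec)          # positive likelihood ratio
        print(f"sensitivity={sens:.2f} specificity={spec:.2f} LR+={lr_pos:.2f}")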

  9. Regulation of myocardial blood flow response to mental stress in healthy individuals.

    PubMed

    Schöder, H; Silverman, D H; Campisi, R; Sayre, J W; Phelps, M E; Schelbert, H R; Czernin, J

    2000-02-01

    Mental stress testing has been proposed as a noninvasive tool to evaluate endothelium-dependent coronary vasomotion. In patients with coronary artery disease, mental stress can induce myocardial ischemia. However, even the determinants of the physiological myocardial blood flow (MBF) response to mental stress are poorly understood. Twenty-four individuals (12 males/12 females, mean age 49 +/- 13 yr, range 31-74 yr) with a low likelihood for coronary artery disease were studied. Serum catecholamines, cardiac work, and MBF (measured quantitatively with N-13 ammonia and positron emission tomography) were assessed. During mental stress (arithmetic calculation) MBF increased significantly from 0.70 +/- 0.14 to 0.92 +/- 0.21 ml x min(-1) x g(-1) (P < 0.01). Mental stress caused significant increases (P < 0.01) in serum epinephrine (26 +/- 16 vs. 42 +/- 17 pg/ml), norepinephrine (272 +/- 139 vs. 322 +/- 136 pg/ml), and cardiac work [rate-pressure product (RPP) 8,011 +/- 1,884 vs. 10,416 +/- 2,711]. Stress-induced changes in cardiac work were correlated with changes in MBF (r = 0.72; P < 0.01). Multiple-regression analysis revealed stress-induced changes in the RPP as the only significant (P = 0.0001) predictor for the magnitude of mental stress-induced increases in MBF in healthy individuals. Data from this group of healthy individuals should prove useful to investigate coronary vasomotion in individuals at risk for or with documented coronary artery disease.

  10. Origin of the Eumetazoa: testing ecological predictions of molecular clocks against the Proterozoic fossil record

    NASA Technical Reports Server (NTRS)

    Peterson, Kevin J.; Butterfield, Nicholas J.

    2005-01-01

    Molecular clocks have the potential to shed light on the timing of early metazoan divergences, but differing algorithms and calibration points yield conspicuously discordant results. We argue here that competing molecular clock hypotheses should be testable in the fossil record, on the principle that fundamentally new grades of animal organization will have ecosystem-wide impacts. Using a set of seven nuclear-encoded protein sequences, we demonstrate the paraphyly of Porifera and calculate sponge/eumetazoan and cnidarian/bilaterian divergence times by using both distance [minimum evolution (ME)] and maximum likelihood (ML) molecular clocks; ME brackets the appearance of Eumetazoa between 634 and 604 Ma, whereas ML suggests it was between 867 and 748 Ma. Significantly, the ME, but not the ML, estimate is coincident with a major regime change in the Proterozoic acritarch record, including: (i) disappearance of low-diversity, evolutionarily static, pre-Ediacaran acanthomorphs; (ii) radiation of the high-diversity, short-lived Doushantuo-Pertatataka microbiota; and (iii) an order-of-magnitude increase in evolutionary turnover rate. We interpret this turnover as a consequence of the novel ecological challenges accompanying the evolution of the eumetazoan nervous system and gut. Thus, the more readily preserved microfossil record provides positive evidence for the absence of pre-Ediacaran eumetazoans and strongly supports the veracity, and therefore more general application, of the ME molecular clock.

  11. [Evaluation of in vitro antimicrobial activity of cefazolin alone and in combination with cefmetazole or flomoxef using agar dilution method and disk diffusion method].

    PubMed

    Matsuo, K; Uete, T

    1992-10-01

    The antimicrobial activities of cefazolin (CEZ) against 251 strains of various clinical isolates obtained during 1989 and 1990 were determined using the Mueller-Hinton agar dilution method at an inoculum level of 10(6) CFU/ml. The reliability of the disk susceptibility test in estimating approximate MIC values was also studied using Mueller-Hinton agar and various disks at inoculum levels of 10(3-4) CFU/cm2. In addition, the antimicrobial activities of CEZ and cefmetazole (CMZ) or flomoxef (FMOX) in combination were investigated against methicillin-sensitive and -resistant Staphylococcus aureus (MSSA and MRSA) using the checkerboard agar dilution MIC method and the disk diffusion test, with disks containing either CEZ, CMZ, and FMOX alone, or CEZ and CMZ or FMOX in combination. In this study, the MICs of CEZ against S. aureus were distributed with three peak values, at 0.39 microgram/ml, 3.13 micrograms/ml and > 100 micrograms/ml. MICs against MSSA were 0.39 microgram/ml to 0.78 microgram/ml, whereas those against MRSA were greater than 0.78 microgram/ml. MICs against the majority of strains of Enterococcus faecalis were 25 micrograms/ml. Over 90% of strains of Escherichia coli and Klebsiella pneumoniae were inhibited at the level of 3.13 micrograms/ml. About 60% of isolates of indole-negative Proteus spp. were inhibited at levels of less than 3.13 micrograms/ml and 100% at 6.25 micrograms/ml, but MICs against indole-positive Proteus spp., Serratia spp. and Pseudomonas aeruginosa were over 100 micrograms/ml. The antimicrobial activities of CEZ against these clinical isolates were not significantly different from those reported about 15-20 years ago, except for S. aureus; highly CEZ-resistant strains of S. aureus were more prevalent in this study. The inhibitory zones obtained with the disk test were compared with MICs. The results of the CEZ disk susceptibility test with the 30 micrograms disk (Showa) or the 10 micrograms disk (prepared in this laboratory) were well correlated with MICs (r = -0.837 and -0.814, respectively), showing the reliability of the disk method in estimating approximate MIC values. In the 4-category classification system currently used in Japan, the proposed MIC break points are (+++) MIC < or = 3 micrograms/ml, (++) > 3-15 micrograms/ml, (+) > 15-60 micrograms/ml, and (-) > 60 micrograms/ml. The results obtained with 30 micrograms disks showed false positives in 7.7% and false negatives in 6.8% of the samples. The disk results with E. faecalis showed a higher ratio of false positive results. (ABSTRACT TRUNCATED AT 400 WORDS)

  12. Immunocytometric quantitation of foeto-maternal haemorrhage with the Abbott Cell-Dyn CD4000 haematology analyser.

    PubMed

    Little, B H; Robson, R; Roemer, B; Scott, C S

    2005-02-01

    This study evaluated the extended use of a haematology analyser (Abbott Cell-Dyn CD4000) for the immunofluorescent enumeration of foeto-maternal haemorrhage (FMH) with fluorescein isothiocyanate-labelled monoclonal anti-RhD. Method performance was assessed with artificial FMH standards and a series of 44 clinical samples. Within-run precision was <15% (coefficient of variation, CV) for FMH volumes of 3 ml and above, 18.8% at an FMH volume of 2 ml, and 31.7% at an FMH volume of 1 ml. Linearity analysis showed excellent agreement (observed FMH% = 0.98 x expected FMH% + 0.02) and a close relationship (R(2) = 0.99) between observed and expected FMH percentages. The lower limit of quantification of the CD4000 (SRP-Ret) method with a maximum CV of 15% was 1.6 ml, and the limit of detection was <1 ml. Parallel Kleihauer-Betke test (KBT) assessments of FMH standards showed an overall trend towards higher KBT values (observed = 1.25 x expected - 0.38). At an FMH level of 4 ml, KBT observer estimates ranged from 0.57 to 11.94 ml with a mean inter-observer CV of 63%. For the 44 clinical samples, there was decision-point agreement between KBT and SRP-Ret results for 42 samples with an FMH of <2 ml. Analysis in the low FMH range (<1 ml) showed that small-volume foetal leaks could be detected with the SRP-Ret method in most of the 23 samples with negative KBT results. CD4000 SRP-Ret method performance for FMH determination was similar to that reported for flow cytometry.

  13. Multi-scale landscape factors influencing stream water quality in the state of Oregon.

    PubMed

    Nash, Maliha S; Heggem, Daniel T; Ebert, Donald; Wade, Timothy G; Hall, Robert K

    2009-09-01

    Enterococci bacteria are used to indicate the presence of human and/or animal fecal material in surface water. In addition to human influences on surface water quality, cattle grazing is a widespread and persistent ecological stressor in the western United States. Cattle may affect surface water quality directly by depositing nutrients and bacteria, and indirectly by damaging stream banks or removing vegetation cover, which may lead to increased sediment loads. This study used State of Oregon surface water data to determine the likelihood of animal pathogen presence using enterococci, and analyzed the spatial distribution and relationship of biotic (enterococci) and abiotic (nitrogen and phosphorus) surface water constituents to landscape metrics and other factors (e.g. human use, percent riparian cover, natural cover, grazing). We used a grazing potential index (GPI) based on proximity to water, land ownership and forage availability. The mean and variability of GPI, forage availability, stream density and length, and landscape metrics were related to enterococci and many forms of nitrogen and phosphorus in standard and logistic regression models. The GPI did not play a significant role in the models, but forage-related variables contributed significantly. Urban land use within the stream reach was the main driving factor where the threshold was exceeded (> or = 35 cfu/100 ml), whereas agriculture was the driving force in elevating enterococci at sites where the enterococci concentration was <35 cfu/100 ml. Landscape metrics related to the amount of agriculture, wetlands and urban land all contributed to increasing nutrients in surface water, but at different scales. The probability of a site having enterococci concentrations above the threshold was much lower in areas of natural land cover and much higher in areas with greater urban land use within 60 m of the stream. A 1% increase in natural land cover was associated with a 12% decrease in the predicted odds of a site exceeding the threshold. In contrast to natural land cover, a one-unit change in manmade barren and urban land use increased the likelihood of exceeding the threshold by 73% and 11%, respectively. Change in urban land use had a greater influence on the likelihood of a site exceeding the threshold than natural land cover did.

  14. Uncertainty associated with assessing semen volume: are volumetric and gravimetric methods that different?

    PubMed

    Woodward, Bryan; Gossen, Nicole; Meadows, Jessica; Tomlinson, Mathew

    2016-12-01

    The World Health Organization laboratory manual for the examination of human semen suggests that an indirect measurement of semen volume by weighing (gravimetric method) is more accurate than a direct measure using a serological pipette. A series of experiments was performed to determine the level of discrepancy between the two methods, using pipettes and a balance calibrated to a traceable standard. The median weights of 1.0 ml and 5.0 ml of semen were 1.03 g (range 1.02-1.05 g) and 5.11 g (range 4.95-5.16 g), respectively, suggesting a density for semen between 1.03 and 1.04 g/ml. When the containers were re-weighed after the removal of 5.0 ml of semen using a serological pipette, the mean residual loss was 0.12 ml (120 μl) or 0.12 g (median 100 μl, range 70-300 μl). Direct comparison of the volumetric and gravimetric methods in a total of 40 samples showed a mean difference of 0.25 ml (median 0.32 ± 0.67 ml), representing an error of 8.5%. Residual semen left in the container by weight was on average 0.11 g (median 0.10 g, range 0.05-0.19 g). Assuming a density of 1 g/ml, the average error between the volumetric and gravimetric methods was approximately 8% (p < 0.001). If, however, the WHO value for density is assumed (1.04 g/ml), the difference is reduced to 4.2%. At least 2.4-3.5% of this difference is also explained by the residual semen remaining in the container. This study suggests that by assuming a semen density of 1 g/ml, there is significant uncertainty associated with the average gravimetric measurement of semen volume. Laboratories may therefore prefer to provide in-house quality assurance data in order to be satisfied that 'estimating' semen volume is 'fit for purpose', as opposed to assuming the lower uncertainty associated with the WHO-recommended method.

  15. www.common-metrics.org: a web application to estimate scores from different patient-reported outcome measures on a common scale.

    PubMed

    Fischer, H Felix; Rose, Matthias

    2016-10-19

    Recently, a growing number of Item Response Theory (IRT) models have been published which allow estimation of a common latent variable from data derived with different Patient-Reported Outcomes (PROs). When using data from different PROs, direct estimation of the latent variable has some advantages over the use of sum-score conversion tables, but fitting such models with contemporary IRT software requires substantial proficiency in the field of psychometrics. We developed a web application ( http://www.common-metrics.org ), which allows easier estimation of latent variable scores using IRT models that calibrate different measures on instrument-independent scales. Currently, the application supports six different IRT models for Depression, Anxiety, and Physical Function. Based on published item parameters, users of the application can directly obtain latent trait estimates using expected a posteriori (EAP) estimation for sum scores as well as for specific response patterns, Bayes modal (MAP), weighted likelihood estimation (WLE) and maximum likelihood (ML) methods, under three different prior distributions. The obtained estimates can be downloaded and analyzed using standard statistical software. This application enhances the usability of IRT modeling for researchers by allowing comparison of latent trait estimates across different PROs, such as the Patient Health Questionnaire Depression (PHQ-9) and Anxiety (GAD-7) scales, the Center for Epidemiologic Studies Depression Scale (CES-D), the Beck Depression Inventory (BDI), the PROMIS Anxiety and Depression Short Forms, and others. Advantages of this approach include comparability of data derived with different measures and tolerance of missing values. The validity of the underlying models needs to be investigated in the future.
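
    The EAP estimator the application exposes can be sketched directly (an illustrative reconstruction under a two-parameter logistic model with hypothetical item parameters, not the calibrated parameters the site uses): the posterior over the latent trait is evaluated on a quadrature grid and its mean is returned:

        import numpy as np
        from scipy.stats import norm

        def eap(responses, a, b, grid=np.linspace(-4, 4, 81)):
            # 2PL item response probabilities on the grid
            p = 1 / (1 + np.exp(-a[:, None] * (grid[None, :] - b[:, None])))
            r = np.array(responses)[:, None]
            lik = np.prod(np.where(r == 1, p, 1 - p), axis=0)
            post = lik * norm.pdf(grid)     # standard-normal prior
            return (grid * post).sum() / post.sum()

        a = np.array([1.2, 0.8, 1.5, 1.0])    # hypothetical discriminations
        b = np.array([-0.5, 0.0, 0.7, 1.2])   # hypothetical difficulties
        print(eap([1, 1, 0, 0], a, b))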

  16. Inferring the parameters of a Markov process from snapshots of the steady state

    NASA Astrophysics Data System (ADS)

    Dettmer, Simon L.; Berg, Johannes

    2018-02-01

    We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics.
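
    A heavily simplified sketch of the idea for a discrete-state chain (a toy reconstruction under stated assumptions, not the authors' code): parametrize the generator Q, propagate the empirical distribution forward by a fictitious time tau with the matrix exponential, and score the observed snapshots under the propagated distribution:

        import numpy as np
        from scipy.linalg import expm
        from scipy.optimize import minimize

        def generator(k):
            # toy 3-state cycle with log-parametrized rates k12, k23, k31
            k12, k23, k31 = np.exp(k)
            return np.array([[-k12, k12, 0.0],
                             [0.0, -k23, k23],
                             [k31, 0.0, -k31]])

        def neg_propagator_loglik(k, samples, tau=0.5):
            counts = np.bincount(samples, minlength=3)
            p_emp = counts / counts.sum()
            G = expm(generator(k) * tau)    # propagator over time tau
            p_prop = p_emp @ G              # propagated empirical distribution
            return -(counts * np.log(p_prop)).sum()

        samples = np.array([0] * 50 + [1] * 30 + [2] * 20)  # fake snapshots
        res = minimize(neg_propagator_loglik, x0=np.zeros(3),
                       args=(samples,), method="Nelder-Mead")
        print(np.exp(res.x))

    The score is maximized when the propagated distribution matches the empirical one, i.e., when the fitted dynamics leave the observed steady state invariant.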

  17. The Equivalence of Information-Theoretic and Likelihood-Based Methods for Neural Dimensionality Reduction

    PubMed Central

    Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.

    2015-01-01

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448

  18. Dimensionality of the 9-item Utrecht Work Engagement Scale revisited: A Bayesian structural equation modeling approach.

    PubMed

    Fong, Ted C T; Ho, Rainbow T H

    2015-01-01

    The aim of this study was to reexamine the dimensionality of the widely used 9-item Utrecht Work Engagement Scale using the maximum likelihood (ML) approach and Bayesian structural equation modeling (BSEM) approach. Three measurement models (1-factor, 3-factor, and bi-factor models) were evaluated in two split samples of 1,112 health-care workers using confirmatory factor analysis and BSEM, which specified small-variance informative priors for cross-loadings and residual covariances. Model fit and comparisons were evaluated by posterior predictive p-value (PPP), deviance information criterion, and Bayesian information criterion (BIC). None of the three ML-based models showed an adequate fit to the data. The use of informative priors for cross-loadings did not improve the PPP for the models. The 1-factor BSEM model with approximately zero residual covariances displayed a good fit (PPP>0.10) to both samples and a substantially lower BIC than its 3-factor and bi-factor counterparts. The BSEM results demonstrate empirical support for the 1-factor model as a parsimonious and reasonable representation of work engagement.

  19. How the 2SLS/IV estimator can handle equality constraints in structural equation models: a system-of-equations approach.

    PubMed

    Nestler, Steffen

    2014-05-01

    Parameters in structural equation models are typically estimated using the maximum likelihood (ML) approach. Bollen (1996) proposed an alternative non-iterative, equation-by-equation estimator that uses instrumental variables. Although this two-stage least squares/instrumental variables (2SLS/IV) estimator has good statistical properties, one problem with its application is that parameter equality constraints cannot be imposed. This paper presents a mathematical solution to this problem that is based on an extension of the 2SLS/IV approach to a system of equations. We present an example in which our approach was used to examine strong longitudinal measurement invariance. We also investigated the new approach in a simulation study that compared it with ML in the examination of the equality of two latent regression coefficients and strong measurement invariance. Overall, the results show that the suggested approach is a useful extension of the original 2SLS/IV estimator and allows for the effective handling of equality constraints in structural equation models. © 2013 The British Psychological Society.

  20. Diversity-optimal power loading for intensity modulated MIMO optical wireless communications.

    PubMed

    Zhang, Yan-Yu; Yu, Hong-Yi; Zhang, Jian-Kang; Zhu, Yi-Jun

    2016-04-18

    In this paper, we consider the design of a space code for an intensity-modulated direct-detection multi-input multi-output optical wireless communication (IM/DD MIMO-OWC) system, in which the channel coefficients are independent and non-identically log-normal distributed, with variances and means known at the transmitter and channel state information available at the receiver. Utilizing the existing space code design criterion for IM/DD MIMO-OWC with a maximum likelihood (ML) detector, we design a diversity-optimal space code (DOSC) that maximizes both large-scale and small-scale diversity gains, and prove that the spatial repetition code (RC) with a diversity-optimized power allocation is diversity-optimal among all high-dimensional nonnegative space code schemes under a commonly used optical power constraint. In addition, we show that one of the significant advantages of the DOSC is that it allows low-complexity ML detection. Simulation results indicate that in high signal-to-noise ratio (SNR) regimes, our proposed DOSC significantly outperforms RC, which is the best space code currently available for such systems.

  1. Likelihoods for fixed rank nomination networks

    PubMed Central

    HOFF, PETER; FOSDICK, BAILEY; VOLFOVSKY, ALEX; STOVEL, KATHERINE

    2014-01-01

    Many studies that gather social network data use survey methods that lead to censored, missing, or otherwise incomplete information. For example, the popular fixed rank nomination (FRN) scheme, often used in studies of schools and businesses, asks study participants to nominate and rank at most a small number of contacts or friends, leaving the existence of other relations uncertain. However, most statistical models are formulated in terms of completely observed binary networks. Statistical analyses of FRN data with such models ignore the censored and ranked nature of the data and could potentially result in misleading statistical inference. To investigate this possibility, we compare Bayesian parameter estimates obtained from a likelihood for complete binary networks with those obtained from likelihoods that are derived from the FRN scheme, and therefore accommodate the ranked and censored nature of the data. We show analytically and via simulation that the binary likelihood can provide misleading inference, particularly for certain model parameters that relate network ties to characteristics of individuals and pairs of individuals. We also compare these different likelihoods in a data analysis of several adolescent social networks. For some of these networks, the parameter estimates from the binary and FRN likelihoods lead to different conclusions, indicating the importance of analyzing FRN data with a method that accounts for the FRN survey design. PMID:25110586

  2. Finite mixture model: A maximum likelihood estimation approach on time series data

    NASA Astrophysics Data System (ADS)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its desirable asymptotic properties: the estimator is consistent as the sample size increases to infinity, and it is asymptotically unbiased. Moreover, the parameter estimates obtained by maximum likelihood estimation attain the smallest asymptotic variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
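
    In practice such a fit is a few lines with modern tooling; a hedged sketch on synthetic one-dimensional data (the paper's rubber-price/exchange-rate data are not reproduced here):

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(4)
        # synthetic stand-in for a series with two regimes
        x = np.concatenate([rng.normal(-1.0, 0.5, 300),
                            rng.normal(2.0, 1.0, 200)]).reshape(-1, 1)

        gm = GaussianMixture(n_components=2, random_state=0).fit(x)  # ML via EM
        print(gm.weights_, gm.means_.ravel(), gm.covariances_.ravel())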

  3. BindML/BindML+: Detecting Protein-Protein Interaction Interface Propensity from Amino Acid Substitution Patterns.

    PubMed

    Wei, Qing; La, David; Kihara, Daisuke

    2017-01-01

    Prediction of protein-protein interaction sites in a protein structure provides important information for elucidating the mechanism of protein function and can also be useful in guiding modeling or design procedures for protein complex structures. Since prediction methods essentially assess the propensity of amino acids to be part of a protein docking interface, they can help in designing protein-protein interactions. Here, we introduce the BindML and BindML+ protein-protein interaction site prediction methods. BindML predicts protein-protein interaction sites by identifying mutation patterns found in known protein-protein complexes using phylogenetic substitution models. BindML+ is an extension of BindML that distinguishes permanent and transient types of protein-protein interaction sites. We developed an interactive web server that provides a convenient interface to assist in the structural visualization of protein-protein interaction site predictions. The input to the web server is a tertiary structure of interest. BindML and BindML+ are available at http://kiharalab.org/bindml/ and http://kiharalab.org/bindml/plus/ .

  4. Maximum-Likelihood Methods for Processing Signals From Gamma-Ray Detectors

    PubMed Central

    Barrett, Harrison H.; Hunter, William C. J.; Miller, Brian William; Moore, Stephen K.; Chen, Yichun; Furenlid, Lars R.

    2009-01-01

    In any gamma-ray detector, each event produces electrical signals on one or more circuit elements. From these signals, we may wish to determine the presence of an interaction; whether multiple interactions occurred; the spatial coordinates in two or three dimensions of at least the primary interaction; or the total energy deposited in that interaction. We may also want to compute listmode probabilities for tomographic reconstruction. Maximum-likelihood methods provide a rigorous and in some senses optimal approach to extracting this information, and the associated Fisher information matrix provides a way of quantifying and optimizing the information conveyed by the detector. This paper will review the principles of likelihood methods as applied to gamma-ray detectors and illustrate their power with recent results from the Center for Gamma-ray Imaging. PMID:20107527

  5. Simultaneous Estimation of Amlodipine Besilate and Olmesartan Medoxomil in Pharmaceutical Dosage Form

    PubMed Central

    Wankhede, S. B.; Wadkar, S. B.; Raka, K. C.; Chitlange, S. S.

    2009-01-01

    Two UV spectrophotometric methods and one reverse-phase high performance liquid chromatography method have been developed for the simultaneous estimation of amlodipine besilate and olmesartan medoxomil in tablet dosage form. The first UV spectrophotometric method was a determination using the simultaneous equation method at 237.5 nm and 255.5 nm over the concentration ranges 10-50 μg/ml and 10-50 μg/ml for amlodipine besilate and olmesartan medoxomil, with accuracies of 100.09% and 100.22%, respectively. The second UV spectrophotometric method was a determination using the area under curve method at 242.5-232.5 nm and 260.5-250.5 nm over the concentration ranges of 10-50 μg/ml and 10-50 μg/ml for amlodipine besilate and olmesartan medoxomil, with accuracies of 100.10% and 100.48%, respectively. The reverse-phase high performance liquid chromatography analysis was carried out using 0.05 M potassium dihydrogen phosphate buffer:acetonitrile (50:50 v/v) as the mobile phase and a Kromasil C18 (4.6 mm i.d. × 250 mm) column as the stationary phase, with a detection wavelength of 238 nm and a flow rate of 1.0 ml/min. Retention times for amlodipine besilate and olmesartan medoxomil were 3.69 and 5.36 min, respectively. Linearity was obtained in the concentration ranges of 4-20 μg/ml and 10-50 μg/ml for amlodipine besilate and olmesartan medoxomil, respectively. The proposed methods can be used for the estimation of amlodipine besilate and olmesartan medoxomil in tablet dosage form provided all the validation parameters are met.

  6. Cosmological parameter estimation using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, and these make the problem of parameter estimation challenging. It is common practice to employ the Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence, called Particle Swarm Optimization (PSO) for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
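
    A minimal PSO sketch (a generic global-best implementation with a stand-in objective, not the WMAP likelihood pipeline): each particle tracks its personal best, the swarm tracks a global best, and velocities blend inertia with attraction to both:

        import numpy as np

        def pso(f, lo, hi, n_particles=30, iters=200,
                w=0.7, c1=1.5, c2=1.5, seed=5):
            rng = np.random.default_rng(seed)
            dim = len(lo)
            x = rng.uniform(lo, hi, (n_particles, dim))
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
            g = pbest[pbest_f.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                fx = np.array([f(p) for p in x])
                better = fx < pbest_f
                pbest[better], pbest_f[better] = x[better], fx[better]
                g = pbest[pbest_f.argmin()].copy()
            return g, pbest_f.min()

        # stand-in for a negative log-likelihood over two parameters
        f = lambda p: (p[0] - 0.3) ** 2 + 10 * (p[1] - 0.7) ** 2
        print(pso(f, np.array([0.0, 0.0]), np.array([1.0, 1.0])))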

  7. Maximal likelihood correspondence estimation for face recognition across pose.

    PubMed

    Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang

    2014-10-01

    Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods have been proposed to estimate the semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems of previous image matching-based correspondence learning methods: 1) they fail to fully exploit face-specific structure information in correspondence estimation, and 2) they fail to learn a personalized correspondence for each probe image. To this end, we first build a model, termed the morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn a personalized correspondence based on a maximal-likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using the linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of the novel maximal likelihood objective, the proposed MLCE method can reliably learn the correspondence between faces in different poses even in a complex wild environment, i.e., the Labeled Faces in the Wild database.

  8. A Well-Resolved Phylogeny of the Trees of Puerto Rico Based on DNA Barcode Sequence Data

    PubMed Central

    Muscarella, Robert; Uriarte, María; Erickson, David L.; Swenson, Nathan G.; Zimmerman, Jess K.; Kress, W. John

    2014-01-01

    Background The use of phylogenetic information in community ecology and conservation has grown in recent years. Two key issues for community phylogenetics studies, however, are (i) low terminal phylogenetic resolution and (ii) arbitrarily defined species pools. Methodology/principal findings We used three DNA barcodes (plastid DNA regions rbcL, matK, and trnH-psbA) to infer a phylogeny for 527 native and naturalized trees of Puerto Rico, representing the vast majority of the entire tree flora of the island (89%). We used a maximum likelihood (ML) approach with and without a constraint tree that enforced monophyly of recognized plant orders. Based on 50% consensus trees, the ML analyses improved phylogenetic resolution relative to a comparable phylogeny generated with Phylomatic (proportion of internal nodes resolved: constrained ML = 74%, unconstrained ML = 68%, Phylomatic = 52%). We quantified the phylogenetic composition of 15 protected forests in Puerto Rico using the constrained ML and Phylomatic phylogenies. We found some evidence that tree communities in areas of high water stress were relatively phylogenetically clustered. Reducing the scale at which the species pool was defined (from island to soil types) changed some of our results depending on which phylogeny (ML vs. Phylomatic) was used. Overall, the increased terminal resolution provided by the ML phylogeny revealed additional patterns that were not observed with a less-resolved phylogeny. Conclusions/significance With the DNA barcode phylogeny presented here (based on an island-wide species pool), we show that a more fully resolved phylogeny increases power to detect nonrandom patterns of community composition in several Puerto Rican tree communities. Especially if combined with additional information on species functional traits and geographic distributions, this phylogeny will (i) facilitate stronger inferences about the role of historical processes in governing the assembly and composition of Puerto Rican forests, (ii) provide insight into Caribbean biogeography, and (iii) aid in incorporating evolutionary history into conservation planning. PMID:25386879

  9. A 3D Voronoi+Gapper Galaxy Cluster Finder in Redshift Space to z ∼ 0.2 I: an Algorithm Optimized for the 2dFGRS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pereira, Sebastián; Campusano, Luis E.; Hitschfeld-Kahler, Nancy

    This paper is the first in a series, presenting a new galaxy cluster finder based on a three-dimensional Voronoi tessellation plus a maximum likelihood estimator, followed by gapping-filtering in radial velocity (VoML+G). The scientific aim of the series is a reassessment of the diversity of optical clusters in the local universe. A mock galaxy database mimicking the southern strip of the magnitude(blue)-limited 2dF Galaxy Redshift Survey (2dFGRS), for the redshift range 0.009 < z < 0.22, is built on the basis of the Millennium Simulation of the LCDM cosmology, and a reference catalog of "Millennium clusters," spanning the 1.0 × 10^12 to 1.0 × 10^15 M⊙ h^-1 dark matter (DM) halo mass range, is recorded. The validation of VoML+G is performed through its application to the mock data and the ensuing determination of the completeness and purity of the cluster detections by comparison with the reference catalog. The execution of VoML+G over the 2dFGRS mock data identified 1614 clusters: 22% with N_g ≥ 10, 64% with 10 > N_g ≥ 5, and 14% with N_g < 5. The ensemble of VoML+G clusters has ∼59% completeness and ∼66% purity, whereas the subsample with N_g ≥ 10, to z ∼ 0.14, has greatly improved mean rates of ∼75% and ∼90%, respectively. The VoML+G cluster velocity dispersions are found to be compatible with those of the "Millennium clusters" over the 300-1000 km s^-1 interval, i.e., for cluster halo masses in excess of ∼3.0 × 10^13 M⊙ h^-1.

  10. A well-resolved phylogeny of the trees of Puerto Rico based on DNA barcode sequence data.

    PubMed

    Muscarella, Robert; Uriarte, María; Erickson, David L; Swenson, Nathan G; Zimmerman, Jess K; Kress, W John

    2014-01-01

    The use of phylogenetic information in community ecology and conservation has grown in recent years. Two key issues for community phylogenetics studies, however, are (i) low terminal phylogenetic resolution and (ii) arbitrarily defined species pools. We used three DNA barcodes (plastid DNA regions rbcL, matK, and trnH-psbA) to infer a phylogeny for 527 native and naturalized trees of Puerto Rico, representing the vast majority of the entire tree flora of the island (89%). We used a maximum likelihood (ML) approach with and without a constraint tree that enforced monophyly of recognized plant orders. Based on 50% consensus trees, the ML analyses improved phylogenetic resolution relative to a comparable phylogeny generated with Phylomatic (proportion of internal nodes resolved: constrained ML = 74%, unconstrained ML = 68%, Phylomatic = 52%). We quantified the phylogenetic composition of 15 protected forests in Puerto Rico using the constrained ML and Phylomatic phylogenies. We found some evidence that tree communities in areas of high water stress were relatively phylogenetically clustered. Reducing the scale at which the species pool was defined (from island to soil types) changed some of our results depending on which phylogeny (ML vs. Phylomatic) was used. Overall, the increased terminal resolution provided by the ML phylogeny revealed additional patterns that were not observed with a less-resolved phylogeny. With the DNA barcode phylogeny presented here (based on an island-wide species pool), we show that a more fully resolved phylogeny increases power to detect nonrandom patterns of community composition in several Puerto Rican tree communities. Especially if combined with additional information on species functional traits and geographic distributions, this phylogeny will (i) facilitate stronger inferences about the role of historical processes in governing the assembly and composition of Puerto Rican forests, (ii) provide insight into Caribbean biogeography, and (iii) aid in incorporating evolutionary history into conservation planning.

  11. ML 3.0 smoothed aggregation user's guide.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sala, Marzio; Hu, Jonathan Joseph; Tuminaro, Raymond Stephen

    2004-05-01

    ML is a multigrid preconditioning package intended to solve linear systems of equations Ax = b, where A is a user-supplied n x n sparse matrix, b is a user-supplied vector of length n, and x is a vector of length n to be computed. ML should be used on large sparse linear systems arising from partial differential equation (PDE) discretizations. While technically any linear system can be considered, ML should be used on linear systems that correspond to things that work well with multigrid methods (e.g. elliptic PDEs). ML can be used as a stand-alone package or to generate preconditioners for a traditional iterative solver package (e.g. Krylov methods). We have supplied support for working with the AZTEC 2.1 and AZTECOO iterative packages [15]. However, other solvers can be used by supplying a few functions. This document describes one specific algebraic multigrid approach: smoothed aggregation. This approach is used within several specialized multigrid methods: one for the eddy current formulation of Maxwell's equations, and a multilevel and domain decomposition method for symmetric and non-symmetric systems of equations (like elliptic equations, or compressible and incompressible fluid dynamics problems). Other methods exist within ML but are not described in this document. Examples are given illustrating the problem definition and exercising multigrid options.
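
    ML itself is a C++ package within Trilinos, but the smoothed-aggregation approach it documents can be tried from Python with the independent PyAMG library (an analogous implementation, not ML's own API); a short sketch, assuming pyamg and scipy are installed:

        import numpy as np
        import pyamg
        from scipy.sparse.linalg import cg

        A = pyamg.gallery.poisson((100, 100), format="csr")  # 2-D model problem
        b = np.random.rand(A.shape[0])

        ml = pyamg.smoothed_aggregation_solver(A)   # build the AMG hierarchy
        x = ml.solve(b, tol=1e-10)                  # use it as a solver...
        print(np.linalg.norm(b - A @ x))

        # ...or as a preconditioner for a Krylov method, as the guide describes
        x, info = cg(A, b, M=ml.aspreconditioner())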

  12. ML 3.1 smoothed aggregation user's guide.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sala, Marzio; Hu, Jonathan Joseph; Tuminaro, Raymond Stephen

    2004-10-01

    ML is a multigrid preconditioning package intended to solve linear systems of equations Ax = b where A is a user supplied n x n sparse matrix, b is a user supplied vector of length n and x is a vector of length n to be computed. ML should be used on large sparse linear systems arising from partial differential equation (PDE) discretizations. While technically any linear system can be considered, ML should be used on linear systems that correspond to things that work well with multigrid methods (e.g. elliptic PDEs). ML can be used as a stand-alone package or to generate preconditioners for a traditional iterative solver package (e.g. Krylov methods). We have supplied support for working with the Aztec 2.1 and AztecOO iterative package [16]. However, other solvers can be used by supplying a few functions. This document describes one specific algebraic multigrid approach: smoothed aggregation. This approach is used within several specialized multigrid methods: one for the eddy current formulation for Maxwell's equations, and a multilevel and domain decomposition method for symmetric and nonsymmetric systems of equations (like elliptic equations, or compressible and incompressible fluid dynamics problems). Other methods exist within ML but are not described in this document. Examples are given illustrating the problem definition and exercising multigrid options.

  13. Simultaneous measurement of chlorophyll and astaxanthin in Haematococcus pluvialis cells by first-order derivative ultraviolet-visible spectrophotometry.

    PubMed

    Lababpour, Abdolmajid; Lee, Choul-Gyun

    2006-02-01

    A first-order derivative spectrophotometric method has been developed for the simultaneous measurement of chlorophyll and astaxanthin concentrations in Haematococcus pluvialis cells. Acetone was selected for the extraction of pigments because of its good sensitivity and low toxicity compared with the other organic solvents tested; the tested solvents included acetone, methanol, hexane, chloroform, n-propanol, and acetonitrile. The first-order derivative spectrophotometric method was used to eliminate the effects of the overlapping of the chlorophyll and astaxanthin peaks. The linear ranges in the 1D evaluation were from 0.50 to 20.0 microg x ml(-1) for chlorophyll and from 1.00 to 12.0 microg x ml(-1) for astaxanthin. The limits of detection of the analytical procedure were found to be 0.35 microg x ml(-1) for chlorophyll and 0.25 microg x ml(-1) for astaxanthin. The relative standard deviations for the determination of 7.0 microg x ml(-1) chlorophyll and 5.0 microg x ml(-1) astaxanthin were 1.2% and 1.1%, respectively. The procedure was found to be simple, rapid, and reliable. The method was successfully applied to the determination of chlorophyll and astaxanthin concentrations in H. pluvialis cells, and good agreement was achieved between the results obtained by the proposed method and the HPLC method.

  14. Determination of bisphenol A in human serum by high-performance liquid chromatography with multi-electrode electrochemical detection.

    PubMed

    Inoue, K; Kato, K; Yoshimura, Y; Makino, T; Nakazawa, H

    2000-11-10

    A simple and sensitive method using high-performance liquid chromatography with multi-electrode electrochemical detection (HPLC-ED), including a coulometric array of four electrochemical sensors, has been developed for the determination of bisphenol A in water and human serum. For good separation and detection of bisphenol A, a CAPCELL PAK UG 120 C18 reversed-phase column and a mobile phase consisting of 0.3% phosphoric acid-acetonitrile (60:40) were used. The detection limit obtained by the HPLC-ED method was 0.01 ng/ml (0.5 pg), more than 3000 times lower than that obtained by the ultraviolet (UV) method, and more than 200 times lower than that obtained by the fluorescence (FL) method. Bisphenol A in water and serum samples was pretreated by solid-phase extraction (SPE) after removing possible contamination derived from the plastic SPE cartridges and the water used for the pretreatment. A trace amount (ND to approximately 0.013 ng/ml) of bisphenol A was detected from the parts of the cartridges (filtration column, sorbent bed and frits) by extraction with methanol, and it was completely removed by washing with at least 15 ml of methanol in the operation process. The concentrations of bisphenol A in tap water and Milli-Q-purified water were found to be 0.01 and 0.02 ng/ml, respectively. For that reason, bisphenol A-free water was prepared by trapping bisphenol A from water using an Empore disk. In every pretreatment, SPE of water and serum samples was performed using bisphenol A-free water and washing with 15 ml of methanol. The yields obtained from the recovery tests using water to which 0.5 or 0.05 ng/ml of bisphenol A was added were 83.8 to 98.2%, and the RSDs were 3.4 to 6.1%, respectively. The yields obtained from the recovery tests by OASIS HLB using serum to which 1.0 ng/ml or 0.1 ng/ml of bisphenol A was added were 79.0% and 87.3%, and the RSDs were 5.1% and 13.5%, respectively. The limits of quantification in water and serum samples were 0.01 ng/ml and 0.05 ng/ml, respectively. The method was applied to the determination of bisphenol A in a healthy human serum sample, and the concentration detected was 0.32 ng/ml. From these results, the HPLC-ED method should be highly useful for the determination of bisphenol A at low concentration levels in water and biological samples.

  15. Program for Weibull Analysis of Fatigue Data

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2005-01-01

    A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) maximum-likelihood estimates of the Weibull-distribution parameters; (2) data for contour plots of the relative likelihood for the two parameters; (3) data for contour plots of joint confidence regions; (4) data for the profile likelihood of the Weibull-distribution parameters; (5) data for the profile likelihood of any percentile of the distribution; and (6) likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension of tests is "censoring"). The analytical approach followed in this program is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by use of the likelihood-ratio method.
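
    The core computation such a program performs can be sketched compactly. Below is a minimal illustration, not the NASA Fortran program itself: a maximum-likelihood fit of a two-parameter Weibull distribution to fatigue lives with type-I censoring, where suspended tests contribute only the survival probability at the suspension time. The data values are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

lives = np.array([1.2e5, 2.3e5, 3.1e5, 4.4e5, 5.0e5])  # observed failures (cycles)
censor_time = 6.0e5                                     # pre-specified suspension time
n_suspended = 3                                         # unfailed units removed

def neg_log_lik(params):
    k, lam = params  # Weibull shape and scale
    if k <= 0 or lam <= 0:
        return np.inf
    z = lives / lam
    ll_fail = np.sum(np.log(k / lam) + (k - 1) * np.log(z) - z**k)  # log f(t) per failure
    ll_cens = -n_suspended * (censor_time / lam) ** k               # log S(c) per suspension
    return -(ll_fail + ll_cens)

fit = minimize(neg_log_lik, x0=[1.5, 4.0e5], method="Nelder-Mead")
print(fit.x)  # ML estimates of (shape, scale)
# Likelihood-ratio confidence regions follow by contouring neg_log_lik against its
# minimum (2 * delta <= chi-square quantile), as items (2)-(6) above describe.
```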

  16. A novel method for blood volume estimation using trivalent chromium in rabbit models.

    PubMed

    Baby, Prathap Moothamadathil; Kumar, Pramod; Kumar, Rajesh; Jacob, Sanu S; Rawat, Dinesh; Binu, V S; Karun, Kalesh M

    2014-05-01

    Blood volume measurement, though important in the management of critically ill patients, is not routinely performed in clinical practice owing to the labour-intensive, intricate and time-consuming nature of existing methods. The aim was to compare blood volume estimations using trivalent chromium [(51)Cr(III)] and the standard Evans blue dye (EBD) method in New Zealand white rabbit models and to establish a correction factor (CF). Blood volume estimation in 33 rabbits was carried out using the EBD method, with concentration determined using a spectrophotometric assay, followed by blood volume estimation using direct injection of (51)Cr(III). Twenty of the 33 rabbits were used to find the CF by dividing the blood volume estimated using EBD by the blood volume estimated using (51)Cr(III). The CF was validated in 13 rabbits by multiplying it with the blood volume values obtained using (51)Cr(III). The mean circulating blood volume of the 33 rabbits using EBD was 142.02 ± 22.77 ml or 65.76 ± 9.31 ml/kg, and using (51)Cr(III) it was estimated to be 195.66 ± 47.30 ml or 89.81 ± 17.88 ml/kg. The CF was found to be 0.77. The mean blood volume of the 13 rabbits measured using EBD was 139.54 ± 27.19 ml or 66.33 ± 8.26 ml/kg, and using (51)Cr(III) with the CF it was 152.73 ± 46.25 ml or 71.87 ± 13.81 ml/kg (P = 0.11). The estimation of blood volume using (51)Cr(III) with the CF was comparable to the standard EBD method. With further research in this direction, we envisage human blood volume estimation using (51)Cr(III) finding application in acute clinical settings.

  17. Multiseed liposomal drug delivery system using micelle gradient as driving force to improve amphiphilic drug retention and its anti-tumor efficacy.

    PubMed

    Zhang, Wenli; Li, Caibin; Jin, Ya; Liu, Xinyue; Wang, Zhiyu; Shaw, John P; Baguley, Bruce C; Wu, Zimei; Liu, Jianping

    2018-11-01

    To improve drug retention in carriers for the amphiphilic drug asulacrine (ASL), a novel active loading method using a micelle gradient was developed to fabricate ASL-loaded multiseed liposomes (ASL-ML). Empty ML were prepared by hydrating a thin film with empty micelles. The micelles in the liposomal compartment, acting as a 'micelle pool', then drove the drug to be loaded after the outer micelles were removed. Mechanistic studies, including critical micelle concentration (CMC) determination, tests of factors influencing entrapment efficiency (EE), structure visualization, and drug release, were carried out to explore the mechanism of active loading, the location of ASL, and the structure of ASL-ML. Comparisons were made between the pre-loading and active loading methods. Finally, the extended drug retention capacity of ML was evaluated through pharmacokinetic, tissue irritancy, and in vivo anti-tumor activity studies. Comprehensive results from fluorescence and transmission electron microscopy (TEM) observation, EE comparison, and release studies demonstrated the formation of the micelles-in-liposome shell structure of ASL-ML without inter-carrier fusion and the location of drug mainly in the inner micelles, as well as the superiority of post-loading over the pre-loading method, in which drug in micelles shifted onto the bilayer membrane. It was observed that drug amphiphilicity and the interaction of micelles with drug were the two prerequisites for this active loading method. The extended retention capacity of ML was verified through a prolonged half-life, reduced paw-lick responses in rats, and enhanced tumor inhibition in model mice. In conclusion, ASL-ML prepared by the active loading method can effectively load drug into micelles with the expected structure and improve drug retention.

  18. [Cord blood procalcitonin in the assessment of early-onset neonatal sepsis].

    PubMed

    Oria de Rueda Salguero, Olivia; Beceiro Mosquera, José; Barrionuevo González, Marta; Ripalda Crespo, María Jesús; Olivas López de Soria, Cristina

    2017-08-01

    Early diagnosis of early-onset neonatal sepsis (EONS) is essential to reduce morbidity and mortality. Procalcitonin (PCT) in cord blood could allow identification of infected patients from birth. The aim was to study the usefulness and safety of a procedure for the evaluation of newborns at risk of EONS, based on the determination of PCT in cord blood. Neonates with infectious risk factors, born in our hospital from October 2013 to January 2015, were included. They were managed according to an algorithm based on the cord blood PCT value (<0.6 ng/ml versus ≥0.6 ng/ml), and were later classified as proven infection, probable infection, or no infection. Of the 2,519 infants born in the study period, 136 met the inclusion criteria. None of the 120 cases with PCT <0.6 ng/ml in cord blood developed EONS (100% negative predictive value). On the other hand, of the 16 cases with PCT ≥0.6 ng/ml, 10 were proven or probably infected (62.5% positive predictive value). The sensitivity of PCT for infection was 100%, with a specificity of 95.2% (area under the receiver operating characteristic curve 0.969). The incidence of infection in the study group was 7.4%, and 26.1% in cases with maternal chorioamnionitis. Twenty-one newborns (15.4%) received antibiotic therapy. The studied protocol was shown to be effective and safe in differentiating between patients at increased risk of developing EONS, in whom the diagnostic and therapeutic approach should be more interventionist, and those with a lower likelihood of sepsis, who would benefit from more conservative management. Copyright © 2016 Asociación Española de Pediatría. Published by Elsevier España, S.L.U. All rights reserved.

  19. Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2014-02-01

    Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
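
    The mechanics of a parametric, simulation-based likelihood approximation inside an MCMC sampler can be illustrated in a few lines. The following is a toy sketch, not the FORMIND setup: a stand-in stochastic model is simulated at each proposed parameter value, a Gaussian is fitted to the simulated summary statistics, and that approximate likelihood drives a Metropolis sampler. All names and numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta, n=200):
    # stand-in stochastic model; its outputs scatter around theta
    return rng.normal(theta, 1.0 + 0.5 * abs(theta), size=n)

obs_stat = 3.2  # observed summary statistic (invented)

def approx_log_lik(theta, n_sim=200):
    # parametric (Gaussian) approximation fitted to simulated statistics
    sims = simulate(theta, n_sim)
    mu, sd = sims.mean(), sims.std(ddof=1)
    return -0.5 * ((obs_stat - mu) / sd) ** 2 - np.log(sd)

theta, chain = 0.0, []
for _ in range(2000):  # Metropolis sampler with symmetric random-walk proposals
    proposal = theta + rng.normal(0, 0.5)
    # note: the approximation is re-simulated each iteration (a noisy, toy variant)
    if np.log(rng.uniform()) < approx_log_lik(proposal) - approx_log_lik(theta):
        theta = proposal
    chain.append(theta)

print(np.mean(chain[500:]))  # posterior mean, near the value that generated obs_stat
```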

  20. Empirical likelihood-based confidence intervals for mean medical cost with censored data.

    PubMed

    Jeyarajah, Jenny; Qin, Gengsheng

    2017-11-10

    In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with that of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performances than existing methods. Finally, we illustrate our proposed methods with a relevant example. Copyright © 2017 John Wiley & Sons, Ltd.
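
    As a flavor of the jackknife machinery (leaving censoring and the influence-function construction aside), the sketch below computes jackknife pseudo-values and a resulting confidence interval for a mean cost. For the simple mean the pseudo-values reduce to the observations themselves, so the interval matches the usual t-interval; the same recipe carries over to the more complex censored-data estimators the paper treats. Data are invented.

```python
import numpy as np
from scipy import stats

costs = np.array([1200.0, 340.0, 5600.0, 980.0, 2300.0, 410.0, 8700.0, 1500.0])
n = costs.size

loo_means = np.array([np.delete(costs, i).mean() for i in range(n)])  # leave-one-out means
pseudo = n * costs.mean() - (n - 1) * loo_means                       # jackknife pseudo-values
se = pseudo.std(ddof=1) / np.sqrt(n)                                  # jackknife standard error

ci = costs.mean() + np.array([-1, 1]) * stats.t.ppf(0.975, n - 1) * se
print(ci)  # 95% confidence interval for the mean cost
```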

  1. Improving Crotalidae polyvalent immune Fab reconstitution times.

    PubMed

    Quan, Asia N; Quan, Dan; Curry, Steven C

    2010-06-01

    Crotalidae polyvalent immune Fab (CroFab) is used to treat rattlesnake envenomations in the United States. Time to infusion may be a critical factor in the treatment of these bites. Per the manufacturer's instructions, 10 mL of sterile water for injection (SWI) and hand swirling are recommended for reconstitution. We wondered whether completely filling vials with 25 mL of SWI would result in shorter reconstitution times than using 10-mL volumes, and how hand mixing compared to mechanical agitation of vials or leaving vials undisturbed. Six sets of 5 vials were filled with either 10 mL or 25 mL. Three mixing techniques were used: undisturbed; agitation with a mechanical agitator; and continuous hand rolling and inverting of vials. Dissolution was determined by observation, and the time to complete dissolution was recorded for each vial. Nonparametric 2-tailed P values were calculated. Filling vials completely with 25 mL resulted in quicker dissolution than using 10-mL volumes, regardless of mixing method (2-tailed P = .024). Mixing by hand was faster than the other methods (P < .001). Reconstitution with 25 mL and hand mixing resulted in the shortest dissolution times (median, 1.1 minutes; range, 0.9-1.3 minutes). This appeared clinically important because dissolution times using 10 mL and mechanical rocking of vials (median, 26.4 minutes) or leaving vials undisturbed (median, 33.6 minutes) were several-fold longer. Hand mixing after filling vials completely with 25 mL results in shorter dissolution times than using 10 mL or other methods of mixing and is recommended, especially when preparing initial doses of CroFab. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  2. Partial Nephrectomy Versus Radical Nephrectomy for Clinical T1b and T2 Renal Tumors: A Systematic Review and Meta-analysis of Comparative Studies.

    PubMed

    Mir, Maria Carmen; Derweesh, Ithaar; Porpiglia, Francesco; Zargar, Homayoun; Mottrie, Alexandre; Autorino, Riccardo

    2017-04-01

    Partial nephrectomy (PN) is the reference standard of management for a cT1a renal mass. However, its role in the management of larger tumors (cT1b and cT2) is still under scrutiny. To conduct a meta-analysis assessing functional, oncologic, and perioperative outcomes of PN and radical nephrectomy (RN) in the specific case of larger renal tumors (≥cT1b). The primary endpoint was an overall analysis of cT1b and cT2 masses. The secondary endpoint was a sensitivity analysis for cT2 only. A systematic literature review was performed up to December 2015 using multiple search engines to identify eligible comparative studies. A formal meta-analysis was performed for studies comparing PN to RN for both cT1b and cT2 tumors. In addition, a sensitivity analysis including the subgroup of studies comparing PN to RN for cT2 only was conducted. Pooled estimates were calculated using a fixed-effects model if no significant heterogeneity was identified; alternatively, a random-effects model was used when significant heterogeneity was detected. For continuous outcomes, the weighted mean difference (WMD) was used as the summary measure. For binary variables, the odds ratio (OR) or risk ratio (RR) was calculated with 95% confidence interval (CI). Statistical analyses were performed using Review Manager 5 (Cochrane Collaboration, Oxford, UK). Overall, 21 case-control studies including 11,204 patients (RN 8620; PN 2584) were deemed eligible and included in the analysis. Patients undergoing PN were younger (WMD -2.3 yr; p<0.001) and had smaller masses (WMD -0.65 cm; p<0.001). Lower estimated blood loss was found for RN (WMD 102.6 ml; p<0.001). There was a higher likelihood of postoperative complications for PN (RR 1.74, 95% CI 1.34-2.2; p<0.001). Pathology revealed a higher rate of malignant histology for the RN group (RR 0.97; p=0.02). PN was associated with better postoperative renal function, as shown by a higher postoperative estimated glomerular filtration rate (eGFR; WMD 12.4 ml/min; p<0.001), a lower likelihood of postoperative onset of chronic kidney disease (RR 0.36; p<0.001), and a lower decline in eGFR (WMD -8.6 ml/min; p<0.001). The PN group had a lower likelihood of tumor recurrence (OR 0.6; p<0.001), cancer-specific mortality (OR 0.58; p=0.001), and all-cause mortality (OR 0.67; p=0.005). Four studies compared PN (n=212) to RN (n=1792) in the specific case of T2 tumors (>7 cm). In this subset of patients, the estimated blood loss was higher for PN (WMD 107.6 ml; p<0.001), as was the likelihood of complications (RR 2.0; p<0.001). Both the recurrence rate (RR 0.61; p=0.004) and cancer-specific mortality (RR 0.65; p=0.03) were lower for PN. PN is a viable treatment option for larger renal tumors, as it offers acceptable surgical morbidity, equivalent cancer control, and better preservation of renal function, with potential for better long-term survival. For T2 tumors, PN use should be more selective, and specific patient and tumor factors should be considered. Further investigation, ideally in a prospective randomized fashion, is warranted to better define the role of PN in this challenging clinical scenario. We performed a cumulative analysis of the literature to determine the best treatment option in cases of localized kidney tumor of higher clinical stage (T1b and T2, as based on preoperative imaging). Our findings suggest that removing only the tumor and saving the kidney might be an effective treatment modality in terms of cancer control, with the advantage of preserving kidney function.
However, a higher risk of perioperative complications should be taken into account when facing larger tumors (clinical stage T2) with kidney-sparing surgery. Copyright © 2016 European Association of Urology. Published by Elsevier B.V. All rights reserved.
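
    For readers unfamiliar with the pooling step described above, a minimal fixed-effects (inverse-variance) pooling of a continuous outcome looks like the sketch below. The study-level numbers are invented, not taken from the review.

```python
import numpy as np

wmd = np.array([10.5, 14.2, 8.9])  # per-study mean differences (e.g. eGFR, ml/min)
se = np.array([2.0, 3.1, 2.5])     # per-study standard errors

w = 1.0 / se**2                    # inverse-variance weights
pooled = np.sum(w * wmd) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
ci = pooled + np.array([-1.96, 1.96]) * pooled_se

print(pooled, ci)  # pooled WMD with 95% CI; a random-effects model would further
                   # widen the interval by adding a between-study variance to se**2
```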

  3. Application of maximum likelihood methods to laser Thomson scattering measurements of low density plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Washeleski, Robert L.; Meyer, Edmond J. IV; King, Lyon B.

    2013-10-15

    Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.

  4. Application of maximum likelihood methods to laser Thomson scattering measurements of low density plasmas.

    PubMed

    Washeleski, Robert L; Meyer, Edmond J; King, Lyon B

    2013-10-01

    Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.
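
    The benefit of counting non-arrival events can be seen in a toy model (a sketch of the general idea, not the authors' estimator): with Poisson photon statistics and a detector that cannot resolve multi-photon pile-up, the per-shot record is effectively binary, and the maximum-likelihood estimate of the mean photon number follows from the fraction of empty shots via P(no photon) = exp(-mu).

```python
import numpy as np

rng = np.random.default_rng(0)
mu_true = 0.7                           # mean photons per shot (invented)
shots = rng.poisson(mu_true, size=5000)
detected = shots > 0                    # pile-up: multi-photon arrivals look like one

n0 = np.count_nonzero(~detected)        # non-arrival events
mu_ml = -np.log(n0 / detected.size)     # ML estimate from the null fraction

print(mu_ml)                            # close to 0.7; the naive mean of `detected`
                                        # (about 0.50 here) would underestimate mu
```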

  5. Evolution of Rhizaria: new insights from phylogenomic analysis of uncultivated protists.

    PubMed

    Burki, Fabien; Kudryavtsev, Alexander; Matz, Mikhail V; Aglyamova, Galina V; Bulman, Simon; Fiers, Mark; Keeling, Patrick J; Pawlowski, Jan

    2010-12-02

    Recent phylogenomic analyses have revolutionized our view of eukaryote evolution by revealing unexpected relationships between and within the eukaryotic supergroups. However, for several groups of uncultivable protists, only the ribosomal RNA genes and a handful of proteins are available, often leading to unresolved evolutionary relationships. A striking example concerns the supergroup Rhizaria, which comprises several groups of uncultivable free-living protists such as radiolarians, foraminiferans and gromiids, as well as the parasitic plasmodiophorids and haplosporids. Thus far, the relationships within this supergroup have been inferred almost exclusively from rRNA, actin, and polyubiquitin genes, and remain poorly resolved. To address this, we generated large Expressed Sequence Tag (EST) datasets for 5 species of Rhizaria belonging to 3 important groups: Acantharea (Astrolonche sp., Phyllostaurus sp.), Phytomyxea (Spongospora subterranea, Plasmodiophora brassicae) and Gromiida (Gromia sphaerica). 167 genes were selected for phylogenetic analyses based on the representation of at least one rhizarian species for each gene. Concatenation of these genes produced a supermatrix composed of 36,735 amino acid positions, including 10 rhizarians, 9 stramenopiles, and 9 alveolates. Phylogenomic analyses of this large dataset revealed a strongly supported clade grouping Foraminifera and Acantharea. The position of this clade within Rhizaria was sensitive to the method employed and the taxon sampling: maximum likelihood (ML) and Bayesian analyses using an empirical model of evolution favoured an early divergence, whereas the CAT model and ML analyses with fast-evolving sites or the foraminiferan species Reticulomyxa filosa removed suggested a derived position, closely related to Gromia and Phytomyxea. In contrast to what has been previously reported, our analyses also uncovered the presence of the rhizarian-specific polyubiquitin insertion in Acantharea. Finally, this work reveals another possible rhizarian signature in the 60S ribosomal protein L10a. Our study provides new insights into the evolution of Rhizaria based on phylogenomic analyses of ESTs from three groups of previously under-sampled protists. It was enabled by the application of a recently developed method of transcriptome analysis requiring only a very small amount of starting material. Our study illustrates the potential of this method to elucidate the early evolution of eukaryotes by providing large amounts of data for uncultivable free-living and parasitic protists.

  6. Prototype pre-clinical PET scanner with depth-of-interaction measurements using single-layer crystal array and single-ended readout

    NASA Astrophysics Data System (ADS)

    Lee, Min Sun; Kim, Kyeong Yun; Ko, Guen Bae; Lee, Jae Sung

    2017-05-01

    In this study, we developed a proof-of-concept prototype PET system using a pair of depth-of-interaction (DOI) PET detectors based on the proposed DOI-encoding method and a digital silicon photomultiplier (dSiPM). Our novel cost-effective DOI measurement method is based on a triangular-shaped reflector and requires only a single-layer pixelated crystal and single-ended signal readout. The DOI detector consisted of an 18 × 18 array of unpolished LYSO crystals (1.47 × 1.47 × 15 mm3) wrapped with triangular-shaped reflectors. The DOI information was encoded by the depth-dependent light distribution tailored by the reflector geometry, and DOI correction was performed using four-step depth calibration data and maximum-likelihood (ML) estimation. The detector pair and the object were placed on two motorized rotation stages to emulate a 12-block ring PET geometry with an 11.15 cm diameter. Spatial resolution was measured, and phantom and animal imaging studies were performed to investigate imaging performance. All images were reconstructed with and without the DOI correction to examine the impact of our DOI measurement. The pair of dSiPM-based DOI PET detectors showed good physical performance: peak-to-valley ratios of 2.82 and 3.09, energy resolutions of 14.30% and 18.95%, and DOI resolutions of 4.28 and 4.24 mm averaged over all crystals and all depths, respectively. A sub-millimeter spatial resolution was achieved at the center of the field of view (FOV). After applying the ML-based DOI correction, a maximum improvement of 36.92% was achieved in the radial spatial resolution, and a uniform resolution was observed within 5 cm of the transverse PET FOV. We successfully acquired phantom and animal images with improved spatial resolution and contrast by using the DOI measurement. The proposed DOI-encoding method was successfully demonstrated at the system level and exhibited good performance, showing its feasibility for animal PET applications with high spatial resolution and sensitivity.
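
    The ML step of such a depth-calibrated detector can be caricatured as follows: each depth bin has a calibrated mean light-distribution pattern, and the interaction depth is taken as the bin that maximizes the likelihood of the observed pattern. This is a toy sketch under a Gaussian-noise assumption; the patterns and noise level are invented, not the detector's calibration data.

```python
import numpy as np

rng = np.random.default_rng(5)

# calibration: mean light fraction over 4 sensor pixels at 4 depth steps (invented)
cal = np.array([[0.70, 0.10, 0.10, 0.10],
                [0.50, 0.20, 0.15, 0.15],
                [0.35, 0.25, 0.20, 0.20],
                [0.25, 0.25, 0.25, 0.25]])
noise_sd = 0.03

observed = cal[2] + rng.normal(0, noise_sd, 4)  # an event at depth bin 2

# Gaussian log-likelihood of the observed pattern for each calibrated depth
log_lik = -0.5 * np.sum((observed - cal) ** 2, axis=1) / noise_sd**2
print(np.argmax(log_lik))  # ML depth estimate; recovers bin 2 for this draw
```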

  7. Modified Maximum Likelihood Estimation Method for Completely Separated and Quasi-Completely Separated Data for a Dose-Response Model

    DTIC Science & Technology

    2015-08-01

    [Only fragments of the DTIC report form survive in this record. Recoverable details: Park, Kyong H.; Lagan, Steven J. Modified Maximum Likelihood Estimation Method for Completely Separated and Quasi-Completely Separated Data for a Dose-Response Model; ECBC-TN-068; Research and Technology Directorate, August 2015; approved for public release. Cited references include McCullagh and Nelder, Generalized Linear Models, 2nd ed. (Chapman and Hall: London, 1989) and Johnston, Econometric Methods, 3rd ed. (McGraw-Hill).]

  8. Using phylogenetically-informed annotation (PIA) to search for light-interacting genes in transcriptomes from non-model organisms.

    PubMed

    Speiser, Daniel I; Pankey, M Sabrina; Zaharoff, Alexander K; Battelle, Barbara A; Bracken-Grissom, Heather D; Breinholt, Jesse W; Bybee, Seth M; Cronin, Thomas W; Garm, Anders; Lindgren, Annie R; Patel, Nipam H; Porter, Megan L; Protas, Meredith E; Rivera, Ajna S; Serb, Jeanne M; Zigler, Kirk S; Crandall, Keith A; Oakley, Todd H

    2014-11-19

    Tools for high throughput sequencing and de novo assembly make the analysis of transcriptomes (i.e. the suite of genes expressed in a tissue) feasible for almost any organism. Yet a challenge for biologists is that it can be difficult to assign identities to gene sequences, especially from non-model organisms. Phylogenetic analyses are one useful method for assigning identities to these sequences, but such methods tend to be time-consuming because of the need to re-calculate trees for every gene of interest and each time a new data set is analyzed. In response, we employed existing tools for phylogenetic analysis to produce a computationally efficient, tree-based approach for annotating transcriptomes or new genomes that we term Phylogenetically-Informed Annotation (PIA), which places uncharacterized genes into pre-calculated phylogenies of gene families. We generated maximum likelihood trees for 109 genes from a Light Interaction Toolkit (LIT), a collection of genes that underlie the function or development of light-interacting structures in metazoans. To do so, we searched protein sequences predicted from 29 fully-sequenced genomes and built trees using tools for phylogenetic analysis in the Osiris package of Galaxy (an open-source workflow management system). Next, to rapidly annotate transcriptomes from organisms that lack sequenced genomes, we repurposed a maximum likelihood-based Evolutionary Placement Algorithm (implemented in RAxML) to place sequences of potential LIT genes on to our pre-calculated gene trees. Finally, we implemented PIA in Galaxy and used it to search for LIT genes in 28 newly-sequenced transcriptomes from the light-interacting tissues of a range of cephalopod mollusks, arthropods, and cubozoan cnidarians. Our new trees for LIT genes are available on the Bitbucket public repository ( http://bitbucket.org/osiris_phylogenetics/pia/ ) and we demonstrate PIA on a publicly-accessible web server ( http://galaxy-dev.cnsi.ucsb.edu/pia/ ). Our new trees for LIT genes will be a valuable resource for researchers studying the evolution of eyes or other light-interacting structures. We also introduce PIA, a high throughput method for using phylogenetic relationships to identify LIT genes in transcriptomes from non-model organisms. With simple modifications, our methods may be used to search for different sets of genes or to annotate data sets from taxa outside of Metazoa.

  9. Spectrophotometric and HPLC determinations of anti-diabetic drugs, rosiglitazone maleate and metformin hydrochloride, in pure form and in pharmaceutical preparations.

    PubMed

    Onal, Armağan

    2009-12-01

    In this study, three spectrophotometric methods and one HPLC method were developed for the analysis of anti-diabetic drugs in tablets. The first two spectrophotometric methods were based on the reaction of rosiglitazone (RSG) with 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ) and bromocresol green (BCG). A linear relationship between the absorbance at lambda(max) and the drug concentration was found in the ranges 6.0-50.0 and 1.5-12 microg ml(-1) for the DDQ and BCG methods, respectively. The third spectrophotometric method is a zero-crossing first-derivative spectrophotometric method for the simultaneous analysis of RSG and metformin (MTF) in tablets. The calibration curves were linear within the concentration ranges of 5.0-50 microg ml(-1) for RSG and 1.0-10.0 microg ml(-1) for MTF. The fourth method is a rapid stability-indicating HPLC method developed for the determination of RSG. A linear response was observed within the concentration range of 0.25-2.5 microg ml(-1). The proposed methods have been successfully applied to tablet analysis.

  10. Fluorimetric determination of some sulfur containing compounds through complex formation with terbium (Tb+3) and uranium (U+3).

    PubMed

    Taha, Elham Anwer; Hassan, Nagiba Yehya; Aal, Fahima Abdel; Fattah, Laila El-Sayed Abdel

    2007-05-01

    Two simple, sensitive and specific fluorimetric methods have been developed for the determination of some sulphur-containing compounds, namely acetylcysteine (Ac), carbocisteine (Cc) and thioctic acid (Th), using terbium (Tb+3) and uranium (U+3) ions as fluorescent probes. The proposed methods involve the formation of a ternary complex with Tb+3 in the presence of Tris buffer, method (I), and a binary complex with aqueous uranyl acetate solution, method (II). The fluorescence quenching of Tb+3 at 510, 488 and 540 nm (lambda(ex) 250, 241 and 268 nm) and of uranyl acetate at 512 nm (lambda(ex) 240 nm) due to complex formation was quantitatively measured for Ac, Cc and Th, respectively. The reaction conditions and the fluorescence spectral properties of the complexes have been investigated. Under the described conditions, the proposed methods were applicable over the concentration ranges (0.2-2.5 microg ml(-1)), (1-4 microg ml(-1)) and (0.5-3.5 microg ml(-1)) with mean percentage recoveries of 99.74+/-0.36, 99.70+/-0.52 and 99.43+/-0.23 for method (I), and (0.5-6 microg ml(-1)), (0.5-5 microg ml(-1)) and (1-6 microg ml(-1)) with mean percentage recoveries of 99.38+/-0.20, 99.82+/-0.28 and 99.93+/-0.32 for method (II), for the three cited drugs, respectively. The proposed methods were successfully applied to the determination of the studied compounds in bulk powders and in pharmaceutical formulations, as well as in the presence of their related substances. The results obtained were found to agree statistically with those obtained by official and reported methods. The two methods were validated according to USP guidelines and were also assessed by applying the standard addition technique.

  11. Survival Data and Regression Models

    NASA Astrophysics Data System (ADS)

    Grégoire, G.

    2014-12-01

    We start this chapter by introducing some basic elements for the analysis of censored survival data. Then we focus on right-censored data and develop two types of regression models. The first concerns the so-called accelerated failure time (AFT) models, which are parametric models where a function of a parameter depends linearly on the covariates. The second is a semiparametric model, where the covariates enter in a multiplicative form in the expression of the hazard rate function. The main statistical tool for analysing these regression models is the maximum likelihood methodology and, although we recall some essential results of ML theory, we refer to the chapter "Logistic Regression" for a more detailed presentation.
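
    A minimal sketch of the maximum-likelihood machinery for a parametric AFT-type model with right censoring is given below, using exponential errors for brevity (the chapter covers more general cases). Events contribute the log-density, censored observations the log-survival; all data are simulated.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 200)                        # covariate
t_event = rng.exponential(np.exp(0.5 + 1.0 * x))  # true survival times
t_cens = rng.exponential(3.0, 200)                # censoring times
t = np.minimum(t_event, t_cens)                   # observed follow-up
delta = (t_event <= t_cens).astype(float)         # 1 = event observed, 0 = censored

def neg_log_lik(beta):
    rate = np.exp(-(beta[0] + beta[1] * x))       # exponential hazard under the AFT link
    # events contribute log(rate) - rate*t; censored observations contribute -rate*t
    return -np.sum(delta * np.log(rate) - rate * t)

fit = minimize(neg_log_lik, x0=[0.0, 0.0])
print(fit.x)  # ML estimates, close to the generating values (0.5, 1.0)
```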

  12. Does polyploidy facilitate long-distance dispersal?

    PubMed Central

    Linder, H. Peter; Barker, Nigel P.

    2014-01-01

    Background and Aims The ability of plant lineages to reach all continents contributes substantially to their evolutionary success. This is exemplified by the Poaceae, one of the most successful angiosperm families, in which most higher taxa (tribes, subfamilies) have global distributions. Due to the old age of the ocean basins relative to the major angiosperm radiations, this is only possible by means of long-distance dispersal (LDD), yet the attributes of lineages with successful LDD remain obscure. Polyploid species are over-represented in invasive floras and in the previously glaciated Arctic regions, and often have wider ecological tolerances than diploids; thus polyploidy is a candidate attribute of successful LDD. Methods The link between polyploidy and LDD was explored in the globally distributed grass subfamily Danthonioideae. An almost completely sampled and well-resolved species-level phylogeny of the danthonioids was used, and the available cytological information was assembled. The cytological evolution in the clade was inferred using maximum likelihood (ML) as implemented in ChromEvol. The biogeographical evolution in the clade was reconstructed using ML and Bayesian approaches. Key Results Numerous increases in ploidy level are demonstrated. A Late Miocene–Pliocene cycle of polyploidy is associated with LDD, and in two cases (the Australian Rytidosperma and the American Danthonia) led to secondary polyploidy. While it is demonstrated that successful LDD is more likely in polyploid than in diploid lineages, a link between polyploidization events and LDD is not demonstrated. Conclusions The results suggest that polyploids are more successful at LDD than diploids, and that the frequent polyploidy in the grasses might have facilitated the extensive dispersal among continents in the family, thus contributing to their evolutionary success. PMID:24694830

  13. Serum and biliary MMP-9 and TIMP-1 concentrations in the diagnosis of cholangiocarcinoma

    PubMed Central

    İnce, Ali Tüzün; Yıldız, Kemal; Gangarapu, Venkatanarayana; Kayar, Yusuf; Baysal, Birol; Karatepe, Oğuzhan; Kemik, Ahu Sarbay; Şentürk, Hakan

    2015-01-01

    Aim: Cholangiocarcinoma is generally detected late in the course of disease, and current diagnostic techniques often fail to differentiate benign from malignant disease. Biomarker studies for the early diagnosis of cholangiocarcinoma are ongoing. In this study, we analyzed the roles of serum and biliary MMP-9 and TIMP-1 concentrations in the diagnosis of cholangiocarcinoma. Materials and methods: A total of 113 patients (55 males, 58 females) were included; 33 were diagnosed with cholangiocarcinoma (malignant group) and 80 with choledocholithiasis (benign group). MMP-9 and TIMP-1 concentrations were analyzed in serum and bile and compared between the malignant and benign groups. Results were evaluated statistically. Results: Biliary MMP-9 concentrations were significantly higher (576 ± 209 vs. 403 ± 140 ng/ml, p < 0.01) and biliary TIMP-1 concentrations were significantly lower (22.4 ± 4.9 vs. 29.4 ± 6.1 ng/ml, p < 0.01) in the malignant than in the benign group. In contrast, serum MMP-9 and TIMP-1 concentrations were similar in the two groups. Receiver operating characteristic curve analysis revealed that the areas under the curve for biliary MMP-9 and TIMP-1 were significantly higher than 0.5 (p < 0.001). The sensitivity, specificity, positive and negative predictive values, positive and negative likelihood ratios, and accuracy were 0.94, 0.32, 0.36, 0.93, 1.40, 0.19 and 0.5 for biliary MMP-9, respectively, and 0.97, 0.36, 0.39, 0.97, 1.5, 0.08 and 0.54 for biliary TIMP-1, respectively. Conclusion: Serum and biliary MMP-9 and TIMP-1 tests do not appear to be useful in the diagnosis of cholangiocarcinoma. PMID:25932227
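
    All of the reported performance measures derive from a single 2x2 table. The sketch below reconstructs them from illustrative counts chosen to be consistent with the reported group sizes (33 malignant, 80 benign) and the biliary MMP-9 results; the exact counts are an assumption, not published data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)                  # sensitivity
    spec = tn / (tn + fp)                  # specificity
    ppv = tp / (tp + fp)                   # positive predictive value
    npv = tn / (tn + fn)                   # negative predictive value
    lr_pos = sens / (1 - spec)             # positive likelihood ratio
    lr_neg = (1 - sens) / spec             # negative likelihood ratio
    acc = (tp + tn) / (tp + fp + fn + tn)  # accuracy
    return sens, spec, ppv, npv, lr_pos, lr_neg, acc

# roughly 31/33 malignant cases test-positive and 26/80 benign cases test-negative
print(diagnostic_metrics(tp=31, fp=54, fn=2, tn=26))
```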

  14. Physalis and physaloids: A recent and complex evolutionary history.

    PubMed

    Zamora-Tavares, María Del Pilar; Martínez, Mahinda; Magallón, Susana; Guzmán-Dávalos, Laura; Vargas-Ponce, Ofelia

    2016-07-01

    The complex evolutionary history of the subtribe Physalinae is reflected in the poor resolution of the relationships of Physalis and the physaloid genera. We hypothesize that this low resolution is caused by a recent evolutionary history in a complex geographic setting. The aims of this study were twofold: (1) to determine the phylogenetic relationships of the currently recognized genera in Physalinae in order to identify monophyletic groups and resolve the physaloid grade; and (2) to determine the probable causes of the recent divergence in Physalinae. We conducted phylogenetic analyses with maximum likelihood (ML) and Bayesian inference on 50 Physalinae species and 19 outgroup species, using morphological and molecular data from five plastid and two nuclear regions. A relaxed molecular clock was obtained from the ML topology, and ancestral area reconstruction was conducted using the DEC model. The genera Chamaesaracha and Leucophysalis, and Physalis subgenus Rydbergis, were recovered as monophyletic. Three clades, Alkekengi-Calliphysalis, Schraderanthus-Tzeltalia, and Witheringia-Brachistus, also received good support. However, even with morphological data and DNA sequence data from seven regions, the tree was not completely resolved and many clades remained unsupported. Physalinae diverged at the end of the Miocene (∼9.22 Mya), with the greatest diversification within the subtribe occurring during the last 5 My. The Neotropical region presented the highest probability (45%) of being the ancestral area of Physalinae, followed by the Mexican Transition Zone (35%). During the Pliocene and Pleistocene, the geographical areas where these species are found experienced significant geological and climatic changes, giving rise to rapid and relatively recent diversification events in Physalinae. Thus, recent origin, high diversification, and morphological complexity have contributed, at least with the currently available methods, to the inability to completely disentangle the phylogenetic relationships of Physalinae. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. One-year monthly quantitative survey of noroviruses, enteroviruses, and adenoviruses in wastewater collected from six plants in Japan.

    PubMed

    Katayama, Hiroyuki; Haramoto, Eiji; Oguma, Kumiko; Yamashita, Hiromasa; Tajima, Atsushi; Nakajima, Hideichiro; Ohgaki, Shinichiro

    2008-03-01

    Sewerage systems are important nodes at which to monitor human enteric pathogens transmitted via water. A quantitative virus survey was performed once a month for a year to understand the seasonal profiles of noroviruses genotype 1 and genotype 2, enteroviruses, and adenoviruses in sewerage systems. A total of 72 samples of influent, secondary-treated wastewater before chlorination, and effluent were collected from six wastewater treatment plants in Japan. Viruses were successfully recovered from 100 ml of influent and 1000 ml of the secondary-treated wastewater and effluent using the acid rinse method. Viruses were determined by the RT-PCR or PCR method to obtain the most probable number for each sample. All the samples were also assayed for fecal coliforms (FCs) by a double-layer method. The seasonal profiles of noroviruses genotype 1 and genotype 2 in influent were very similar, i.e. they were abundant in winter (from November to March), at geometric mean values of 190 and 200 RT-PCR units/ml, respectively, and less frequent in summer (from June to September), at 4.9 and 9.1 RT-PCR units/ml, respectively. The concentrations of enteroviruses and adenoviruses were mostly constant all year round: 17 RT-PCR units/ml and 320 PCR units/ml in influent, and 0.044 RT-PCR units/ml and 7.0 PCR units/ml in effluent, respectively.

  16. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules, when used as optimization criteria, should locate the same (unknown) optimum; discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients, with the log-likelihood estimator being slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
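
    The comparison at the heart of the study can be reproduced in miniature: fit the same Gaussian regression either by minimizing the closed-form Gaussian CRPS or by maximizing the likelihood. This is a synthetic sketch; the variable names and data are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
x = rng.normal(size=500)                     # e.g. a standardized NWP forecast
y = 1.0 + 0.8 * x + rng.normal(0, 1.2, 500)  # observed quantity

def params(theta):
    mu = theta[0] + theta[1] * x
    sigma = np.exp(theta[2])                 # log link keeps the spread positive
    return mu, sigma

def mean_crps(theta):                        # closed-form CRPS of a Gaussian forecast
    mu, sigma = params(theta)
    z = (y - mu) / sigma
    return np.mean(sigma * (z * (2 * norm.cdf(z) - 1)
                            + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi)))

def neg_log_lik(theta):
    mu, sigma = params(theta)
    return -np.sum(norm.logpdf(y, mu, sigma))

print(minimize(mean_crps, [0.0, 0.0, 0.0]).x)    # the two fits give similar
print(minimize(neg_log_lik, [0.0, 0.0, 0.0]).x)  # coefficients when the Gaussian
                                                 # assumption is appropriate
```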

  17. Generalizing Terwilliger's likelihood approach: a new score statistic to test for genetic association.

    PubMed

    el Galta, Rachid; Uitte de Willige, Shirley; de Visser, Marieke C H; Helmer, Quinta; Hsu, Li; Houwing-Duistermaat, Jeanine J

    2007-09-24

    In this paper, we propose a one-degree-of-freedom test for association between a candidate gene and a binary trait. This method is a generalization of Terwilliger's likelihood ratio statistic and is especially powerful for the situation of one associated haplotype. As an alternative to the likelihood ratio statistic, we derive a score statistic, which has a tractable expression. For haplotype analysis, we assume that phase is known. By means of a simulation study, we compare the performance of the score statistic to Pearson's chi-square statistic and the likelihood ratio statistic proposed by Terwilliger. We illustrate the method on three candidate genes studied in the Leiden Thrombophilia Study. We conclude that the statistic follows a chi-square distribution under the null hypothesis and that the score statistic is more powerful than Terwilliger's likelihood ratio statistic when the associated haplotype has a frequency between 0.1 and 0.4 and has a small impact on the studied disorder. With regard to Pearson's chi-square statistic, the score statistic has more power when the associated haplotype has a frequency above 0.2 and the number of variants is above five.

  18. Anemia, transfusion, and phlebotomy practices in critically ill patients with prolonged ICU length of stay: a cohort study

    PubMed Central

    Chant, Clarence; Wilson, Gail; Friedrich, Jan O

    2006-01-01

    Introduction Anemia among the critically ill has been described in patients with short to medium length of stay (LOS) in the intensive care unit (ICU), but it has not been described in long-stay ICU patients. This study was performed to characterize anemia, transfusion, and phlebotomy practices in patients with prolonged ICU LOS. Methods We conducted a retrospective chart review of consecutive patients admitted to a medical-surgical ICU in a tertiary care university hospital over three years; the patients included had a continuous LOS in the ICU of 30 days or longer. Information on transfusion, phlebotomy, and outcomes was collected daily from days 22 to 112 of the ICU stay. Results A total of 155 patients were enrolled. The mean age, admission Acute Physiology and Chronic Health Evaluation II score, and median ICU LOS were 62.3 ± 16.3 years, 23 ± 8, and 49 days (interquartile range 36–70 days), respectively. Mean hemoglobin remained stable at 9.4 ± 1.4 g/dl from day 7 onward. Mean daily phlebotomy volume was 13.3 ± 7.3 ml, and 62% of patients received a mean of 3.4 ± 5.3 units of packed red blood cells at a mean hemoglobin trigger of 7.7 ± 0.9 g/dl after day 21. Transfused patients had significantly greater acuity of illness, phlebotomy volumes, ICU LOS and mortality, and a lower hemoglobin, than those who were not transfused. Multivariate logistic regression analysis identified the following as independently associated with the likelihood of requiring transfusion in nonbleeding patients: baseline hemoglobin, daily phlebotomy volume, ICU LOS, and erythropoietin therapy (used almost exclusively in dialysis-dependent renal failure in this cohort of patients). Small increases in average phlebotomy (3.5 ml/day, 95% confidence interval 2.4–6.8 ml/day) were associated with a doubling in the odds of being transfused after day 21. Conclusion Anemia, phlebotomy, and transfusions, despite low hemoglobin triggers, are common in ICU patients long after admission. Small decreases in phlebotomy volume are associated with significantly reduced transfusion requirements in patients with prolonged ICU LOS. PMID:17002795

  19. A Non-parametric Cutout Index for Robust Evaluation of Identified Proteins*

    PubMed Central

    Serang, Oliver; Paulo, Joao; Steen, Hanno; Steen, Judith A.

    2013-01-01

    This paper proposes a novel, automated method for evaluating sets of proteins identified using mass spectrometry. The remaining peptide-spectrum match score distributions of protein sets are compared to an empirical absent peptide-spectrum match score distribution, and a Bayesian non-parametric method reminiscent of the Dirichlet process is presented to accurately perform this comparison. Thus, for a given protein set, the process computes the likelihood that the proteins identified are correctly identified. First, the method is used to evaluate protein sets chosen using different protein-level false discovery rate (FDR) thresholds, assigning each protein set a likelihood. The protein set assigned the highest likelihood is used to choose a non-arbitrary protein-level FDR threshold. Because the method can be used to evaluate any protein identification strategy (and is not limited to mere comparisons of different FDR thresholds), we subsequently use the method to compare and evaluate multiple simple methods for merging peptide evidence over replicate experiments. The general statistical approach can be applied to other types of data (e.g. RNA sequencing) and generalizes to multivariate problems. PMID:23292186

  20. An efficient algorithm for accurate computation of the Dirichlet-multinomial log-likelihood function.

    PubMed

    Yu, Peng; Shaw, Chad A

    2014-06-01

    The Dirichlet-multinomial (DMN) distribution is a fundamental model for multicategory count data with overdispersion. This distribution has many uses in bioinformatics, including applications to metagenomics data, transcriptomics and alternative splicing. The DMN distribution reduces to the multinomial distribution when the overdispersion parameter ψ is 0. Unfortunately, numerical computation of the DMN log-likelihood function by conventional methods results in instability in the neighborhood of ψ = 0. An alternative formulation circumvents this instability, but it leads to long runtimes that make it impractical for the large count data common in bioinformatics. We have developed a new method for computation of the DMN log-likelihood to solve the instability problem without incurring long runtimes. The new approach is composed of a novel formula and an algorithm to extend its applicability. Our numerical experiments show that this new method improves both the accuracy of log-likelihood evaluation and the runtime by several orders of magnitude, especially in high-count situations that are common in deep sequencing data. Using real metagenomic data, our method achieves a manyfold runtime improvement. Our method increases the feasibility of using the DMN distribution to model many high-throughput problems in bioinformatics. We have included in our work an R package giving access to this method and a vignette applying this approach to metagenomic data. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
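
    For orientation, the conventional evaluation that the paper improves upon is a direct sum of log-gamma terms. The sketch below (in Python rather than the paper's R package) evaluates the DMN log-likelihood for one count vector; it is this kind of naive formula that loses stability as the overdispersion vanishes, i.e. as the alpha parameters grow large with fixed proportions.

```python
import numpy as np
from scipy.special import gammaln

def dmn_log_lik(counts, alpha):
    """Naive DMN log-likelihood for one count vector (multinomial coefficient omitted)."""
    n, a = counts.sum(), alpha.sum()
    return (gammaln(a) - gammaln(n + a)
            + np.sum(gammaln(counts + alpha) - gammaln(alpha)))

print(dmn_log_lik(np.array([5, 3, 12]), np.array([1.0, 0.8, 2.5])))
```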

  1. Climate reconstruction analysis using coexistence likelihood estimation (CRACLE): a method for the estimation of climate using vegetation.

    PubMed

    Harbert, Robert S; Nixon, Kevin C

    2015-08-01

    • Plant distributions have long been understood to be correlated with the environmental conditions to which species are adapted. Climate is one of the major components driving species distributions. Therefore, it is expected that the plants coexisting in a community are reflective of the local environment, particularly climate.• Presented here is a method for the estimation of climate from local plant species coexistence data. The method, Climate Reconstruction Analysis using Coexistence Likelihood Estimation (CRACLE), is a likelihood-based method that employs specimen collection data at a global scale for the inference of species climate tolerance. CRACLE calculates the maximum joint likelihood of coexistence given individual species climate tolerance characterization to estimate the expected climate.• Plant distribution data for more than 4000 species were used to show that this method accurately infers expected climate profiles for 165 sites with diverse climatic conditions. Estimates differ from the WorldClim global climate model by less than 1.5°C on average for mean annual temperature and less than ∼250 mm for mean annual precipitation. This is a significant improvement upon other plant-based climate-proxy methods.• CRACLE validates long hypothesized interactions between climate and local associations of plant species. Furthermore, CRACLE successfully estimates climate that is consistent with the widely used WorldClim model and therefore may be applied to the quantitative estimation of paleoclimate in future studies. © 2015 Botanical Society of America, Inc.
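
    The joint-likelihood idea behind CRACLE can be conveyed with a one-variable toy (a sketch of the principle, not the published implementation): characterize each coexisting species' climate tolerance, here as a Gaussian, and estimate the site climate as the value maximizing the summed log-likelihood across species. All tolerance numbers are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# per-species tolerance of mean annual temperature, deg C: (mean, sd), invented
tolerances = [(12.0, 4.0), (15.0, 3.0), (10.0, 5.0), (14.0, 2.5)]

def neg_joint_log_lik(t):
    # joint likelihood of coexistence = product of per-species likelihoods
    return -sum(norm.logpdf(t, mu, sd) for mu, sd in tolerances)

best = minimize_scalar(neg_joint_log_lik, bounds=(-10.0, 40.0), method="bounded")
print(best.x)  # maximum-joint-likelihood climate estimate for the site
```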

  2. Determination of N-methylsuccinimide and 2-hydroxy-N-methylsuccinimide in human urine and plasma.

    PubMed

    Jönsson, B A; Akesson, B

    1997-12-19

    A method for determination of N-methylsuccinimide (MSI) and 2-hydroxy-N-methylsuccinimide (2-HMSI) in human urine and of MSI in human plasma was developed. MSI and 2-HMSI are metabolites of the widely used organic solvent N-methyl-2-pyrrolidone (NMP). MSI and 2-HMSI were purified from urine and plasma by C8 solid-phase extraction and analysed by gas chromatography-mass spectrometry in the negative-ion chemical ionisation mode. The intra-day precisions in urine were 2-6% for MSI (50 and 400 ng/ml) and 3-5% for 2-HMSI (1000 and 8000 ng/ml). For MSI in plasma it was 2% (60 and 1200 ng/ml). The between-day precisions in urine were 3-4% for MSI (100 and 1000 ng/ml) and 2-4% for 2-HMSI (10,000 and 18,000 ng/ml) and 3-4% for MSI in plasma (100 and 900 ng/ml). The recoveries from urine were 109-117% for MSI (50 and 400 ng/ml) and 81-89% for 2-HMSI (1000 and 8000 ng/ml). The recovery of MSI from plasma was 91-101% (50 and 500 ng/ml). The detection limits for MSI were 3 ng/ml in urine and 1 ng/ml in plasma and that of 2-HMSI in urine was 200 ng/ml. The method is applicable for analysis of urine and plasma samples from workers exposed to NMP.

  3. Minimum effective volume of mepivacaine for ultrasound-guided supraclavicular block

    PubMed Central

    Song, Jae Gyok; Kang, Bong Jin; Park, Kee Keun

    2013-01-01

    Background The aim of this study was to estimate the minimum effective volume (MEV) of 1.5% mepivacaine for ultrasound-guided supraclavicular block, with the needle placed near the lower trunk of the brachial plexus and multiple injections performed. Methods Thirty patients undergoing forearm and hand surgery received ultrasound-guided supraclavicular block with 1.5% mepivacaine. The initial volume of local anesthetic injected was 24 ml, and the volume for each subsequent patient was determined by the response of the previous patient: the next patient received a 3 ml higher volume after a failed block, or a 3 ml lower volume after a successful one. The MEV was estimated by the Dixon and Massey up-and-down method. The MEVs in 95%, 90%, and 50% of patients (MEV95, MEV90, and MEV50) were calculated using probit transformation and logistic regression. Results The MEV95 of 1.5% mepivacaine was 17 ml (95% confidence interval [CI], 13-42 ml), the MEV90 was 15 ml (95% CI, 12-34 ml), and the MEV50 was 9 ml (95% CI, 4-12 ml). Twelve patients had a failed block; three received general anesthesia, and nine could undergo surgery with sedation only. Only one patient showed hemi-diaphragmatic paresis. Conclusions MEV95 was 17 ml, MEV90 was 15 ml, and MEV50 was 9 ml. However, needle placement near the lower trunk of the brachial plexus and multiple injections should be used. PMID:23904937
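
    The up-and-down design is easy to simulate: each patient's volume moves one step down after a success and one step up after a failure, and the sequence concentrates around MEV50 (probit or logistic regression on the same data then gives MEV90 and MEV95). The response curve below is a stand-in, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(4)

def block_succeeds(volume_ml):
    # stand-in dose-response: success probability rises with injected volume
    return rng.uniform() < 1.0 / (1.0 + np.exp(-(volume_ml - 9.0) / 2.0))

volume, step = 24.0, 3.0
volumes, outcomes = [], []
for _ in range(30):
    ok = block_succeeds(volume)
    volumes.append(volume)
    outcomes.append(ok)
    volume = volume - step if ok else volume + step  # down on success, up on failure

print(np.mean(volumes))  # crude MEV50 summary of the up-and-down sequence
```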

  4. Urinary bladder segmentation in CT urography using deep-learning convolutional neural network and level sets

    PubMed Central

    Cha, Kenny H.; Hadjiiski, Lubomir; Samala, Ravi K.; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.

    2016-01-01

    Purpose: The authors are developing a computerized system for bladder segmentation in CT urography (CTU) as a critical component for computer-aided detection of bladder cancer. Methods: A deep-learning convolutional neural network (DL-CNN) was trained to distinguish between the inside and the outside of the bladder using 160,000 regions of interest (ROIs) from CTU images. The trained DL-CNN was used to estimate the likelihood of an ROI being inside the bladder for ROIs centered at each voxel in a CTU case, resulting in a likelihood map. Thresholding and hole-filling were applied to the map to generate the initial contour for the bladder, which was then refined by 3D and 2D level sets. The segmentation performance was evaluated using 173 cases: 81 cases in the training set (42 lesions, 21 wall thickenings, and 18 normal bladders) and 92 cases in the test set (43 lesions, 36 wall thickenings, and 13 normal bladders). The computerized segmentation accuracy using the DL likelihood map was compared to that using a likelihood map generated by Haar features and a random forest classifier, and to that using our previous conjoint level set analysis and segmentation system (CLASS) without a likelihood map. All methods were evaluated relative to the 3D hand-segmented reference contours. Results: With the DL-CNN-based likelihood map and level sets, the average volume intersection ratio, average percent volume error, average absolute volume error, average minimum distance, and the Jaccard index for the test set were 81.9% ± 12.1%, 10.2% ± 16.2%, 14.0% ± 13.0%, 3.6 ± 2.0 mm, and 76.2% ± 11.8%, respectively. With the Haar-feature-based likelihood map and level sets, the corresponding values were 74.3% ± 12.7%, 13.0% ± 22.3%, 20.5% ± 15.7%, 5.7 ± 2.6 mm, and 66.7% ± 12.6%, respectively. With our previous CLASS with local contour refinement (LCR) method, the corresponding values were 78.0% ± 14.7%, 16.5% ± 16.8%, 18.2% ± 15.0%, 3.8 ± 2.3 mm, and 73.9% ± 13.5%, respectively. Conclusions: The authors demonstrated that the DL-CNN can overcome the strong boundary between two regions that have a large difference in gray levels and provides a seamless mask to guide level set segmentation, which has been a problem for many gradient-based segmentation methods. Compared to our previous CLASS with LCR method, which required two user inputs to initialize the segmentation, DL-CNN with level sets achieved better segmentation performance while using a single user input. Compared to the Haar-feature-based likelihood map, the DL-CNN-based likelihood map could guide the level sets to achieve better segmentation. The results demonstrate the feasibility of our new approach of using DL-CNN in combination with level sets for segmentation of the bladder. PMID:27036584

  5. Determination of plasma volume in anaesthetized piglets using the carbon monoxide (CO) method.

    PubMed

    Heltne, J K; Farstad, M; Lund, T; Koller, M E; Matre, K; Rynning, S E; Husby, P

    2002-07-01

    Based on measurements of the circulating red blood cell volume (V(RBC)) in seven anaesthetized piglets using carbon monoxide (CO) as a label, plasma volume (PV) was calculated for each animal. The increase in carboxyhaemoglobin (COHb) concentration following administration of a known amount of CO into a closed-circuit re-breathing system was determined by diode-array spectrophotometry. Simultaneously measured haematocrit (HCT) and haemoglobin (Hb) values were used for PV calculation. The PV values were compared with simultaneously measured PVs determined using the Evans blue technique. Mean values (SD) for PV were 1708.6 (287.3) ml and 1738.7 (412.4) ml with the CO method and the Evans blue technique, respectively. Comparison of PVs determined with the two techniques demonstrated good correlation (r = 0.995). The mean difference between PV measurements was -29.9 ml and the limits of agreement (mean difference +/-2SD) were -289.1 ml and 229.3 ml. In conclusion, the CO method can be applied easily under general anaesthesia and controlled ventilation with a simple administration system. The agreement between the compared methods was satisfactory. Plasma volume determination with the CO method is safe and accurate, with no signs of major side effects.
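
    The plasma-volume calculation implied above can be sketched as follows (Python; the study's exact correction factors are not given in the abstract, and the numbers below are purely illustrative):

      # With the red cell volume V_RBC known from CO labelling, whole-blood
      # volume follows from the haematocrit, and plasma volume is the remainder.
      def plasma_volume(v_rbc_ml, hematocrit):
          blood_volume = v_rbc_ml / hematocrit      # total blood volume (ml)
          return blood_volume * (1.0 - hematocrit)  # plasma fraction of that volume

      print(plasma_volume(v_rbc_ml=600.0, hematocrit=0.26))  # approx. 1708 ml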

  6. Spectro-photometric determinations of Mn, Fe and Cu in aluminum master alloys

    NASA Astrophysics Data System (ADS)

    Rehan; Naveed, A.; Shan, A.; Afzal, M.; Saleem, J.; Noshad, M. A.

    2016-08-01

    Highly reliable, fast and cost-effective spectrophotometric methods have been developed for the determination of Mn, Fe and Cu in aluminum master alloys, based on calibration curves prepared from laboratory standards. The calibration curves are designed to give maximum sensitivity and minimum instrumental error (Mn 1-2 mg/100 ml, Fe 0.01-0.2 mg/100 ml and Cu 2-10 mg/100 ml). The developed spectrophotometric methods produce accurate results when analyzing Mn, Fe and Cu in certified reference materials. In particular, these methods are suitable for all types of Al-Mn, Al-Fe and Al-Cu master alloys (5%, 10%, 50% etc.). Moreover, the sampling practices suggested herein include a reasonable amount of analytical sample, which truly represents the whole lot of a particular master alloy. A successive dilution technique was utilized to bring samples within the calibration curve range. Furthermore, the methods worked out here were also found suitable for the analysis of these elements in ordinary aluminum alloys. However, it was observed that Cu showed considerable interference with Fe; the latter may not be accurately measured in the presence of Cu greater than 0.01%.
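
    The calibration-curve workflow amounts to a least-squares line through the laboratory standards, inverted to read off concentrations; a minimal sketch (Python, all values illustrative):

      import numpy as np

      std_conc = np.array([1.0, 1.25, 1.5, 1.75, 2.0])     # Mn standards, mg/100 ml
      std_abs = np.array([0.21, 0.26, 0.31, 0.37, 0.42])   # measured absorbances

      slope, intercept = np.polyfit(std_conc, std_abs, 1)  # calibration line

      def concentration(absorbance, dilution_factor=1.0):
          # invert the line; multiply back by the successive-dilution factor
          return (absorbance - intercept) / slope * dilution_factor

      print(concentration(0.33, dilution_factor=10.0))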

  7. Comparison of a 50 mL pycnometer and a 500 mL flask, EURAMET.M.FF.S8 (EURAMET 1297)

    NASA Astrophysics Data System (ADS)

    Mićić, Ljiljana; Batista, Elsa

    2018-01-01

    The purpose of this comparison was to compare the results of the participating laboratories in the calibration of a 50 mL pycnometer and a 500 mL volumetric flask using the gravimetric method. Laboratories were asked to determine the 'contained' volume of the 50 mL pycnometer and of the 500 mL flask at a reference temperature of 20 °C. The gravimetric method was used for both instruments by all laboratories. The final report, which appears in Appendix B of the BIPM key comparison database (kcdb.bipm.org), has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  8. Catalytic spectrophotometric determination of iodide in pharmaceutical preparations and edible salt.

    PubMed

    El-Ries, M A; Khaled, Elmorsy; Zidane, F I; Ibrahim, S A; Abd-Elmonem, M S

    2012-02-01

    The catalytic effect of iodide on the oxidation of four dyes: viz. variamine blue (VB), methylene blue (MB), rhodamine B (RB), and malachite green (MG) with different oxidizing agents was investigated for the kinetic spectrophotometric determination of iodide. The above catalyzed reactions were monitored spectrophotometrically by following the change in dye absorbances at 544, 558, 660, or 617 nm for the VB, RB, MB, or MG catalyzed reactions, respectively. Under optimum conditions, iodide can be determined within the concentration levels 0.064-1.27 µg mL(-1) for VB method, 3.20-9.54 µg mL(-1) for RB method, 5.00-19.00 µg mL(-1) for the MB method, and 6.4-19.0 µg mL(-1) for the MG one, with detection limit reaching 0.004 µg mL(-1) iodide. The reported methods were highly sensitive, selective, and free from most interference. Applying the proposed procedures, trace amounts of iodide in pharmaceutical and edible salt samples were successfully determined without separation or pretreatment steps. Copyright © 2011 John Wiley & Sons, Ltd.

  9. Comparison of ZetaPlus 60S and nitrocellulose membrane filters for the simultaneous concentration of F-RNA coliphages, porcine teschovirus and porcine adenovirus from river water.

    PubMed

    Jones, T H; Muehlhauser, V; Thériault, G

    2014-09-01

    Increasing attention is being paid to the impact of agricultural activities on water quality to understand the impact on public health. F-RNA coliphages have been proposed as viral indicators of fecal contamination while porcine teschovirus (PTV) and porcine adenovirus (PAdV) are proposed indicators of fecal contamination of swine origin. Viruses and coliphages are present in water in very low concentrations and must be concentrated to permit their detection. There is little information comparing the effectiveness of the methods for concentrating F-RNA coliphages with concentration methods for other viruses and vice versa. The objective of this study was to compare 5 current published methods for recovering F-RNA coliphages, PTV and PAdV from river water samples concentrated by electronegative nitrocellulose membrane filters (methods A and B) or electropositive Zeta Plus 60S filters (methods C-E). Method A is used routinely for the detection of coliphages (Méndez et al., 2004) and method C (Brassard et al., 2005) is the official method in Health Canada's compendium for the detection of viruses in bottled mineral or spring water. When river water was inoculated with stocks of F-RNA MS2, PAdV, and PTV to final concentrations of 1×10(6) PFU/100 mL, 1×10(5) gc/100 mL and 3×10(5) gc/100 mL, respectively, a significantly higher recovery for each virus was consistently obtained for method A with recoveries of 52% for MS2, 95% for PAdV, and 1.5% for PTV. When method A was compared with method C for the detection of F-coliphages, PAdV and PTV in river water samples, viruses were detected with higher frequencies and at higher mean numbers with method A than with method C. With method A, F-coliphages were detected in 11/12 samples (5-154 PFU/100 mL), PTV in 12/12 samples (397-10,951 gc/100 mL), PAdV in 1/12 samples (15 gc/100 mL), and F-RNA GIII in 1/12 samples (750 gc/100 mL) while F-RNA genotypes I, II, and IV were not detected by qRT-PCR. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.

  10. A machine learning approach as a surrogate of finite element analysis-based inverse method to estimate the zero-pressure geometry of human thoracic aorta.

    PubMed

    Liang, Liang; Liu, Minliang; Martin, Caitlin; Sun, Wei

    2018-05-09

    Advances in structural finite element analysis (FEA) and medical imaging have made it possible to investigate the in vivo biomechanics of human organs such as blood vessels, for which organ geometries at the zero-pressure level need to be recovered. Although FEA-based inverse methods are available for zero-pressure geometry estimation, these methods typically require iterative computation, which is time-consuming and may not be suitable for time-sensitive clinical applications. In this study, using machine learning (ML) techniques, we developed an ML model to estimate the zero-pressure geometry of the human thoracic aorta given two pressurized geometries of the same patient at two different blood pressure levels. For the ML model development, an FEA-based method was used to generate a dataset of aorta geometries of 3125 virtual patients. The ML model, which was trained and tested on the dataset, is capable of recovering zero-pressure geometries consistent with those generated by the FEA-based method. Thus, this study demonstrates the feasibility and great potential of using ML techniques as a fast surrogate of FEA-based inverse methods to recover zero-pressure geometries of human organs. Copyright © 2018 John Wiley & Sons, Ltd.
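
    As a schematic of the surrogate idea, one can learn a direct map from the two pressurized geometries to the zero-pressure geometry on the FEA-generated virtual patients; the ridge regression below (Python) is a minimal stand-in, not the paper's actual model, and all sizes and data are illustrative:

      import numpy as np

      rng = np.random.default_rng(0)
      n_patients, n_coords = 3125, 300                      # virtual patients, nodal coordinates
      X = rng.normal(size=(n_patients, 2 * n_coords))       # two pressurized geometries, concatenated
      W_true = rng.normal(size=(2 * n_coords, n_coords)) / n_coords
      Y = X @ W_true + 0.01 * rng.normal(size=(n_patients, n_coords))  # zero-pressure targets

      lam = 1e-2                                            # ridge penalty
      W = np.linalg.solve(X.T @ X + lam * np.eye(2 * n_coords), X.T @ Y)
      zero_pressure_estimate = X[:1] @ W                    # prediction for one patient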

  11. Estimation of brood and nest survival: Comparative methods in the presence of heterogeneity

    USGS Publications Warehouse

    Manly, Bryan F.J.; Schmutz, Joel A.

    2001-01-01

    The Mayfield method has been widely used for estimating survival of nests and young animals, especially when data are collected at irregular observation intervals. However, this method assumes survival is constant throughout the study period, which often ignores biologically relevant variation and may lead to biased survival estimates. We examined the bias and accuracy of 1 modification to the Mayfield method that allows for temporal variation in survival, and we developed and similarly tested 2 additional methods. One of these 2 new methods is simply an iterative extension of Klett and Johnson's method, which we refer to as the Iterative Mayfield method and which bears similarity to Kaplan-Meier methods. The other method uses maximum likelihood techniques for estimation and is best applied to survival of animals in groups or families, rather than as independent individuals. We also examined how robust these estimators are to heterogeneity in the data, which can arise from such sources as dependent survival probabilities among siblings, inherent differences among families, and adoption. Testing of estimator performance with respect to bias, accuracy, and heterogeneity was done using simulations that mimicked a study of survival of emperor goose (Chen canagica) goslings. Assuming constant survival for inappropriately long periods of time or use of Klett and Johnson's methods resulted in large bias or poor accuracy (often >5% bias or root mean square error) compared to our Iterative Mayfield or maximum likelihood methods. Overall, estimator performance was slightly better with our Iterative Mayfield than our maximum likelihood method, but the maximum likelihood method provides a more rigorous framework for testing covariates and explicitly models a heterogeneity factor. We demonstrated use of all estimators with data from emperor goose goslings. We advocate that future studies use the new methods outlined here rather than the traditional Mayfield method or its previous modifications.
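
    For reference, the classical Mayfield estimator that these methods extend is a one-liner (Python; counts illustrative):

      # Daily survival rate (DSR) = 1 - deaths per exposure-day; survival over
      # an interval of d days is DSR**d.
      def mayfield_dsr(deaths, exposure_days):
          return 1.0 - deaths / exposure_days

      dsr = mayfield_dsr(deaths=12, exposure_days=480)
      print(dsr, dsr ** 28)   # daily and 28-day survival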

  12. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
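
    A minimal sketch of such a parametric likelihood approximation inside a Metropolis-Hastings sampler (Python; `simulate` is a hypothetical stand-in for a forest-model run returning a vector of summary statistics, and the Gaussian form and flat prior are assumptions):

      import numpy as np

      def log_approx_lik(theta, observed, simulate, n_sims=100):
          # fit a Gaussian to the summary statistics of repeated simulations
          sims = np.array([simulate(theta) for _ in range(n_sims)])
          mu = sims.mean(axis=0)
          cov = np.cov(sims, rowvar=False) + 1e-8 * np.eye(len(mu))
          resid = observed - mu
          _, logdet = np.linalg.slogdet(cov)
          return -0.5 * (logdet + resid @ np.linalg.solve(cov, resid))

      def mh_step(theta, log_lik, observed, simulate, step=0.1, rng=np.random):
          prop = theta + rng.normal(scale=step, size=theta.shape)
          log_lik_prop = log_approx_lik(prop, observed, simulate)
          if np.log(rng.uniform()) < log_lik_prop - log_lik:   # accept/reject
              return prop, log_lik_prop
          return theta, log_lik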

  13. Maximum-likelihood techniques for joint segmentation-classification of multispectral chromosome images.

    PubMed

    Schwartzkopf, Wade C; Bovik, Alan C; Evans, Brian L

    2005-12-01

    Traditional chromosome imaging has been limited to grayscale images, but recently a 5-fluorophore combinatorial labeling technique (M-FISH) was developed wherein each class of chromosomes binds with a different combination of fluorophores. This results in a multispectral image, where each class of chromosomes has distinct spectral components. In this paper, we develop new methods for automatic chromosome identification by exploiting the multispectral information in M-FISH chromosome images and by jointly performing chromosome segmentation and classification. We (1) develop a maximum-likelihood hypothesis test that uses multispectral information, together with conventional criteria, to select the best segmentation possibility; (2) use this likelihood function to combine chromosome segmentation and classification into a robust chromosome identification system; and (3) show that the proposed likelihood function can also be used as a reliable indicator of errors in segmentation, errors in classification, and chromosome anomalies, which can be indicators of radiation damage, cancer, and a wide variety of inherited diseases. We show that the proposed multispectral joint segmentation-classification method outperforms past grayscale segmentation methods when decomposing touching chromosomes. We also show that it outperforms past M-FISH classification techniques that do not use segmentation information.

  14. Mortality table construction

    NASA Astrophysics Data System (ADS)

    Sutawanir

    2015-12-01

    Mortality tables play an important role in actuarial studies such as life annuities, premium determination, premium reserves, pension plan valuation, and pension funding. Well-known mortality tables include the CSO mortality table, the Indonesian Mortality Table, the Bowers mortality table, and the Japan Mortality Table. For actuarial applications, tables are constructed under different environments such as single decrement, double decrement, and multiple decrement. There are two approaches to mortality table construction: a mathematical approach and a statistical approach. Distribution models and estimation theory are the statistical concepts used in mortality table construction, and this article discusses the statistical approach. The distributional assumptions are the uniform distribution of deaths (UDD) and constant force (exponential). Moment estimation and maximum likelihood are used to estimate the mortality parameter. Moment estimation methods are easier to manipulate than maximum likelihood estimation (MLE), but they do not use the complete mortality data; maximum likelihood exploits all available information. Some MLE equations are complicated and must be solved numerically. The article focuses on single-decrement estimation using moment and maximum likelihood estimation, and an extension to double decrement is introduced. A simple dataset is used to illustrate the estimation of mortality and the construction of a mortality table.
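
    The two estimators contrasted above can be sketched for a single decrement under the constant-force assumption (Python):

      import numpy as np

      def q_moment(deaths, n_start):
          # moment (actuarial) estimate: uses only the death count
          return deaths / n_start

      def q_mle_constant_force(death_times, n_start):
          # MLE: uses the exact fraction of the year lived by each death;
          # survivors contribute a full year of exposure each
          exposure = np.sum(death_times) + (n_start - len(death_times)) * 1.0
          mu_hat = len(death_times) / exposure       # MLE of the constant force
          return 1.0 - np.exp(-mu_hat)               # implied one-year death probability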

  15. An alternative method to measure the likelihood of a financial crisis in an emerging market

    NASA Astrophysics Data System (ADS)

    Özlale, Ümit; Metin-Özcan, Kıvılcım

    2007-07-01

    This paper utilizes an early warning system in order to measure the likelihood of a financial crisis in an emerging market economy. We introduce a methodology with which we can both obtain a likelihood series and analyze the time-varying effects of several macroeconomic variables on this likelihood. Since the issue is analyzed in a non-linear state space framework, the extended Kalman filter emerges as the optimal estimation algorithm. Taking the Turkish economy as our laboratory, the results indicate that both the derived likelihood measure and the estimated time-varying parameters are meaningful and can successfully explain the path that the Turkish economy followed between 2000 and 2006. The estimated parameters also suggest that an overvalued domestic currency, a current account deficit and an increase in default risk raise the likelihood of an economic crisis. Overall, the findings suggest that the estimation methodology introduced in this paper can be applied to other emerging market economies as well.
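
    One predict/update step of the extended Kalman filter used in such a non-linear state space framework looks as follows (Python; f, h and their Jacobians are schematic stand-ins for the paper's crisis-likelihood model):

      import numpy as np

      def ekf_step(x, P, y, f, F_jac, h, H_jac, Q, R):
          # predict: propagate the state and linearize the transition
          x_pred = f(x)
          F = F_jac(x)
          P_pred = F @ P @ F.T + Q
          # update: linearize the observation model around the prediction
          H = H_jac(x_pred)
          S = H @ P_pred @ H.T + R
          K = P_pred @ H.T @ np.linalg.inv(S)
          x_new = x_pred + K @ (y - h(x_pred))
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new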

  16. Tests for detecting overdispersion in models with measurement error in covariates.

    PubMed

    Yang, Yingsi; Wong, Man Yu

    2015-11-30

    Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.

  17. Minimal spanning tree algorithm for γ-ray source detection in sparse photon images: cluster parameters and selection strategies

    DOE PAGES

    Campana, R.; Bernieri, E.; Massaro, E.; ...

    2013-05-22

    The minimal spanning tree (MST) algorithm is a graph-theoretical cluster-finding method. We previously applied it to γ-ray bidimensional images, showing that it is quite sensitive in finding faint sources. Possible sources are associated with the regions where the photon arrival directions clusterize. MST selects clusters starting from a particular "tree" connecting all the points of the image and performing a cut based on the angular distance between photons, keeping clusters with a number of events higher than a given threshold. In this paper, we show how a further filtering, based on parameters linked to the cluster properties, can be applied to reduce spurious detections. We find that the most efficient parameter for this secondary selection is the magnitude M of a cluster, defined as the product of its number of events and its clustering degree. We test the sensitivity of the method by means of simulated and real Fermi Large Area Telescope (LAT) fields. Our results show that √M is strongly correlated with other statistical significance parameters, derived from a wavelet-based algorithm and maximum likelihood (ML) analysis, and that it can be used as a good estimator of the statistical significance of MST detections. Finally, we apply the method to a 2-year LAT image at energies higher than 3 GeV, and we show the presence of new clusters, likely associated with BL Lac objects.
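
    A minimal sketch of the MST selection (Python; the clustering degree g is taken here as the mean MST edge length divided by the mean edge length inside the cluster, an assumption that should be checked against the paper's exact definition):

      import numpy as np
      from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
      from scipy.spatial.distance import pdist, squareform

      def mst_clusters(points, cut_length, n_min=4):
          dist = squareform(pdist(points))
          mst = minimum_spanning_tree(dist).toarray()
          mean_edge = mst[mst > 0].mean()
          mst[mst > cut_length] = 0.0                  # primary cut on edge length
          _, labels = connected_components(mst > 0, directed=False)
          clusters = []
          for c in np.unique(labels):
              idx = np.flatnonzero(labels == c)
              if len(idx) < n_min:                     # threshold on number of events
                  continue
              edges = mst[np.ix_(idx, idx)]
              g = mean_edge / edges[edges > 0].mean()  # clustering degree
              clusters.append((idx, len(idx) * g))     # (members, magnitude M)
          return sorted(clusters, key=lambda t: -t[1])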

  18. Validation of different spectrophotometric methods for determination of vildagliptin and metformin in binary mixture

    NASA Astrophysics Data System (ADS)

    Abdel-Ghany, Maha F.; Abdel-Aziz, Omar; Ayad, Miriam F.; Tadros, Mariam M.

    New, simple, specific, accurate, precise and reproducible spectrophotometric methods have been developed and subsequently validated for determination of vildagliptin (VLG) and metformin (MET) in binary mixture. The zero-order spectrophotometric method was the first method, used for determination of MET in the range of 2-12 μg mL(-1) by measuring the absorbance at 237.6 nm. The second method was a derivative spectrophotometric technique, utilized for determination of MET at 247.4 nm in the range of 1-12 μg mL(-1). A derivative ratio spectrophotometric method was the third technique, used for determination of VLG in the range of 4-24 μg mL(-1) at 265.8 nm. The fourth and fifth methods, adopted for determination of VLG in the range of 4-24 μg mL(-1), were ratio subtraction and mean centering spectrophotometric methods, respectively. All the results were statistically compared with the reported methods using one-way analysis of variance (ANOVA). The developed methods were satisfactorily applied to analysis of the investigated drugs and proved to be specific and accurate for their quality control in pharmaceutical dosage forms.

  19. Efficient simulation and likelihood methods for non-neutral multi-allele models.

    PubMed

    Joyce, Paul; Genz, Alan; Buzbas, Erkan Ozge

    2012-06-01

    Throughout the 1980s, Simon Tavaré made numerous significant contributions to population genetics theory. As genetic data, in particular DNA sequence, became more readily available, a need to connect population-genetic models to data became the central issue. The seminal work of Griffiths and Tavaré (1994a, 1994b, 1994c) was among the first to develop a likelihood method to estimate the population-genetic parameters using full DNA sequences. Now, we are in the genomics era where methods need to scale up to handle massive data sets, and Tavaré has led the way to new approaches. However, performing statistical inference under non-neutral models has proved elusive. In tribute to Simon Tavaré, we present an article in the spirit of his work that provides a computationally tractable method for simulating and analyzing data under a class of non-neutral population-genetic models. Computational methods for approximating likelihood functions and generating samples under a class of allele-frequency based non-neutral parent-independent mutation models were proposed by Donnelly, Nordborg, and Joyce (DNJ) (Donnelly et al., 2001). DNJ (2001) simulated samples of allele frequencies from non-neutral models using neutral models as the auxiliary distribution in a rejection algorithm. However, patterns of allele frequencies produced by neutral models are dissimilar to patterns of allele frequencies produced by non-neutral models, making the rejection method inefficient. For example, in some cases the methods in DNJ (2001) require 10(9) rejections before a sample from the non-neutral model is accepted. Our method simulates samples directly from the distribution of non-neutral models, making simulation methods a practical tool to study the behavior of the likelihood and to perform inference on the strength of selection.
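
    The rejection idea being improved on can be sketched generically (Python; the weight exp(sigma*h(x)), with h the homozygosity, is a schematic stand-in for the selection term of a parent-independent mutation model, not the paper's exact form):

      import numpy as np

      def rejection_sample(theta, sigma, h, k=4, rng=np.random.default_rng()):
          w_max = np.exp(abs(sigma))             # bound on the weight when 0 <= h <= 1
          attempts = 0
          while True:
              attempts += 1
              x = rng.dirichlet([theta] * k)     # neutral proposal for k alleles
              if rng.uniform() < np.exp(sigma * h(x)) / w_max:
                  return x, attempts             # attempts explode as sigma grows

      sample, n_tries = rejection_sample(theta=1.0, sigma=5.0, h=lambda x: np.sum(x**2))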

  20. A statistical method for estimating rates of soil development and ages of geologic deposits: A design for soil-chronosequence studies

    USGS Publications Warehouse

    Switzer, P.; Harden, J.W.; Mark, R.K.

    1988-01-01

    A statistical method for estimating rates of soil development in a given region based on calibration from a series of dated soils is used to estimate ages of soils in the same region that are not dated directly. The method is designed specifically to account for sampling procedures and uncertainties that are inherent in soil studies. Soil variation and measurement error, uncertainties in calibration dates and their relation to the age of the soil, and the limited number of dated soils are all considered. Maximum likelihood (ML) is employed to estimate a parametric linear calibration curve, relating soil development to time or age on suitably transformed scales. Soil variation on a geomorphic surface of a certain age is characterized by replicate sampling of soils on each surface; such variation is assumed to have a Gaussian distribution. The age of a geomorphic surface is described by older and younger bounds. This technique allows age uncertainty to be characterized by either a Gaussian distribution or by a triangular distribution using minimum, best-estimate, and maximum ages. The calibration curve is taken to be linear after suitable (in certain cases logarithmic) transformations, if required, of the soil parameter and age variables. Soil variability, measurement error, and departures from linearity are described in a combined fashion using Gaussian distributions with variances particular to each sampled geomorphic surface and the number of sample replicates. Uncertainty in age of a geomorphic surface used for calibration is described using three parameters by one of two methods. In the first method, upper and lower ages are specified together with a coverage probability; this specification is converted to a Gaussian distribution with the appropriate mean and variance. In the second method, "absolute" older and younger ages are specified together with a most probable age; this specification is converted to an asymmetric triangular distribution with mode at the most probable age. The statistical variability of the ML-estimated calibration curve is assessed by a Monte Carlo method in which simulated data sets repeatedly are drawn from the distributional specification; calibration parameters are reestimated for each such simulation in order to assess their statistical variability. Several examples are used for illustration. The age of undated soils in a related setting may be estimated from the soil data using the fitted calibration curve. A second simulation to assess age estimate variability is described and applied to the examples. © 1988 International Association for Mathematical Geology.
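
    The Monte Carlo assessment can be sketched as follows (Python; ordinary least squares stands in for the paper's ML fit, and all inputs are illustrative):

      import numpy as np

      rng = np.random.default_rng(0)
      ages = [(2.0, 4.0, 7.0), (8.0, 10.0, 15.0), (30.0, 40.0, 60.0)]   # ka: (younger, best, older)
      soil = [(0.21, 0.03), (0.35, 0.04), (0.62, 0.05)]                 # (mean, sd) of a soil index

      fits = []
      for _ in range(5000):
          # redraw ages from asymmetric triangular distributions, soil values
          # from Gaussians, and refit the (log-age) calibration line
          x = np.log10([rng.triangular(lo, mode, hi) for lo, mode, hi in ages])
          y = [rng.normal(m, s) for m, s in soil]
          fits.append(np.polyfit(x, y, 1))

      slope_sd, intercept_sd = np.std(fits, axis=0)   # spread of the calibration parameters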

  1. A general methodology for maximum likelihood inference from band-recovery data

    USGS Publications Warehouse

    Conroy, M.J.; Williams, B.K.

    1984-01-01

    A numerical procedure is described for obtaining maximum likelihood estimates and associated maximum likelihood inference from band-recovery data. The method is used to illustrate previously developed one-age-class band-recovery models, and is extended to new models, including the analysis with a covariate for survival rates and variable-time-period recovery models. Extensions to R-age-class band-recovery, mark-recapture models, and twice-yearly marking are discussed. A FORTRAN program provides computations for these models.

  2. Simultaneous analysis of eight bioactive compounds in Danning tablet by HPLC-ESI-MS and HPLC-UV.

    PubMed

    Liu, Runhui; Zhang, Jiye; Liang, Mingjin; Zhang, Weidong; Yan, Shikai; Lin, Min

    2007-02-19

    A high performance liquid chromatography (HPLC) method coupled with electrospray tandem mass spectrometry (ESI-MS) and an ultraviolet detector (UV) has been developed for the simultaneous analysis of eight bioactive compounds in Danning tablet (hyperin, hesperidin, resveratrol, nobiletin, curcumine, emodin, chrysophanol, and physcion), a widely used prescription of traditional Chinese medicine (TCM). The chromatographic separation was performed on a ZORBAX Extend C(18) analytical column by gradient elution with acetonitrile and formate buffer (containing 0.05% formic acid, adjusted with triethylamine to pH 5.0) at a flow rate of 0.8 ml/min. The eight compounds in Danning tablet were identified and their MS(n) fragmentations were elucidated by using HPLC-ESI-MS, and the contents of these compounds were determined by the HPLC-UV method. The standard calibration curves were linear between 5.0 and 100 microg/ml for hyperin, 10-200 microg/ml for hesperidin, 1.0-150 microg/ml for resveratrol, 2.0-120 microg/ml for nobiletin, 2.0-225 microg/ml for curcumine, 20-300 microg/ml for emodin, 2.0-200 microg/ml for chrysophanol, and 20-250 microg/ml for physcion, with regression coefficient r(2)>0.9995. The intra-day and inter-day precisions of this method were evaluated, with R.S.D. values less than 0.7% and 1.3%, respectively. The recoveries of the eight investigated compounds ranged from 99.3% to 100.2%, with R.S.D. values less than 1.5%. This method was successfully used to determine the eight target compounds in 10 batches of Danning tablet.

  3. Profiling tumour heterogeneity through circulating tumour DNA in patients with pancreatic cancer

    PubMed Central

    Neal, Christopher P; Mistry, Vilas; Page, Karen; Dennison, Ashley R; Isherwood, John; Hastings, Robert; Luo, JinLi; Moore, David A; Howard, Pringle J; Miguel, Martins L; Pritchard, Catrin; Manson, Margaret; Shaw, Jacqui A

    2017-01-01

    The majority of pancreatic ductal adenocarcinomas (PDAC) are diagnosed late so that surgery is rarely curative. Earlier detection could significantly increase the likelihood of successful treatment and improve survival. The aim of the study was to provide proof of principle that point mutations in key cancer genes can be identified by sequencing circulating free DNA (cfDNA) and that this could be used to detect early PDACs and potentially, premalignant lesions, to help target early effective treatment. Targeted next generation sequencing (tNGS) analysis of mutation hotspots in 50 cancer genes was conducted in 26 patients with PDAC, 14 patients with chronic pancreatitis (CP) and 12 healthy controls with KRAS status validated by digital droplet PCR. A higher median level of total cfDNA was observed in patients with PDAC (585 ng/ml) compared to either patients with CP (300 ng/ml) or healthy controls (175 ng/ml). PDAC tissue showed wide mutational heterogeneity, whereas KRAS was the most commonly mutated gene in cfDNA of patients with PDAC and was significantly associated with a poor disease specific survival (p=0.018). This study demonstrates that tNGS of cfDNA is feasible to characterise the circulating genomic profile in PDAC and that driver mutations in KRAS have prognostic value but cannot currently be used to detect early emergence of disease. Importantly, monitoring total cfDNA levels may have utility in individuals “at risk” and warrants further investigation. PMID:29152076

  4. Isoflurane minimum alveolar concentration reduction by fentanyl.

    PubMed

    McEwan, A I; Smith, C; Dyar, O; Goodman, D; Smith, L R; Glass, P S

    1993-05-01

    Isoflurane is commonly combined with fentanyl during anesthesia. Because of hysteresis between plasma and effect site, bolus administration of fentanyl does not accurately describe the interaction between these drugs. The purpose of this study was to determine the MAC reduction of isoflurane by fentanyl when both drugs had reached steady biophase concentrations. Seventy-seven patients were randomly allocated to receive either no fentanyl or fentanyl at several predetermined plasma concentrations. Fentanyl was administered using a computer-assisted continuous infusion device. Patients were also randomly allocated to receive a predetermined steady state end-tidal concentration of isoflurane. Blood samples for fentanyl concentration were taken at 10 min after initiation of the infusion and before and immediately after skin incision. A minimum of 20 min was allowed between the start of the fentanyl infusion and skin incision. The reduction in the MAC of isoflurane by the measured fentanyl concentration was calculated using a maximum likelihood solution to a logistic regression model. There was an initial steep reduction in the MAC of isoflurane by fentanyl, with 3 ng/ml resulting in a 63% MAC reduction. A ceiling effect was observed with 10 ng/ml providing only a further 19% reduction in MAC. A 50% decrease in MAC was produced by a fentanyl concentration of 1.67 ng/ml. Defining the MAC reduction of isoflurane by all the opioids allows their more rational administration with inhalational anesthetics and provides a comparison of their relative anesthetic potencies.
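
    The final calculation can be sketched as a maximum-likelihood logistic fit in the isoflurane concentration, with MAC read off at a 50% movement probability (Python; the data below are illustrative, not the study's):

      import numpy as np
      from scipy.optimize import minimize

      conc = np.array([0.4, 0.5, 0.6, 0.7, 0.8, 0.9])   # end-tidal isoflurane (%)
      moved = np.array([1, 1, 1, 0, 1, 0])              # movement at incision

      def neg_log_lik(params):
          a, b = params
          p = 1.0 / (1.0 + np.exp(-(a + b * conc)))     # P(move) at each concentration
          p = np.clip(p, 1e-9, 1 - 1e-9)
          return -np.sum(moved * np.log(p) + (1 - moved) * np.log(1 - p))

      a_hat, b_hat = minimize(neg_log_lik, x0=[0.0, -1.0]).x
      print(-a_hat / b_hat)                             # MAC: concentration with P(move) = 0.5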

  5. Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager

    NASA Astrophysics Data System (ADS)

    Lowell, A. W.; Boggs, S. E.; Chiu, C. L.; Kierans, C. A.; Sleator, C.; Tomsick, J. A.; Zoglauer, A. C.; Chang, H.-K.; Tseng, C.-H.; Yang, C.-Y.; Jean, P.; von Ballmoos, P.; Lin, C.-H.; Amman, M.

    2017-10-01

    Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ˜21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.
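
    For an idealized polarimeter, the unbinned maximum-likelihood fit can be sketched as follows (Python; the modulation factor mu is taken as known, and the events below are unpolarized toy data):

      import numpy as np
      from scipy.optimize import minimize

      def neg_log_lik(params, phi, mu=0.5):
          # modulation-curve density p(phi) = (1 + mu*P*cos(2*(phi - phi0))) / (2*pi)
          p_frac, phi0 = params
          dens = (1.0 + mu * p_frac * np.cos(2.0 * (phi - phi0))) / (2.0 * np.pi)
          return -np.sum(np.log(np.clip(dens, 1e-12, None)))

      phi = np.random.default_rng(1).uniform(0, 2 * np.pi, 2000)
      fit = minimize(neg_log_lik, x0=[0.1, 0.0], args=(phi,),
                     bounds=[(0.0, 1.0), (-np.pi, np.pi)])
      print(fit.x)   # estimated (P, phi0)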

  6. Patch-based image reconstruction for PET using prior-image derived dictionaries

    NASA Astrophysics Data System (ADS)

    Tahaei, Marzieh S.; Reader, Andrew J.

    2016-09-01

    In PET image reconstruction, regularization is often needed to reduce the noise in the resulting images. Patch-based image processing techniques have recently been successfully used for regularization in medical image reconstruction through a penalized likelihood framework. Re-parameterization within reconstruction is another powerful regularization technique in which the object in the scanner is re-parameterized using coefficients for spatially-extensive basis vectors. In this work, a method for extracting patch-based basis vectors from the subject’s MR image is proposed. The coefficients for these basis vectors are then estimated using the conventional MLEM algorithm. Furthermore, using the alternating direction method of multipliers, an algorithm for optimizing the Poisson log-likelihood while imposing sparsity on the parameters is also proposed. This novel method is then utilized to find sparse coefficients for the patch-based basis vectors extracted from the MR image. The results indicate the superiority of the proposed methods to patch-based regularization using the penalized likelihood framework.
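
    The conventional MLEM update mentioned above maximizes the Poisson log-likelihood and applies unchanged to coefficients of patch-based basis vectors; a minimal sketch (Python):

      import numpy as np

      def mlem(A, y, n_iter=50):
          # A: system (projection) matrix, y: measured counts
          x = np.ones(A.shape[1])
          sens = A.T @ np.ones(A.shape[0])            # sensitivity (column sums)
          for _ in range(n_iter):
              proj = A @ x
              ratio = y / np.clip(proj, 1e-12, None)  # measured / estimated counts
              x *= (A.T @ ratio) / np.clip(sens, 1e-12, None)
          return x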

  7. Free energy reconstruction from steered dynamics without post-processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athenes, Manuel, E-mail: Manuel.Athenes@cea.f; Condensed Matter and Materials Division, Physics and Life Sciences Directorate, LLNL, Livermore, CA 94551; Marinica, Mihai-Cosmin

    2010-09-20

    Various methods achieving importance sampling in ensembles of nonequilibrium trajectories enable one to estimate free energy differences and, by maximum-likelihood post-processing, to reconstruct free energy landscapes. Here, based on Bayes theorem, we propose a more direct method in which a posterior likelihood function is used both to construct the steered dynamics and to infer the contribution to equilibrium of all the sampled states. The method is implemented with two steering schedules. First, using non-autonomous steering, we calculate the migration barrier of the vacancy in α-Fe. Second, using an autonomous scheduling related to metadynamics and equivalent to temperature-accelerated molecular dynamics, we accurately reconstruct the two-dimensional free energy landscape of the 38-atom Lennard-Jones cluster as a function of an orientational bond-order parameter and energy, down to the solid-solid structural transition temperature of the cluster and without maximum-likelihood post-processing.

  8. Development and validation of a method for the determination of low-ppb levels of macrocyclic lactones in butter, using HPLC-fluorescence.

    PubMed

    Macedo, Fabio; Marsico, Eliane Teixeira; Conte-Júnior, Carlos Adam; de Resende, Michele Fabri; Brasil, Taila Figueiredo; Pereira Netto, Annibal Duarte

    2015-07-15

    An analytical method was developed and validated for the simultaneous determination of four macrocyclic lactones (ML) (abamectin, doramectin, ivermectin and moxidectin) in butter, using liquid chromatography with fluorescence detection. The method employed heated liquid-liquid extraction and a mixture of acetonitrile, ethyl acetate and water, with preconcentration and derivatization, to produce stable fluorescent derivatives. The chromatographic run time was <12.5 min, with excellent separation. The method validation followed international guidelines and employed fortified butter samples. The figures of merit obtained, e.g. recovery (72.4-106.5%), repeatability (8.8%), within-laboratory reproducibility (15.7%) and limits of quantification (0.09-0.16 μg kg(-1)) were satisfactory for the desired application. The application of the method to real samples showed that ML residues were present in six of the ten samples evaluated. The method proved to be simple, easy and appropriate for simultaneous determination of ML residues in butter. To our knowledge, this is the first method described for the evaluation of ML in butter. Copyright © 2015. Published by Elsevier Ltd.

  9. GPSit: An automated method for evolutionary analysis of nonculturable ciliated microeukaryotes.

    PubMed

    Chen, Xiao; Wang, Yurui; Sheng, Yalan; Warren, Alan; Gao, Shan

    2018-05-01

    Microeukaryotes are among the most important components of the microbial food web in almost all aquatic and terrestrial ecosystems worldwide. In order to gain a better understanding of their roles and functions in ecosystems, sequencing coupled with phylogenomic analyses of entire genomes or transcriptomes is increasingly used to reconstruct the evolutionary history and classification of these microeukaryotes and thus provide a more robust framework for determining their systematics and diversity. However, phylogenomic research usually requires high levels of hands-on bioinformatics experience. Here, we propose an efficient automated method, "Guided Phylogenomic Search in trees" (GPSit), which starts from predicted protein sequences of newly sequenced species and a well-defined customized orthologous database. Compared with previous protocols, our method streamlines the entire workflow by integrating all essential and other optional operations; in so doing, the manual operation time for reconstructing phylogenetic relationships is reduced from days to several hours. Furthermore, GPSit supports user-defined parameters in most steps and thus allows users to adapt it to their studies. The effectiveness of GPSit is demonstrated by incorporating available online data and new single-cell data of three nonculturable marine ciliates (Anteholosticha monilata, Deviata sp. and Diophrys scutum) under moderate sequencing coverage (~5×). Our results indicate that the former could reconstruct robust "deep" phylogenetic relationships while the latter reveals the presence of intermediate taxa in shallow relationships. Based on empirical phylogenomic data, we also used GPSit to evaluate the impact of different levels of missing data on two commonly used methods of phylogenetic analyses, maximum likelihood (ML) and Bayesian inference (BI). We found that BI is less sensitive to missing data when fast-evolving sites are removed. © 2018 John Wiley & Sons Ltd.

  10. Global identification of stochastic dynamical systems under different pseudo-static operating conditions: The functionally pooled ARMAX case

    NASA Astrophysics Data System (ADS)

    Sakellariou, J. S.; Fassois, S. D.

    2017-01-01

    The identification of a single global model for a stochastic dynamical system operating under various conditions is considered. Each operating condition is assumed to have a pseudo-static effect on the dynamics and be characterized by a single measurable scheduling variable. Identification is accomplished within a recently introduced Functionally Pooled (FP) framework, which offers a number of advantages over Linear Parameter Varying (LPV) identification techniques. The focus of the work is on the extension of the framework to include the important FP-ARMAX model case. Compared to their simpler FP-ARX counterparts, FP-ARMAX models are much more general and offer improved flexibility in describing various types of stochastic noise, but at the same time lead to a more complicated, non-quadratic, estimation problem. Prediction Error (PE), Maximum Likelihood (ML), and multi-stage estimation methods are postulated, and the PE estimator optimality, in terms of consistency and asymptotic efficiency, is analytically established. The postulated estimators are numerically assessed via Monte Carlo experiments, while the effectiveness of the approach and its superiority over its FP-ARX counterpart are demonstrated via an application case study pertaining to simulated railway vehicle suspension dynamics under various mass loading conditions.

  11. Mitochondrial DNA phylogeny of camel spiders (Arachnida: Solifugae) from Iran.

    PubMed

    Maddahi, Hassan; Khazanehdari, Mahsa; Aliabadian, Mansour; Kami, Haji Gholi; Mirshamsi, Amin; Mirshamsi, Omid

    2017-11-01

    In the present study, the mitochondrial DNA phylogeny of five solifuge families of Iran is presented using phylogenetic analysis of mitochondrial cytochrome c oxidase, subunit 1 (COI) sequence data. Moreover, we included available representatives from seven families from GenBank to examine the genetic distance between Old and New World taxa and to test the phylogenetic relationships among more solifuge families. Phylogenetic relationships were reconstructed using two probabilistic methods, Maximum Likelihood (ML) and Bayesian inference (BI). The resulting topologies demonstrated the monophyly of the families Daesiidae, Eremobatidae, Galeodidae, Karschiidae and Rhagodidae, whereas the monophyly of the families Ammotrechidae and Gylippidae was not supported. Also, within the family Eremobatidae, the subfamilies Eremobatinae and Therobatinae and the genus Hemerotrecha were paraphyletic or polyphyletic. According to the resulting topologies, the taxonomic placements of Trichotoma michaelseni (Gylippidae) and Nothopuga sp. 1 (Ammotrechidae) still remain in question and their revision might be appropriate. Within the family Galeodidae, the validity of the genus Galeodopsis is supported, while the validity of the genus Paragaleodes remains uncertain. Moreover, our results revealed that the species Galeodes bacillatus and Rhagodes melanochaetus are junior synonyms of G. caspius and R. eylandti, respectively.

  12. Plastid phylogenomics of the cool-season grass subfamily: clarification of relationships among early-diverging tribes.

    PubMed

    Saarela, Jeffery M; Wysocki, William P; Barrett, Craig F; Soreng, Robert J; Davis, Jerrold I; Clark, Lynn G; Kelchner, Scot A; Pires, J Chris; Edger, Patrick P; Mayfield, Dustin R; Duvall, Melvin R

    2015-05-04

    Whole plastid genomes are being sequenced rapidly from across the green plant tree of life, and phylogenetic analyses of these are increasing resolution and support for relationships that have varied among or been unresolved in earlier single- and multi-gene studies. Pooideae, the cool-season grass lineage, is the largest of the 12 grass subfamilies and includes important temperate cereals, turf grasses and forage species. Although numerous studies of the phylogeny of the subfamily have been undertaken, relationships among some 'early-diverging' tribes conflict among studies, and some relationships among subtribes of Poeae have not yet been resolved. To address these issues, we newly sequenced 25 whole plastomes, which showed rearrangements typical of Poaceae. These plastomes represent 9 tribes and 11 subtribes of Pooideae, and were analysed with 20 existing plastomes for the subfamily. Maximum likelihood (ML), maximum parsimony (MP) and Bayesian inference (BI) robustly resolve most deep relationships in the subfamily. Complete plastome data provide increased nodal support compared with protein-coding data alone at nodes that are not maximally supported. Following the divergence of Brachyelytrum, Phaenospermateae, Brylkinieae-Meliceae and Ampelodesmeae-Stipeae are the successive sister groups of the rest of the subfamily. Ampelodesmeae are nested within Stipeae in the plastome trees, consistent with its hybrid origin between a phaenospermatoid and a stipoid grass (the maternal parent). The core Pooideae are strongly supported and include Brachypodieae, a Bromeae-Triticeae clade and Poeae. Within Poeae, a novel sister group relationship between Phalaridinae and Torreyochloinae is found, and the relative branching order of this clade and Aveninae, with respect to an Agrostidinae-Brizinae clade, are discordant between MP and ML/BI trees. Maximum likelihood and Bayesian analyses strongly support Airinae and Holcinae as the successive sister groups of a Dactylidinae-Loliinae clade. Published by Oxford University Press on behalf of the Annals of Botany Company.

  13. Comparison of in-house biotin-avidin tetanus IgG enzyme-linked-immunosorbent assay (ELISA) with gold standard in vivo mouse neutralization test for the detection of low level antibodies.

    PubMed

    Sonmez, Cemile; Coplu, Nilay; Gozalan, Aysegul; Akin, Lutfu; Esen, Berrin

    2017-06-01

    Detection of anti-tetanus antibody levels is necessary both for determining the immune status of individuals and for planning preventive measures. ELISA is the preferred in vitro test; however, it can be affected by cross-reacting antibodies. A previously developed in-house ELISA test was found unreliable for antibody levels ≤1.0 IU/ml, so a new method was developed to detect low antibody levels correctly. The aim of the present study was to compare the results of the newly developed in-house biotin-avidin tetanus IgG ELISA test with the in vivo mouse neutralization test for antibody levels ≤1.0 IU/ml. A total of 54 serum samples, with antibody levels in three ranges (≤0.01 IU/ml, 0.01-0.1 IU/ml, and 0.1-1 IU/ml) as detected by the in vivo mouse neutralization test, were studied by the newly developed in-house biotin-avidin tetanus IgG ELISA test. The test was validated using five different concentrations (0.01, 0.06, 0.2, 0.5, and 1.0 IU/ml). A statistically significant correlation (r(2) = 0.9967, p = 0.001) between the in vivo mouse neutralization test and the in-house biotin-avidin tetanus IgG ELISA test was observed. For the tested concentrations, intra-assay and inter-assay coefficients of variation were ≤15%, and accuracy, sensitivity and specificity were determined. The in-house biotin-avidin tetanus IgG ELISA test can be an alternative to the in vivo mouse neutralization method for the detection of levels ≤1.0 IU/ml; by using it, individuals with non-protective levels will be reliably detected. Copyright © 2017. Published by Elsevier B.V.

  14. Activities of E1210 and comparator agents tested by CLSI and EUCAST broth microdilution methods against Fusarium and Scedosporium species identified using molecular methods.

    PubMed

    Castanheira, Mariana; Duncanson, Frederick P; Diekema, Daniel J; Guarro, Josep; Jones, Ronald N; Pfaller, Michael A

    2012-01-01

    Fusarium (n = 67) and Scedosporium (n = 63) clinical isolates were tested by two reference broth microdilution (BMD) methods against a novel broad-spectrum (active against both yeasts and molds) antifungal, E1210, and comparator agents. E1210 inhibits the inositol acylation step in glycophosphatidylinositol (GPI) biosynthesis, resulting in defects in fungal cell wall biosynthesis. Five species complex organisms/species of Fusarium (4 isolates unspeciated) and 28 Scedosporium apiospermum, 7 Scedosporium aurantiacum, and 28 Scedosporium prolificans species were identified by molecular techniques. Comparator antifungal agents included anidulafungin, caspofungin, itraconazole, posaconazole, voriconazole, and amphotericin B. E1210 was highly active against all of the tested isolates, with minimum effective concentration (MEC)/MIC(90) values (μg/ml) for E1210, anidulafungin, caspofungin, itraconazole, posaconazole, voriconazole, and amphotericin B, respectively, for Fusarium of 0.12, >16, >16, >8, >8, 8, and 4 μg/ml. E1210 was very potent against the Scedosporium spp. tested. The E1210 MEC(90) was 0.12 μg/ml for S. apiospermum, but 1 to >8 μg/ml for other tested agents. Against S. aurantiacum, the MEC(50) for E1210 was 0.06 μg/ml versus 0.5 to >8 μg/ml for the comparators. Against S. prolificans, the MEC(90) for E1210 was only 0.12 μg/ml, compared to >4 μg/ml for amphotericin B and >8 μg/ml for itraconazole, posaconazole, and voriconazole. Both CLSI and EUCAST methods were highly concordant for E1210 and all comparator agents. The essential agreement (EA; ±2 doubling dilutions) was >93% for all comparisons, with the exception of posaconazole and F. oxysporum species complex (SC) (60%), posaconazole and S. aurantiacum (85.7%), and voriconazole and S. aurantiacum (85.7%). In conclusion, E1210 exhibited very potent and broad-spectrum antifungal activity against azole- and amphotericin B-resistant strains of Fusarium spp. and Scedosporium spp. Furthermore, in vitro susceptibility testing of E1210 against isolates of Fusarium and Scedosporium may be accomplished using either of the CLSI or EUCAST BMD methods, each producing very similar results.

  16. Performance Characteristics of the QUANTIPLEX HIV-1 RNA 3.0 Assay for Detection and Quantitation of Human Immunodeficiency Virus Type 1 RNA in Plasma

    PubMed Central

    Erice, Alejo; Brambilla, Donald; Bremer, James; Jackson, J. Brooks; Kokka, Robert; Yen-Lieberman, Belinda; Coombs, Robert W.

    2000-01-01

    The QUANTIPLEX HIV-1 RNA assay, version 3.0 (a branched DNA, version 3.0, assay [bDNA 3.0 assay]), was evaluated by analyzing spiked and clinical plasma samples and was compared with the AMPLICOR HIV-1 MONITOR Ultrasensitive (ultrasensitive reverse transcription-PCR [US-RT-PCR]) method. A panel of spiked plasma samples that contained 0 to 750,000 copies of human immunodeficiency virus type 1 (HIV-1) RNA per ml was tested four times in each of four laboratories (1,344 assays). Negative results (<50 copies/ml) were obtained in 30 of 32 (94%) assays with seronegative samples, 66 of 128 (52%) assays with HIV-1 RNA at 50 copies/ml, and 5 of 128 (4%) assays with HIV-1 RNA at 100 copies/ml. The assay was linear from 100 to 500,000 copies/ml. The within-run standard deviation (SD) of the log10 estimated HIV-1 RNA concentration was 0.08 at 1,000 to 500,000 copies/ml, increased below 1,000 copies/ml, and was 0.17 at 100 copies/ml. Between-run reproducibility at 100 to 500 copies/ml was <0.10 log10 in most comparisons. Interlaboratory differences across runs were ≤0.10 log10 at all concentrations examined. A subset of the panel (25 to 500 copies/ml) was also analyzed by the US-RT-PCR assay. The within-run SD varied inversely with the log10 HIV-1 RNA concentration but was higher than the SD for the bDNA 3.0 assay at all concentrations. Log-log regression analysis indicated that the two methods produced very similar estimates at 100 to 500 copies/ml. In parallel testing of clinical specimens with low HIV-1 RNA levels, 80 plasma samples with <50 copies/ml by the US-RT-PCR assay had <50 copies/ml when they were retested by the bDNA 3.0 assay. In contrast, 11 of 78 (14%) plasma samples with <50 copies/ml by the bDNA 3.0 assay had ≥50 copies/ml when they were retested by the US-RT-PCR assay (median, 86 copies/ml; range, 50 to 217 copies/ml). Estimation of bDNA 3.0 values of <50 copies/ml by extending the standard curve of the assay showed that these samples with discrepant results had higher HIV-1 RNA levels than the samples with concordant results (median, 34 versus 17 copies/ml; P = 0.0051 by the Wilcoxon two-sample test). The excellent reproducibility, broad linear range, and good sensitivity of the bDNA 3.0 assay make it a very attractive method for quantitation of HIV-1 RNA levels in plasma. PMID:10921936

  17. [Determination of residual solvents in 7-amino-3-chloro cephalosporanic acid by gas chromatography].

    PubMed

    Ma, Li; Yao, Tong-wei

    2011-01-01

    The aim was to develop a gas chromatography method for the determination of residual solvents in 7-amino-3-chloro cephalosporanic acid (7-ACCA). The residual levels of acetone, methanol, dichloromethane, ethyl acetate, isobutanol, pyridine and toluene in 7-ACCA were measured by gas chromatography using an Agilent INNOWAX capillary column (30 m × 0.32 mm, 0.5 μm). The initial column temperature was 70°C, maintained for 6 min, and then raised at 10°C/min to 160°C and held for 1 min. Nitrogen was used as the carrier gas and an FID as the detector. The carrier flow was 1.0 ml/min, and the temperatures of the injection port and detector were 200°C and 250°C, respectively. The limits of detection for acetone, methanol, dichloromethane, ethyl acetate, isobutanol, pyridine and toluene in 7-ACCA were 2.5 μg/ml, 1.5 μg/ml, 15 μg/ml, 2.5 μg/ml, 2.5 μg/ml, 2.5 μg/ml and 11 μg/ml, respectively. Only acetone was detected in the sample, at a level below the limit set by the Chinese Pharmacopoeia (Ch.P). The method can effectively detect the residual solvents in 7-ACCA.

  18. Phylogeny of Morella rubra and Its Relatives (Myricaceae) and Genetic Resources of Chinese Bayberry Using RAD Sequencing

    PubMed Central

    Liu, Luxian; Jin, Xinjie; Chen, Nan; Li, Xian; Li, Pan; Fu, Chengxin

    2015-01-01

    Phylogenetic relationships among Chinese species of Morella (Myricaceae) are unresolved. Here, we use restriction site-associated DNA sequencing (RAD-seq) to identify candidate loci that will help in determining phylogenetic relationships among Morella rubra, M. adenophora, M. nana and M. esculenta. Three methods for inferring phylogeny, maximum parsimony (MP), maximum likelihood (ML) and Bayesian concordance, were applied to data sets including as many as 4253 RAD loci with 8360 parsimony informative variable sites. All three methods significantly favored the topology of (((M. rubra, M. adenophora), M. nana), M. esculenta). Two species from North America (M. cerifera and M. pensylvanica) were placed as sister to the four Chinese species. According to BEAST analysis, speciation of M. rubra occurred at about the Miocene-Pliocene boundary (5.28 Ma). Intraspecific divergence in M. rubra occurred in the late Pliocene (3.39 Ma). From pooled data, we assembled 29378, 21902 and 23552 de novo contigs with an average length of 229, 234 and 234 bp for M. rubra, M. nana and M. esculenta, respectively. The contigs were used to investigate functional classification of RAD tags in a BLASTX search. Additionally, we identified 3808 unlinked SNP sites across the four populations of M. rubra and discovered genes associated with fruit ripening and senescence, fruit quality and disease/defense metabolism based on the KEGG database. PMID:26431030

  19. Statistical analysis of fNIRS data: a comprehensive review.

    PubMed

    Tak, Sungho; Ye, Jong Chul

    2014-01-15

    Functional near-infrared spectroscopy (fNIRS) is a non-invasive method to measure brain activities using the changes of optical absorption in the brain through the intact skull. fNIRS has many advantages over other neuroimaging modalities such as positron emission tomography (PET), functional magnetic resonance imaging (fMRI), or magnetoencephalography (MEG), since it can directly measure blood oxygenation level changes related to neural activation with high temporal resolution. However, fNIRS signals are highly corrupted by measurement noise and physiology-based systemic interference. Careful statistical analyses are therefore required to extract neuronal activity-related signals from fNIRS data. In this paper, we provide an extensive review of historical developments of statistical analyses of fNIRS signals, which include motion artifact correction, short source-detector separation correction, principal component analysis (PCA)/independent component analysis (ICA), false discovery rate (FDR), serially-correlated errors, as well as inference techniques such as the standard t-test, F-test, analysis of variance (ANOVA), and the statistical parametric mapping (SPM) framework. In addition, to provide a unified view of various existing inference techniques, we explain a linear mixed effect model with restricted maximum likelihood (ReML) variance estimation, and show that most of the existing inference methods for fNIRS analysis can be derived as special cases. Some of the open issues in statistical analysis are also described. Copyright © 2013 Elsevier Inc. All rights reserved.
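    The ReML-based linear mixed-effects inference described above can be illustrated compactly. The sketch below is a minimal stand-in for the reviewed approach, not the SPM-fNIRS implementation itself: the HbO-like data, channel grouping and boxcar task regressor are all simulated assumptions.

```python
# Minimal sketch: linear mixed-effects model with ReML variance estimation.
# Hypothetical fNIRS-like data: several channels share a fixed task effect
# and differ by a random channel offset.
import numpy as np
from statsmodels.regression.mixed_linear_model import MixedLM

rng = np.random.default_rng(0)
n_channels, n_scans = 8, 200
task = np.tile(np.repeat([0.0, 1.0], 20), 5)[:n_scans]   # boxcar task regressor
channel = np.repeat(np.arange(n_channels), n_scans)
X = np.column_stack([np.ones(n_channels * n_scans), np.tile(task, n_channels)])

y = (X @ np.array([1.0, 0.5])                            # intercept + task effect
     + rng.normal(0, 0.3, n_channels)[channel]           # random channel offset
     + rng.normal(0, 0.5, n_channels * n_scans))         # measurement noise

result = MixedLM(y, X, groups=channel).fit(reml=True)    # ReML variance estimation
print(result.summary())
```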

  20. A Selective Overview of Variable Selection in High Dimensional Feature Space

    PubMed Central

    Fan, Jianqing

    2010-01-01

    High dimensional statistical problems arise from diverse fields of scientific research and technological development. Variable selection plays a pivotal role in contemporary statistical learning and scientific discoveries. The traditional idea of best subset selection methods, which can be regarded as a specific form of penalized likelihood, is computationally too expensive for many modern statistical applications. Other forms of penalized likelihood methods have been successfully developed over the last decade to cope with high dimensionality. They have been widely applied for simultaneously selecting important variables and estimating their effects in high dimensional statistical inference. In this article, we present a brief account of the recent developments of theory, methods, and implementations for high dimensional variable selection. Questions of what limits of dimensionality such methods can handle, what role penalty functions play, and what statistical properties the methods possess are rapidly driving advances in the field. The properties of non-concave penalized likelihood and its roles in high dimensional statistical modeling are emphasized. We also review some recent advances in ultra-high dimensional variable selection, with emphasis on independence screening and two-scale methods. PMID:21572976
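    As a concrete instance of the penalized-likelihood idea, the sketch below runs L1-penalized (lasso) regression in a p >> n setting on simulated data. It covers only the convex special case; the non-concave SCAD/MCP penalties emphasized in the article would need a dedicated solver.

```python
# Minimal sketch of penalized-likelihood variable selection with p >> n,
# using the convex L1 (lasso) penalty on simulated data.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n, p = 100, 1000                          # far more features than samples
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, -1.0, 2.5]    # only 5 truly active variables
y = X @ beta + rng.normal(size=n)

fit = LassoCV(cv=5).fit(X, y)             # penalty level chosen by cross-validation
selected = np.flatnonzero(fit.coef_)
print("selected variables:", selected)
```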

  1. Nonlinear finite element model updating for damage identification of civil structures using batch Bayesian estimation

    NASA Astrophysics Data System (ADS)

    Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.; de Callafon, Raymond A.

    2017-02-01

    This paper presents a framework for structural health monitoring (SHM) and damage identification of civil structures. This framework integrates advanced mechanics-based nonlinear finite element (FE) modeling and analysis techniques with a batch Bayesian estimation approach to estimate time-invariant model parameters used in the FE model of the structure of interest. The framework uses input excitation and dynamic response of the structure and updates a nonlinear FE model of the structure to minimize the discrepancies between predicted and measured response time histories. The updated FE model can then be interrogated to detect, localize, classify, and quantify the state of damage and predict the remaining useful life of the structure. As opposed to recursive estimation methods, in the batch Bayesian estimation approach, the entire time histories of the input excitation and output response of the structure are used as a batch of data to estimate the FE model parameters through a number of iterations. In the case of a non-informative prior, the batch Bayesian method leads to an extended maximum likelihood (ML) estimation method that jointly estimates the time-invariant model parameters and the measurement noise amplitude. The extended ML estimation problem is solved efficiently using a gradient-based interior-point optimization algorithm. Gradient-based optimization algorithms require the FE response sensitivities with respect to the model parameters to be identified. The FE response sensitivities are computed accurately and efficiently using the direct differentiation method (DDM). The estimation uncertainties are evaluated based on the Cramér-Rao lower bound (CRLB) theorem by computing the exact Fisher information matrix using the FE response sensitivities with respect to the model parameters. The accuracy of the proposed uncertainty quantification approach is verified using a sampling approach based on the unscented transformation. Two validation studies, based on realistic structural FE models of a bridge pier and a moment-resisting steel frame, are performed to validate the performance and accuracy of the presented nonlinear FE model updating approach and demonstrate its application to SHM. These validation studies show the excellent performance of the proposed framework for SHM and damage identification even in the presence of high measurement noise and/or grossly inaccurate initial estimates of the model parameters. Furthermore, the detrimental effects of input measurement noise on the performance of the proposed framework are illustrated and quantified through one of the validation studies.
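    The extended-ML step can be conveyed with a toy surrogate: jointly estimate a time-invariant model parameter and the measurement-noise amplitude from an entire response time history. The sketch below is not the authors' FE framework; the one-parameter response function is a hypothetical stand-in for an FE solver, and a general-purpose quasi-Newton optimizer replaces their interior-point algorithm.

```python
# Toy sketch of batch extended-ML estimation: jointly fit a stiffness-like
# parameter k and the noise amplitude sigma to a whole response history.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 500)

def model_response(k):
    """Stand-in for an FE-predicted response with stiffness-like parameter k."""
    return np.sin(np.sqrt(k) * t) * np.exp(-0.1 * t)

y_meas = model_response(4.0) + rng.normal(0, 0.05, t.size)   # synthetic data

def neg_log_likelihood(theta):
    k, log_sigma = theta
    sigma = np.exp(log_sigma)                 # keep the noise amplitude positive
    r = y_meas - model_response(k)
    return 0.5 * np.sum(r**2) / sigma**2 + t.size * log_sigma

est = minimize(neg_log_likelihood, x0=[3.0, np.log(0.1)],
               method="L-BFGS-B", bounds=[(0.1, 10.0), (None, None)])
print("estimated k, sigma:", est.x[0], np.exp(est.x[1]))
```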

  2. Comparison of macronutrient contents in human milk measured using mid-infrared human milk analyser in a field study vs. chemical reference methods.

    PubMed

    Zhu, Mei; Yang, Zhenyu; Ren, Yiping; Duan, Yifan; Gao, Huiyu; Liu, Biao; Ye, Wenhui; Wang, Jie; Yin, Shian

    2017-01-01

    Macronutrient contents in human milk are the common basis for estimating these nutrient requirements for both infants and lactating women. A mid-infrared human milk analyser (HMA, Miris, Sweden) was recently developed for determining macronutrient levels. The purpose of the study was to compare the accuracy and precision of the HMA method, applied to fresh milk samples in field studies, with chemical reference methods applied to frozen samples in the lab. Full breast milk was collected using electric pumps and fresh milk was analyzed in the field studies using the HMA. All human milk samples were then thawed and analyzed with chemical reference methods in the lab. The protein, fat and total solid levels were significantly correlated between the two methods, with correlation coefficients of 0.88, 0.93 and 0.78, respectively (p < 0.001). The mean protein content was significantly lower and the mean fat level significantly greater when measured using the HMA method (1.0 g 100 mL⁻¹ vs 1.2 g 100 mL⁻¹ and 3.7 g 100 mL⁻¹ vs 3.2 g 100 mL⁻¹, respectively, p < 0.001). Thus, linear recalibration could be used to improve mean estimation for both protein and fat. There was no significant correlation for lactose between the two methods (p > 0.05). There was no statistically significant difference in the mean total solid concentration (12.2 g 100 mL⁻¹ vs 12.3 g 100 mL⁻¹, p > 0.05). Overall, the HMA might be used to analyze macronutrients in fresh human milk with acceptable accuracy and precision after recalibrating fat and protein levels of field samples. © 2016 John Wiley & Sons Ltd.

  3. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  4. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  5. Pre-test probability of obstructive coronary stenosis in patients undergoing coronary CT angiography: Comparative performance of the Modified Diamond-Forrester algorithm versus methods incorporating cardiovascular risk factors.

    PubMed

    Ferreira, António Miguel; Marques, Hugo; Tralhão, António; Santos, Miguel Borges; Santos, Ana Rita; Cardoso, Gonçalo; Dores, Hélder; Carvalho, Maria Salomé; Madeira, Sérgio; Machado, Francisco Pereira; Cardim, Nuno; de Araújo Gonçalves, Pedro

    2016-11-01

    Current guidelines recommend the use of the Modified Diamond-Forrester (MDF) method to assess the pre-test likelihood of obstructive coronary artery disease (CAD). We aimed to compare the performance of the MDF method with two contemporary algorithms derived from multicenter trials that additionally incorporate cardiovascular risk factors: the calculator-based 'CAD Consortium 2' method, and the integer-based CONFIRM score. We assessed 1069 consecutive patients without known CAD undergoing coronary CT angiography (CCTA) for stable chest pain. Obstructive CAD was defined as the presence of coronary stenosis ≥50% on 64-slice dual-source CT. The three methods were assessed for calibration, discrimination, net reclassification, and changes in proposed downstream testing based upon calculated pre-test likelihoods. The observed prevalence of obstructive CAD was 13.8% (n=147). Overestimations of the likelihood of obstructive CAD were 140.1%, 9.8%, and 18.8%, respectively, for the MDF, CAD Consortium 2 and CONFIRM methods. The CAD Consortium 2 showed greater discriminative power than the MDF method, with a C-statistic of 0.73 vs. 0.70 (p<0.001), while the CONFIRM score did not (C-statistic 0.71, p=0.492). Reclassification of pre-test likelihood using the 'CAD Consortium 2' or CONFIRM scores resulted in a net reclassification improvement of 0.19 and 0.18, respectively, which would change the diagnostic strategy in approximately half of the patients. Newer risk factor-encompassing models allow for a more precise estimation of pre-test probabilities of obstructive CAD than the guideline-recommended MDF method. Adoption of these scores may improve disease prediction and change the diagnostic pathway in a significant proportion of patients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  6. Comparison of two immunoradiometric assays for serum thyrotropin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheinin, B.; Drew, H.; La France, N.

    1985-05-01

    An ultra-sensitive TSH assay capable of detecting subnormal TSH levels would be useful in confirming suppressed pituitary function as seen in hyperthyroidism. Two sensitive immunoradiometric TSH assays (IRMAs) were studied to determine how well they distinguished thyrotoxic patients from normal subjects. Serono Diagnostics' method employs three monoclonal antibodies specific for different regions of the TSH molecule with a minimum detectable dose (MDD) limit of 0.1 μIU/ml. Precision studies using a low TSH control in the 1.8 μIU/ml range gave CVs of 15.0%. Boots-Celltech Diagnostics' method is a two-site IRMA using two monoclonal antibodies. The MDD limit is 0.05 μIU/ml with precision CVs of 29.3% at a TSH control range of 0.62 μIU/ml. In 24 chemically thyrotoxic patients, the mean serum TSH concentration was significantly lower than in the normal control subjects: for Serono, 0.19 μIU/ml vs. 2.34 μIU/ml, and for Boots-Celltech, 0.18 μIU/ml vs. 2.06 μIU/ml. The range of TSH was 0 to 0.5 μIU/ml in thyrotoxic patients using Serono, with the exception of one patient having a TSH value of 0.8 μIU/ml. The normal range was 0.6 to 6.0 μIU/ml. For Boots-Celltech the thyrotoxic range was 0 to 0.2 μIU/ml, with that same thyrotoxic patient giving a TSH value of 0.7 μIU/ml, and a normal range of 0.6 to 5.0 μIU/ml. Serum TSH measurements using both procedures are highly sensitive for distinguishing thyrotoxic patients from normal subjects and are useful to confirm suppressed pituitary function.

  7. Spectrofluorometric and spectrophotometric methods for the determination of sitagliptin in binary mixture with metformin and ternary mixture with metformin and sitagliptin alkaline degradation product.

    PubMed

    El-Bagary, Ramzia I; Elkady, Ehab F; Ayoub, Bassam M

    2011-03-01

    Simple, accurate and precise spectrofluorometric and spectrophotometric methods have been developed and validated for the determination of sitagliptin phosphate monohydrate (STG) and metformin HCl (MET). Zero-order, first-derivative and ratio-derivative spectrophotometric methods and a fluorometric method have been developed. The zero-order spectrophotometric method was used for the determination of STG in the range of 50-300 μg mL⁻¹. The first-derivative spectrophotometric method was used for the determination of MET in the range of 2-12 μg mL⁻¹ and STG in the range of 50-300 μg mL⁻¹ by measuring the peak amplitudes at 246.5 nm and 275 nm, respectively. The first derivative of ratio spectra spectrophotometric method used the peak amplitudes at 232 nm and 239 nm for the determination of MET in the range of 2-12 μg mL⁻¹. The fluorometric method was used for the determination of STG in the range of 0.25-110 μg mL⁻¹. The proposed methods were used to determine each drug in a binary mixture with metformin and in a ternary mixture with metformin and the sitagliptin alkaline degradation product obtained after alkaline hydrolysis of sitagliptin. The results were statistically compared using one-way analysis of variance (ANOVA). The methods developed were satisfactorily applied to the analysis of pharmaceutical formulations and proved to be specific and accurate for the quality control of the cited drugs in pharmaceutical dosage forms.

  8. Spectrofluorometric and Spectrophotometric Methods for the Determination of Sitagliptin in Binary Mixture with Metformin and Ternary Mixture with Metformin and Sitagliptin Alkaline Degradation Product

    PubMed Central

    El-Bagary, Ramzia I.; Elkady, Ehab F.; Ayoub, Bassam M.

    2011-01-01

    Simple, accurate and precise spectrofluorometric and spectrophotometric methods have been developed and validated for the determination of sitagliptin phosphate monohydrate (STG) and metformin HCl (MET). Zero-order, first-derivative and ratio-derivative spectrophotometric methods and a fluorometric method have been developed. The zero-order spectrophotometric method was used for the determination of STG in the range of 50-300 μg mL⁻¹. The first-derivative spectrophotometric method was used for the determination of MET in the range of 2-12 μg mL⁻¹ and STG in the range of 50-300 μg mL⁻¹ by measuring the peak amplitudes at 246.5 nm and 275 nm, respectively. The first derivative of ratio spectra spectrophotometric method used the peak amplitudes at 232 nm and 239 nm for the determination of MET in the range of 2-12 μg mL⁻¹. The fluorometric method was used for the determination of STG in the range of 0.25-110 μg mL⁻¹. The proposed methods were used to determine each drug in a binary mixture with metformin and in a ternary mixture with metformin and the sitagliptin alkaline degradation product obtained after alkaline hydrolysis of sitagliptin. The results were statistically compared using one-way analysis of variance (ANOVA). The methods developed were satisfactorily applied to the analysis of pharmaceutical formulations and proved to be specific and accurate for the quality control of the cited drugs in pharmaceutical dosage forms. PMID:23675222

  9. Extreme data compression for the CMB

    NASA Astrophysics Data System (ADS)

    Zablocki, Alan; Dodelson, Scott

    2016-04-01

    We apply the Karhunen-Loève methods to cosmic microwave background (CMB) data sets, and show that we can recover the input cosmology and obtain the marginalized likelihoods in Λ cold dark matter cosmologies in under a minute, much faster than Markov chain Monte Carlo methods. This is achieved by forming a linear combination of the power spectra at each multipole l, and solving a system of simultaneous equations such that the Fisher matrix is locally unchanged. Instead of carrying out a full likelihood evaluation over the whole parameter space, we need to evaluate the likelihood only for the parameter of interest, with the data compression effectively marginalizing over all other parameters. The weighting vectors contain insight about the physical effects of the parameters on the CMB anisotropy power spectrum Cl. The shape and amplitude of these vectors give an intuitive feel for the physics of the CMB, the sensitivity of the observed spectrum to cosmological parameters, and the relative sensitivity of different experiments to cosmological parameters. We test this method on exact theory Cl as well as on a Wilkinson Microwave Anisotropy Probe (WMAP)-like CMB data set generated from a random realization of a fiducial cosmology, comparing the compression results to those from a full likelihood analysis using CosmoMC. After showing that the method works, we apply it to the temperature power spectrum from the WMAP seven-year data release, and discuss the successes and limitations of our method as applied to a real data set.
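    The weight-vector idea can be sketched in a generic, MOPED-style form (not the paper's exact per-multipole construction): one weight vector per parameter, built from the data covariance and the derivative of the mean spectrum, compresses the data while approximately preserving the Fisher information. The toy spectrum, covariance and parameters below are hypothetical.

```python
# Generic sketch of Karhunen-Loeve/MOPED-style data compression:
# b_a = C^{-1} dmu/dtheta_a gives one compressed number per parameter.
import numpy as np

def compression_weights(cov, dmu_dtheta):
    """One (unnormalized) weight vector per parameter."""
    cinv = np.linalg.inv(cov)
    return [cinv @ dmu for dmu in dmu_dtheta]

# Hypothetical toy spectrum whose mean depends on two parameters
ell = np.arange(2, 100)
def mean_spectrum(a, b):
    return a * ell**-2.0 + b * ell**-1.0

a0, b0, eps = 1.0, 0.5, 1e-4
dmu = [(mean_spectrum(a0 + eps, b0) - mean_spectrum(a0 - eps, b0)) / (2 * eps),
       (mean_spectrum(a0, b0 + eps) - mean_spectrum(a0, b0 - eps)) / (2 * eps)]
cov = np.diag(0.01 * mean_spectrum(a0, b0)**2)     # toy diagonal covariance

weights = compression_weights(cov, dmu)
data = mean_spectrum(a0, b0) + np.random.default_rng(3).multivariate_normal(
    np.zeros(ell.size), cov)
compressed = [w @ data for w in weights]           # two numbers replace 98
print(compressed)
```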

  10. jTraML: an open source Java API for TraML, the PSI standard for sharing SRM transitions.

    PubMed

    Helsens, Kenny; Brusniak, Mi-Youn; Deutsch, Eric; Moritz, Robert L; Martens, Lennart

    2011-11-04

    We here present jTraML, a Java API for the Proteomics Standards Initiative TraML data standard. The library provides fully functional classes for all elements specified in the TraML XSD document, as well as convenient methods to construct controlled vocabulary-based instances required to define SRM transitions. The use of jTraML is demonstrated via a two-way conversion tool between TraML documents and vendor specific files, facilitating the adoption process of this new community standard. The library is released as open source under the permissive Apache2 license and can be downloaded from http://jtraml.googlecode.com . TraML files can also be converted online at http://iomics.ugent.be/jtraml .

  11. Robust Multipoint Water-Fat Separation Using Fat Likelihood Analysis

    PubMed Central

    Yu, Huanzhou; Reeder, Scott B.; Shimakawa, Ann; McKenzie, Charles A.; Brittain, Jean H.

    2016-01-01

    Fat suppression is an essential part of routine MRI scanning. Multiecho chemical-shift based water-fat separation methods estimate and correct for B0 field inhomogeneity. However, they must contend with the intrinsic challenge of water-fat ambiguity that can result in water-fat swapping. This problem arises because the signals from two chemical species, when both are modeled as a single discrete spectral peak, may appear indistinguishable in the presence of B0 off-resonance. In conventional methods, the water-fat ambiguity is typically removed by enforcing field map smoothness using region-growing-based algorithms. In reality, the fat spectrum has multiple spectral peaks. Using this spectral complexity, we introduce a novel concept that identifies water and fat for multiecho acquisitions by exploiting the spectral differences between water and fat. A fat likelihood map is produced to indicate if a pixel is likely to be water-dominant or fat-dominant by comparing the fitting residuals of two different signal models. The fat likelihood analysis and field map smoothness provide complementary information, and we designed an algorithm (Fat Likelihood Analysis for Multiecho Signals) to exploit both mechanisms. It is demonstrated in a wide variety of data that the Fat Likelihood Analysis for Multiecho Signals algorithm offers highly robust water-fat separation for 6-echo acquisitions, particularly in some previously challenging applications. PMID:21842498

  12. Phylogenetic position of Loricifera inferred from nearly complete 18S and 28S rRNA gene sequences.

    PubMed

    Yamasaki, Hiroshi; Fujimoto, Shinta; Miyazaki, Katsumi

    2015-01-01

    Loricifera is an enigmatic metazoan phylum; its morphology appeared to place it with Priapulida and Kinorhyncha in the group Scalidophora which, along with Nematoida (Nematoda and Nematomorpha), comprised the group Cycloneuralia. Scarce molecular data have suggested an alternative phylogenetic hypothesis, that the phylum Loricifera is a sister taxon to Nematomorpha, although the actual phylogenetic position of the phylum remains unclear. Ecdysozoan phylogeny was reconstructed through maximum-likelihood (ML) and Bayesian inference (BI) analyses of nuclear 18S and 28S rRNA gene sequences from 60 species representing all eight ecdysozoan phyla, and including a newly collected loriciferan species. Ecdysozoa comprised two clades with high support values in both the ML and BI trees. One consisted of Priapulida and Kinorhyncha, and the other of Loricifera, Nematoida, and Panarthropoda (Tardigrada, Onychophora, and Arthropoda). The relationships between Loricifera, Nematoida, and Panarthropoda were not well resolved. Loricifera appears to be closely related to Nematoida and Panarthropoda, rather than grouping with Priapulida and Kinorhyncha, as had been suggested by previous studies. Thus, Scalidophora and Cycloneuralia are both polyphyletic or paraphyletic groups. In addition, Loricifera and Nematomorpha did not emerge as sister groups.

  13. A generalized gamma mixture model for ultrasonic tissue characterization.

    PubMed

    Vegas-Sanchez-Ferrero, Gonzalo; Aja-Fernandez, Santiago; Palencia, Cesar; Martin-Fernandez, Marcos

    2012-01-01

    Several statistical models have been proposed in the literature to describe the behavior of speckle. Among them, the Nakagami distribution has proven to characterize the speckle behavior in tissues very accurately. However, it fails to describe the heavier tails caused by the impulsive response of a speckle. The Generalized Gamma (GG) distribution (which also generalizes the Nakagami distribution) was proposed to overcome these limitations. Despite the advantages of the distribution in terms of goodness of fit, its main drawback is the lack of closed-form maximum likelihood (ML) estimates. Thus, the calculation of its parameters becomes difficult and unattractive. In this work, we propose (1) a simple but robust methodology to estimate the ML parameters of GG distributions and (2) a Generalized Gamma Mixture Model (GGMM). Such mixture models are of great value in ultrasound imaging when the received signal arises from tissues of differing natures. We show that a better speckle characterization is achieved when using the GG distribution and the GGMM rather than other state-of-the-art distributions and mixture models. Results showed the better performance of the GG distribution in characterizing the speckle of blood and myocardial tissue in ultrasonic images.
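    Because no closed-form ML estimators exist for the GG family, the parameters must be obtained numerically. The sketch below uses SciPy's generic numerical MLE on a synthetic sample; it illustrates the estimation problem only and is not the authors' dedicated procedure or the mixture model.

```python
# Minimal sketch: numerical ML fitting of the generalized gamma distribution.
import numpy as np
from scipy import stats

samples = stats.gengamma.rvs(a=2.0, c=1.5, scale=0.8, size=5000, random_state=4)

# floc=0 pins the location parameter, as is usual for amplitude/speckle data
a_hat, c_hat, loc_hat, scale_hat = stats.gengamma.fit(samples, floc=0)
print(f"a={a_hat:.3f}, c={c_hat:.3f}, scale={scale_hat:.3f}")
```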

  14. A Generalized Gamma Mixture Model for Ultrasonic Tissue Characterization

    PubMed Central

    Palencia, Cesar; Martin-Fernandez, Marcos

    2012-01-01

    Several statistical models have been proposed in the literature to describe the behavior of speckle. Among them, the Nakagami distribution has proven to characterize the speckle behavior in tissues very accurately. However, it fails to describe the heavier tails caused by the impulsive response of a speckle. The Generalized Gamma (GG) distribution (which also generalizes the Nakagami distribution) was proposed to overcome these limitations. Despite the advantages of the distribution in terms of goodness of fit, its main drawback is the lack of closed-form maximum likelihood (ML) estimates. Thus, the calculation of its parameters becomes difficult and unattractive. In this work, we propose (1) a simple but robust methodology to estimate the ML parameters of GG distributions and (2) a Generalized Gamma Mixture Model (GGMM). Such mixture models are of great value in ultrasound imaging when the received signal arises from tissues of differing natures. We show that a better speckle characterization is achieved when using the GG distribution and the GGMM rather than other state-of-the-art distributions and mixture models. Results showed the better performance of the GG distribution in characterizing the speckle of blood and myocardial tissue in ultrasonic images. PMID:23424602

  15. Measurement of limb volume: laser scanning versus volume displacement.

    PubMed

    McKinnon, John Gregory; Wong, Vanessa; Temple, Walley J; Galbraith, Callum; Ferry, Paul; Clynch, George S; Clynch, Colin

    2007-10-01

    Determining the prevalence and treatment success of surgical lymphedema requires accurate and reproducible measurement. A new method of measurement of limb volume is described. A series of inanimate objects of known and unknown volume was measured using digital laser scanning and water displacement. A similar comparison was made with 10 human volunteers. Digital scanning was evaluated by comparison to the established method of water displacement, then to itself to determine reproducibility of measurement. (1) Objects of known volume: Laser scanning accurately measured the calculated volume but water displacement became less accurate as the size of the object increased. (2) Objects of unknown volume: As average volume increased, there was an increasing bias of underestimation of volume by the water displacement method. The coefficient of reproducibility of water displacement was 83.44 ml. In contrast, the reproducibility of the digital scanning method was 19.0 ml. (3) Human data: The mean difference between water displacement volume and laser scanning volume was 151.7 ml (SD +/- 189.5). The coefficient of reproducibility of water displacement was 450.8 ml whereas for laser scanning it was 174 ml. Laser scanning is an innovative method of measuring tissue volume that combines precision and reproducibility and may have clinical utility for measuring lymphedema. © 2007 Wiley-Liss, Inc.

  16. Face mask ventilation in edentulous patients: a comparison of mandibular groove and lower lip placement.

    PubMed

    Racine, Stéphane X; Solis, Audrey; Hamou, Nora Ait; Letoumelin, Philippe; Hepner, David L; Beloucif, Sadek; Baillard, Christophe

    2010-05-01

    In edentulous patients, it may be difficult to perform face mask ventilation because of inadequate seal with air leaks. Our aim was to ascertain whether the "lower lip" face mask placement, as a new face mask ventilation method, is more effective at reducing air leaks than the standard face mask placement. Forty-nine edentulous patients with inadequate seal and air leak during two-hand positive-pressure ventilation using the ventilator circle system were prospectively evaluated. In the presence of air leaks, defined as a difference of at least 33% between inspired and expired tidal volumes, the mask was placed in a lower lip position by repositioning the caudal end of the mask above the lower lip while maintaining the head in extension. The results are expressed as mean +/- SD or median (25th-75th percentiles). Patient characteristics included age (71 +/- 11 yr) and body mass index (24 +/- 4 kg/m2). By using the standard method, the median inspired and expired tidal volumes were 450 ml (400-500 ml) and 0 ml (0-50 ml), respectively, and the median air leak was 400 ml (365-485 ml). After placing the mask in the lower lip position, the median expired tidal volume increased to 400 ml (380-490), and the median air leak decreased to 10 ml (0-20 ml) (P < 0.001 vs. standard method). The lower lip face mask placement with two hands reduced the air leak by 95% (80-100%). In edentulous patients with inadequate face mask ventilation, the lower lip face mask placement with two hands markedly reduced the air leak and improved ventilation.

  17. In vitro activities of dalbavancin and nine comparator agents against anaerobic gram-positive species and corynebacteria.

    PubMed

    Goldstein, Ellie J C; Citron, Diane M; Merriam, C Vreni; Warren, Yumi; Tyrrell, Kerin; Fernandez, Helen T

    2003-06-01

    Dalbavancin is a novel semisynthetic glycopeptide with enhanced activity against gram-positive species. Its comparative in vitro activities and those of nine comparator agents, including daptomycin, vancomycin, linezolid, and quinupristin-dalfopristin, against 290 recent gram-positive clinical isolates, as determined by the NCCLS agar dilution method, were studied. The MICs of dalbavancin at which 90% of the isolates tested were inhibited (MIC90s) were as follows: Actinomyces spp., 0.5 microg/ml; Clostridium clostridioforme, 8 microg/ml; C. difficile, 0.25 microg/ml; C. innocuum, 0.25 microg/ml; C. perfringens, 0.125 microg/ml; C. ramosum, 1 microg/ml; Eubacterium spp., 1 microg/ml; Lactobacillus spp., >32 microg/ml; Propionibacterium spp., 0.5 microg/ml; and Peptostreptococcus spp., 0.25 microg/ml. Dalbavancin was 1 to 3 dilutions more active than vancomycin against most strains. Dalbavancin exhibited excellent activity against the gram-positive strains tested and warrants clinical evaluation.

  18. Comparative activity of several beta-lactam antibiotics against anaerobes determined by two methods.

    PubMed

    Zabransky, R J; Birk, R J

    1987-01-01

    The susceptibility of 120 strains of several species of anaerobes to a number of second and third generation beta-lactam antibiotics was determined by the National Committee for Clinical Laboratory Standards reference agar dilution and microdilution methods. The antibiotics tested were cefoperazone, cefotaxime, cefotetan, ceftizoxime, cefoxitin, and imipenem. The MIC50s ranged from 0.125 to 16 micrograms/ml. The MIC90s were lowest with imipenem at 0.5 micrograms/ml, followed by cefoxitin at 32 micrograms/ml; they were highest with cefotetan at 128 micrograms/ml and were 64 micrograms/ml with the others. In vitro drug activity varied with the antibiotic, the organism, the method used, and the breakpoint selected. Rates of resistance varied considerably between the taxonomic groups of organisms tested and also among species within a group. Overall, reproducibility with the agar dilution method ranged from 44% to 85%; testing with ceftizoxime was the least reproducible. Microdilution results agreed within +/- 1 dilution of the agar dilution mode 79% to 95% of the time, with some variation between drugs and organisms tested. Because there were distinct differences in the activity of some drugs against certain species, no antibiotic can substitute for others in in vitro testing.

  19. Validated enantiospecific LC method for determination of (R)-enantiomer impurity in (S)-efavirenz.

    PubMed

    Seshachalam, U; Narasimha Rao, D V L; Chandrasekhar, K B

    2008-02-01

    A high-performance liquid chromatographic method was developed for separation of the enantiomers of efavirenz. The developed method was applied to the determination of the (R)-enantiomer in (S)-efavirenz and satisfactory results were achieved. Baseline separation with a resolution of more than 4.0 was achieved on a Chiralcel OD (250 mm x 4.6 mm, 10 microm) column containing tris-(3,5-dimethylphenylcarbamate) as stationary phase. The mobile phase consisted of n-hexane:isopropyl alcohol (80:20 v/v) with 0.1% (v/v) formic acid as additive. The flow rate was kept at 1.0 ml/min and UV detection was monitored at 254 nm. The (R)-enantiomer response was found to be linear over the range of 0.1-6 microg/ml. The limit of detection (LOD) was 0.03 microg/ml and the limit of quantification (LOQ) was 0.1 microg/ml (n=3). The precision of the (R)-enantiomer at the LOQ level was evaluated through six replicate injections, and the RSD of the peak response was 1.34%. The results demonstrated that the developed LC method was simple, precise, robust and applicable for the purity determination of efavirenz.

  20. A multi-valued neutrosophic qualitative flexible approach based on likelihood for multi-criteria decision-making problems

    NASA Astrophysics Data System (ADS)

    Peng, Juan-juan; Wang, Jian-qiang; Yang, Wu-E.

    2017-01-01

    In this paper, multi-criteria decision-making (MCDM) problems based on the qualitative flexible multiple criteria method (QUALIFLEX), in which the criteria values are expressed by multi-valued neutrosophic information, are investigated. First, multi-valued neutrosophic sets (MVNSs), which allow the truth-membership function, indeterminacy-membership function and falsity-membership function to have a set of crisp values between zero and one, are introduced. Then the likelihood of multi-valued neutrosophic number (MVNN) preference relations is defined and the corresponding properties are also discussed. Finally, an extended QUALIFLEX approach based on likelihood is explored to solve MCDM problems where the assessments of alternatives are in the form of MVNNs; furthermore, an example is provided to illustrate the application of the proposed method, together with a comparison analysis.

  1. Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowell, A. W.; Boggs, S. E; Chiu, C. L.

    2017-10-20

    Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ∼21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.
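    The unbinned-likelihood idea can be sketched for an idealized polarimeter: each event's azimuthal scattering angle enters the likelihood of the modulation curve directly, with no histogramming. The modulation factor and event sample below are hypothetical, and instrument response effects are ignored.

```python
# Sketch of unbinned ML polarimetry for an idealized Compton polarimeter:
# f(phi) ∝ 1 + p * mu * cos(2 (phi - phi0)), fit for (p, phi0) event by event.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
mu100 = 0.4                                  # assumed 100%-polarized modulation
p_true, phi0_true = 0.6, 0.3                 # polarization fraction and angle

# Draw azimuthal angles by rejection sampling from the modulation curve
phi = rng.uniform(0, 2 * np.pi, 200000)
keep = rng.uniform(0, 1 + mu100, phi.size) < 1 + p_true * mu100 * np.cos(
    2 * (phi - phi0_true))
phi = phi[keep]

def nll(theta):
    p, phi0 = theta
    f = (1 + p * mu100 * np.cos(2 * (phi - phi0))) / (2 * np.pi)
    return -np.sum(np.log(f))

fit = minimize(nll, x0=[0.3, 0.0], bounds=[(0, 1), (-np.pi, np.pi)])
print("p, phi0 =", fit.x)
```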

  2. Estimation of parameters of dose volume models and their confidence limits

    NASA Astrophysics Data System (ADS)

    van Luijk, P.; Delvigne, T. C.; Schilstra, C.; Schippers, J. M.

    2003-07-01

    Predictions of the normal-tissue complication probability (NTCP) for the ranking of treatment plans are based on fits of dose-volume models to clinical and/or experimental data. In the literature several different fit methods are used. In this work, frequently used methods and techniques to fit NTCP models to dose-response data for establishing dose-volume effects are discussed. The techniques are tested for their usability with dose-volume data and NTCP models. Different methods to estimate the confidence intervals of the model parameters are part of this study. From a critical-volume (CV) model with biologically realistic parameters, a primary dataset describable by the NTCP model was generated, serving as the reference for this study. The CV model was fitted to this dataset. From the resulting parameters and the CV model, 1000 secondary datasets were generated by Monte Carlo simulation. All secondary datasets were fitted to obtain 1000 parameter sets of the CV model. Thus the 'real' spread in fit results due to statistical spread in the data was obtained and compared with estimates of the confidence intervals obtained by different methods applied to the primary dataset. The confidence limits of the parameters of one dataset were estimated using several methods: from the covariance matrix, with the jackknife method, and directly from the likelihood landscape. These results were compared with the spread of the parameters obtained from the secondary parameter sets. For the estimation of confidence intervals on NTCP predictions, three methods were tested. Firstly, propagation of errors using the covariance matrix was used. Secondly, the width of a bundle of curves generated from parameter sets lying within the one-standard-deviation region of the likelihood space was investigated. Thirdly, many parameter sets and their likelihoods were used to create a likelihood-weighted probability distribution of the NTCP. It is concluded that for the type of dose-response data used here, only a full likelihood analysis will produce reliable results. The often-used approximations, such as the usage of the covariance matrix, produce inconsistent confidence limits on both the parameter sets and the resulting NTCP values.
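    The 'full likelihood analysis' recommended here typically means profiling: re-optimize the nuisance parameters at each value of the parameter of interest and read confidence limits directly off the likelihood landscape. The sketch below does this for a hypothetical logistic dose-response dataset; it illustrates the principle rather than reproducing the CV-model study.

```python
# Sketch of a profile-likelihood confidence interval for a dose-response
# parameter D50, with the slope k treated as a nuisance parameter.
import numpy as np
from scipy.optimize import minimize, minimize_scalar
from scipy.stats import chi2

dose = np.array([20, 30, 40, 50, 60, 70, 80.0])
n = np.full(7, 20)
resp = np.array([0, 1, 3, 9, 14, 18, 20])        # hypothetical complications

def nll(d50, k):
    p = 1 / (1 + np.exp(-k * (dose - d50)))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(resp * np.log(p) + (n - resp) * np.log(1 - p))

full = minimize(lambda t: nll(*t), x0=[50.0, 0.2])
nll_min = full.fun

def profile(d50):                                 # re-optimize nuisance k
    return minimize_scalar(lambda k: nll(d50, k), bounds=(1e-3, 2.0),
                           method="bounded").fun

# 95% CI: profile NLL rises by chi2(1).ppf(0.95)/2 above the minimum
thresh = nll_min + chi2.ppf(0.95, df=1) / 2
grid = np.linspace(40, 60, 201)
inside = [d for d in grid if profile(d) <= thresh]
print("D50 = %.1f, 95%% profile CI = (%.1f, %.1f)"
      % (full.x[0], min(inside), max(inside)))
```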

  3. Incorrect likelihood methods were used to infer scaling laws of marine predator search behaviour.

    PubMed

    Edwards, Andrew M; Freeman, Mervyn P; Breed, Greg A; Jonsen, Ian D

    2012-01-01

    Ecologists are collecting extensive data concerning movements of animals in marine ecosystems. Such data need to be analysed with valid statistical methods to yield meaningful conclusions. We demonstrate methodological issues in two recent studies that reached similar conclusions concerning movements of marine animals (Nature 451:1098; Science 332:1551). The first study analysed vertical movement data to conclude that diverse marine predators (Atlantic cod, basking sharks, bigeye tuna, leatherback turtles and Magellanic penguins) exhibited "Lévy-walk-like behaviour", close to a hypothesised optimal foraging strategy. By reproducing the original results for the bigeye tuna data, we show that the likelihood of tested models was calculated from residuals of regression fits (an incorrect method), rather than from the likelihood equations of the actual probability distributions being tested. This resulted in erroneous Akaike Information Criteria, and the testing of models that do not correspond to valid probability distributions. We demonstrate how this led to overwhelming support for a model that has no biological justification and that is statistically spurious because its probability density function goes negative. Re-analysis of the bigeye tuna data, using standard likelihood methods, overturns the original result and conclusion for that data set. The second study observed Lévy walk movement patterns by mussels. We demonstrate several issues concerning the likelihood calculations (including the aforementioned residuals issue). Re-analysis of the data rejects the original Lévy walk conclusion. We consequently question the claimed existence of scaling laws of the search behaviour of marine predators and mussels, since such conclusions were reached using incorrect methods. We discourage the suggested potential use of "Lévy-like walks" when modelling consequences of fishing and climate change, and caution that any resulting advice to managers of marine ecosystems would be problematic. For reproducibility and future work we provide R source code for all calculations.
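    The correction at the heart of this critique is simple: a model's likelihood must be computed from its own probability density, not from the residuals of a regression fit. The sketch below compares two candidate step-length distributions by proper MLE and AIC on simulated data, using the standard closed-form estimators for each family.

```python
# Sketch of correct likelihood-based model comparison for step-length data:
# log-likelihoods come from the actual densities, then AICs are compared.
import numpy as np

rng = np.random.default_rng(6)
xmin = 1.0
x = xmin + rng.exponential(scale=2.0, size=2000)      # true model: exponential

# Exponential above xmin: f(x) = lam * exp(-lam*(x - xmin)); MLE lam = 1/mean
lam = 1.0 / np.mean(x - xmin)
loglik_exp = np.sum(np.log(lam) - lam * (x - xmin))

# Bounded-below power law: f(x) = (a-1)/xmin * (x/xmin)**(-a); closed-form MLE
a = 1.0 + x.size / np.sum(np.log(x / xmin))
loglik_pl = np.sum(np.log((a - 1) / xmin) - a * np.log(x / xmin))

for name, ll in [("exponential", loglik_exp), ("power law", loglik_pl)]:
    print(name, "AIC =", 2 * 1 - 2 * ll)              # each model: 1 parameter
```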

  4. Evidence and Clinical Trials.

    NASA Astrophysics Data System (ADS)

    Goodman, Steven N.

    1989-11-01

    This dissertation explores the use of a mathematical measure of statistical evidence, the log likelihood ratio, in clinical trials. The methods and thinking behind the use of an evidential measure are contrasted with traditional methods of analyzing data, which depend primarily on a p-value as an estimate of the statistical strength of an observed data pattern. It is contended that neither the behavioral dictates of Neyman-Pearson hypothesis testing methods, nor the coherency dictates of Bayesian methods are realistic models on which to base inference. The use of the likelihood alone is applied to four aspects of trial design or conduct: the calculation of sample size, the monitoring of data, testing for the equivalence of two treatments, and meta-analysis--the combining of results from different trials. Finally, a more general model of statistical inference, using belief functions, is used to see if it is possible to separate the assessment of evidence from our background knowledge. It is shown that traditional and Bayesian methods can be modeled as two ends of a continuum of structured background knowledge: likelihood methods summarize evidence at the point of maximum likelihood, assuming no structure, while Bayesian methods assume complete knowledge. Both schools are seen to be missing a concept of ignorance--uncommitted belief. This concept provides the key to understanding the problem of sampling to a foregone conclusion and the role of frequency properties in statistical inference. The conclusion is that statistical evidence cannot be defined independently of background knowledge, and that the frequency properties of an estimator are an indirect measure of uncommitted belief. Several likelihood summaries need to be used in clinical trials, with the quantitative disparity between summaries being an indirect measure of our ignorance. This conclusion is linked with parallel ideas in the philosophy of science and cognitive psychology.

  5. SMURC: High-Dimension Small-Sample Multivariate Regression With Covariance Estimation.

    PubMed

    Bayar, Belhassen; Bouaynaya, Nidhal; Shterenberg, Roman

    2017-03-01

    We consider a high-dimension low sample-size multivariate regression problem that accounts for correlation of the response variables. The system is underdetermined as there are more parameters than samples. We show that the maximum likelihood approach with covariance estimation is senseless because the likelihood diverges. We subsequently propose a normalization of the likelihood function that guarantees convergence. We call this method small-sample multivariate regression with covariance (SMURC) estimation. We derive an optimization problem and its convex approximation to compute SMURC. Simulation results show that the proposed algorithm outperforms the regularized likelihood estimator with known covariance matrix and the sparse conditional Gaussian graphical model. We also apply SMURC to the inference of the wing-muscle gene network of the Drosophila melanogaster (fruit fly).

  6. White Gaussian Noise - Models for Engineers

    NASA Astrophysics Data System (ADS)

    Jondral, Friedrich K.

    2018-04-01

    This paper assembles some information about white Gaussian noise (WGN) and its applications. It starts from a description of thermal noise, i.e., the irregular motion of free charge carriers in electronic devices. In a second step, mathematical models of WGN processes and their most important parameters, especially autocorrelation functions and power spectral densities, are introduced. In order to proceed from mathematical models to simulations, we discuss the generation of normally distributed random numbers. The signal-to-noise ratio, the most important quality measure used in communications, control and measurement technology, is then carefully introduced. As a practical application of WGN, the transmission of quadrature amplitude modulated (QAM) signals over additive WGN channels together with the optimum maximum likelihood (ML) detector is considered in a demonstrative and intuitive way.
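    The closing example can be made concrete: for equiprobable symbols in additive WGN, the optimum ML detector reduces to choosing the nearest constellation point. The sketch below simulates 4-QAM over an AWGN channel at an assumed SNR and measures the symbol error rate; all parameters are illustrative.

```python
# Sketch: 4-QAM (QPSK) over an AWGN channel with minimum-distance ML detection.
import numpy as np

rng = np.random.default_rng(7)
const = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)  # unit energy

n_sym, snr_db = 100000, 8
tx_idx = rng.integers(0, 4, n_sym)
tx = const[tx_idx]

sigma2 = 10 ** (-snr_db / 10)                     # noise power for unit Es
noise = np.sqrt(sigma2 / 2) * (rng.normal(size=n_sym)
                               + 1j * rng.normal(size=n_sym))
rx = tx + noise

# ML detection = nearest constellation point under white Gaussian noise
rx_idx = np.argmin(np.abs(rx[:, None] - const[None, :]), axis=1)
print("symbol error rate:", np.mean(rx_idx != tx_idx))
```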

  7. Is Mistletoe Treatment Beneficial in Invasive Breast Cancer? A New Approach to an Unresolved Problem.

    PubMed

    Fritz, Peter; Dippon, Jürgen; Müller, Simon; Goletz, Sven; Trautmann, Christian; Pappas, Xenophon; Ott, German; Brauch, Hiltrud; Schwab, Matthias; Winter, Stefan; Mürdter, Thomas; Brinkmann, Friedhelm; Faisst, Simone; Rössle, Susanne; Gerteis, Andreas; Friedel, Godehard

    2018-03-01

    In this retrospective study, we compared breast cancer patients treated with and without mistletoe lectin I (ML-I) in addition to standard breast cancer treatment in order to determine a possible effect of this complementary treatment. This study included 18,528 patients with invasive breast cancer. Data on additional ML-I treatments were reported for 164 patients. We developed a "similar case" method with a distance measure retrieved from the beta variable in Cox regression to compare these patients, after stage adjustment, with their non-ML-I-treated counterparts in order to answer three hypotheses concerning overall survival, recurrence-free survival and quality of life. Raw data analysis of an additional ML-I treatment yielded a worse outcome (p=0.02) for patients with ML-I treatment, possibly due to a bias inherent in the ML-I-treated patients. Using the "similar case" method (a case-based reasoning approach) we could not confirm this harm for patients using ML-I. Analysis of quality-of-life data did not demonstrate reliable differences between patients treated with ML-I and those without proven ML-I treatment. Based on a "similar case" model we did not observe any differences in the overall survival (OS), recurrence-free survival (RFS), and quality of life data between breast cancer patients with standard treatment and those who in addition to standard treatment received ML-I treatment. Copyright © 2018, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.

  8. Probabilistic classification method on multi wavelength chromatographic data for photosynthetic pigments identification

    NASA Astrophysics Data System (ADS)

    Prilianti, K. R.; Setiawan, Y.; Indriatmoko, Adhiwibawa, M. A. S.; Limantara, L.; Brotosudarmo, T. H. P.

    2014-02-01

    Environmental and health problems caused by artificial colorants encourage the increasing use of natural colorants nowadays. Natural colorants are those derived from living organisms or minerals. Extensive research has been devoted to exploiting these colorants, but recent data show that only 0.5% of the wide range of plant pigments on earth has been exhaustively used. Hence, the development of pigment characterization techniques is an important consideration. High-performance liquid chromatography (HPLC) is a widely used technique to separate pigments in a mixture and identify them. In conventional HPLC fingerprinting, pigment characterization was based on a single chromatogram at a fixed wavelength (one dimensional), discarding the information contained at other wavelengths. Therefore, two-dimensional fingerprints have been proposed to use more of the chromatographic information. Unfortunately, this approach leads to data-processing problems due to the size of the data matrix. The other common problem in chromatogram analysis is the subjectivity of the researcher in recognizing chromatogram patterns. In this research, an automated analysis method for multi-wavelength chromatographic data is proposed. Principal component analysis (PCA) was used to compress the data matrix and maximum likelihood (ML) classification was applied to identify the chromatogram patterns of the pigments present in a mixture. Three photosynthetic pigments were selected to demonstrate the proposed method: β-carotene, fucoxanthin and zeaxanthin. The results suggest that the method reliably indicates the presence of these pigments in a particular mixture. A simple computer application was also developed to facilitate real-time analysis. The input of the application is the multi-wavelength chromatographic data matrix and the output is information about the presence of the three pigments.
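    The proposed pipeline maps onto standard tools: PCA for compression, then a Gaussian maximum-likelihood classifier (quadratic discriminant analysis is exactly ML classification under per-class Gaussian models). The sketch below substitutes simulated stand-ins for the chromatographic matrices; the class structure and dimensions are hypothetical.

```python
# Sketch: PCA compression followed by Gaussian ML classification (QDA).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(8)
n_per_class, n_features = 60, 500            # flattened time x wavelength matrix
means = [rng.normal(0, 1, n_features) for _ in range(3)]   # 3 pigment classes
X = np.vstack([m + rng.normal(0, 0.5, (n_per_class, n_features)) for m in means])
y = np.repeat([0, 1, 2], n_per_class)        # beta-carotene, fucoxanthin, zeaxanthin

X_red = PCA(n_components=5).fit_transform(X) # compress before ML classification
clf = QuadraticDiscriminantAnalysis().fit(X_red, y)
print("training accuracy:", clf.score(X_red, y))
```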

  9. Y-90 SPECT ML image reconstruction with a new model for tissue-dependent bremsstrahlung production using CT information: a proof-of-concept study

    NASA Astrophysics Data System (ADS)

    Lim, Hongki; Fessler, Jeffrey A.; Wilderman, Scott J.; Brooks, Allen F.; Dewaraja, Yuni K.

    2018-06-01

    While the yield of positrons used in Y-90 PET is independent of tissue media, Y-90 SPECT imaging is complicated by the tissue dependence of bremsstrahlung photon generation. The probability of bremsstrahlung production is proportional to the square of the atomic number of the medium. Hence, the same amount of activity in different tissue regions of the body will produce different numbers of bremsstrahlung photons. Existing reconstruction methods disregard this tissue-dependency, potentially impacting both qualitative and quantitative imaging of heterogeneous regions of the body such as bone with marrow cavities. In this proof-of-concept study, we propose a new maximum-likelihood method that incorporates bremsstrahlung generation probabilities into the system matrix, enabling images of the desired Y-90 distribution to be reconstructed instead of the ‘bremsstrahlung distribution’ that is obtained with existing methods. The tissue-dependent probabilities are generated by Monte Carlo simulation while bone volume fractions for each SPECT voxel are obtained from co-registered CT. First, we demonstrate the tissue dependency in a SPECT/CT imaging experiment with Y-90 in bone equivalent solution and water. Visually, the proposed reconstruction approach better matched the true image and the Y-90 PET image than the standard bremsstrahlung reconstruction approach. An XCAT phantom simulation including bone and marrow regions also demonstrated better agreement with the true image using the proposed reconstruction method. Quantitatively, compared with the standard reconstruction, the new method improved estimation of the liquid bone:water activity concentration ratio by 40% in the SPECT measurement and the cortical bone:marrow activity concentration ratio by 58% in the XCAT simulation.
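    The core modification can be sketched in one dimension: scale each voxel's column of the system matrix by a tissue-dependent bremsstrahlung-production probability (derived from CT in the paper, hard-coded here), then run the usual MLEM update. All numbers below are illustrative, not the paper's Monte Carlo values.

```python
# 1-D toy sketch of MLEM with a tissue-dependent system matrix, so the
# reconstruction estimates activity rather than "bremsstrahlung density".
import numpy as np

rng = np.random.default_rng(9)
n_vox, n_det = 32, 48
A_geom = rng.uniform(0, 1, (n_det, n_vox))              # geometric system matrix
brem_prob = np.where(np.arange(n_vox) < 16, 1.0, 1.8)   # water vs bone-like voxels

A = A_geom * brem_prob[None, :]              # tissue-dependent system matrix
activity_true = np.zeros(n_vox)
activity_true[8:24] = 5.0
counts = rng.poisson(A @ activity_true)

x = np.ones(n_vox)                           # MLEM iterations
sens = A.sum(axis=0)
for _ in range(200):
    proj = A @ x
    x *= (A.T @ (counts / np.maximum(proj, 1e-12))) / sens
print("reconstructed activity:", np.round(x, 2))
```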

  10. Outcomes of Multiple Listing for Adult Heart Transplantation in the United States: Analysis of OPTN Data From 2000 to 2013.

    PubMed

    Givens, Raymond C; Dardas, Todd; Clerkin, Kevin J; Restaino, Susan; Schulze, P Christian; Mancini, Donna M

    2015-12-01

    This study sought to assess the association of multiple listing with waitlist outcomes and post-heart transplant (HT) survival. HT candidates in the United States may register at multiple centers. Not all candidates have the resources and mobility needed for multiple listing; thus this policy may advantage wealthier and less sick patients. We identified 33,928 adult candidates for a first single-organ HT between January 1, 2000 and December 31, 2013 in the Organ Procurement and Transplantation Network database. We identified 679 multiple-listed (ML) candidates (2.0%) who were younger (median age, 53 years [interquartile range (IQR): 43 to 60 years] vs. 55 years [IQR: 45 to 61 years]; p < 0.0001), more often white (76.4% vs. 70.7%; p = 0.0010) and privately insured (65.5% vs. 56.3%; p < 0.0001), and lived in zip codes with higher median incomes (US$90,153 [IQR: US$25,471 to US$253,831] vs. US$68,986 [IQR: US$19,471 to US$219,702]; p = 0.0015). Likelihood of ML increased with the primary center's median waiting time. ML candidates had lower initial priority (39.0% 1A or 1B vs. 55.1%; p < 0.0001) and predicted 90-day waitlist mortality (2.9% [IQR: 2.3% to 4.7%] vs. 3.6% [IQR: 2.3% to 6.0]%; p < 0.0001), but were frequently upgraded at secondary centers (58.2% 1A/1B; p < 0.0001 vs. ML primary listing). ML candidates had a higher HT rate (74.4% vs. 70.2%; p = 0.0196) and lower waitlist mortality (8.1% vs. 12.2%; p = 0.0011). Compared with a propensity-matched cohort, the relative ML HT rate was 3.02 (95% confidence interval: 2.59 to 3.52; p < 0.0001). There were no post-HT survival differences. Multiple listing is a rational response to organ shortage but may advantage patients with the means to participate rather than the most medically needy. The multiple-listing policy should be overturned. Copyright © 2015 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  11. Extending the time window for endovascular procedures according to collateral pial circulation.

    PubMed

    Ribo, Marc; Flores, Alan; Rubiera, Marta; Pagola, Jorge; Sargento-Freitas, Joao; Rodriguez-Luna, David; Coscojuela, Pilar; Maisterra, Olga; Piñeiro, Socorro; Romero, Francisco J; Alvarez-Sabin, Jose; Molina, Carlos A

    2011-12-01

    Good collateral pial circulation (CPC) predicts a favorable outcome in patients undergoing intra-arterial procedures. We aimed to determine whether CPC status can be used to decide whether to pursue recanalization efforts. Pial collateral score (0-5) was determined on the initial angiogram. We considered CPC good when the pial collateral score was <3, defined total time of ischemia (TTI) as the onset-to-recanalization time, and defined clinical improvement as a >4-point decrease in National Institutes of Health Stroke Scale score between admission and discharge. We studied CPC in 61 patients (31 middle cerebral artery, 30 internal carotid artery). Good CPC patients (n=21 [34%]) had lower discharge National Institutes of Health Stroke Scale scores (7 versus 21; P=0.02) and smaller infarcts (56 mL versus 238 mL; P<0.001). In poor CPC patients, a receiver operating characteristic curve defined a TTI cutoff point of <300 minutes (sensitivity 67%, specificity 75%) that best predicted clinical improvement (TTI<300: 66.7% versus TTI>300: 25%; P=0.05). For good CPC patients, no temporal cutoff point could be defined. Although clinical improvement was similar for patients recanalizing within 300 minutes (poor CPC: 60% versus good CPC: 85.7%; P=0.35), beyond 300 minutes the likelihood of clinical improvement was markedly higher in good CPC patients (90.1% versus 23.1%; P=0.01). Similarly, infarct volume was reduced 7-fold in good as compared with poor CPC patients only when TTI>300 minutes (TTI<300: poor CPC: 145 mL versus good CPC: 93 mL; P=0.56; TTI>300: poor CPC: 217 mL versus good CPC: 33 mL; P<0.01). After adjusting for age and baseline National Institutes of Health Stroke Scale score, TTI<300 emerged as an independent predictor of clinical improvement in poor CPC patients (OR, 6.6; 95% CI, 1.01-44.3; P=0.05) but not in good CPC patients. In a logistic regression, good CPC independently predicted clinical improvement after adjusting for TTI, admission National Institutes of Health Stroke Scale score, and age (OR, 12.5; 95% CI, 1.6-74.8; P=0.016). Good CPC predicts a better clinical response to intra-arterial treatment beyond 5 hours from onset. In patients with stroke receiving endovascular treatment, identification of good CPC may help physicians decide whether to pursue recanalization efforts in late time windows.
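
    The TTI cutoff above comes from a standard receiver operating characteristic analysis. As a purely illustrative sketch (synthetic data, not the study's; scikit-learn assumed available), such a cutoff can be chosen by maximizing Youden's J statistic:

        import numpy as np
        from sklearn.metrics import roc_curve

        rng = np.random.default_rng(3)
        # Synthetic onset-to-recanalization times (minutes): improvers tend
        # to recanalize earlier than non-improvers in this toy example.
        tti = np.concatenate([rng.normal(240, 60, 40), rng.normal(360, 60, 40)])
        improved = np.concatenate([np.ones(40), np.zeros(40)])  # 1 = improved

        # Shorter TTI should predict improvement, so score with -tti.
        fpr, tpr, thresholds = roc_curve(improved, -tti)
        best = np.argmax(tpr - fpr)              # Youden's J = sens + spec - 1
        cutoff_minutes = -thresholds[best]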

  12. [Analysis of antibiotic diffusion from agarose gel by spectrophotometry and laser interferometry methods].

    PubMed

    Arabski, Michał; Wasik, Sławomir; Piskulak, Patrycja; Góźdź, Natalia; Slezak, Andrzej; Kaca, Wiesław

    2011-01-01

    The aim of this study was to analyze the release of antibiotics (ampicillin, streptomycin, ciprofloxacin, or colistin) from agarose gel by spectrophotometry and laser interferometry. The interferometric system consisted of a Mach-Zehnder interferometer with a He-Ne laser, a TV-CCD camera, a computerized data acquisition system, and a gel system. The gel system under study consisted of two cuvettes. We filled the lower cuvette with an aqueous 1% agarose solution containing the antibiotics at initial concentrations of 0.12-2 mg/ml for the spectrophotometric analysis or 0.05-0.5 mg/ml for the laser interferometry method, while the upper cuvette contained pure water. Diffusion was analyzed by both methods from 120 to 2400 s at time intervals of Δt = 120 s. We found that 0.25-1 mg/ml and 0.05 mg/ml were the minimal initial concentrations detectable by the spectrophotometric and laser interferometry methods, respectively. Additionally, we observed differences between the two methods in the measured kinetics of antibiotic diffusion from the gel. In conclusion, the laser interferometric method is a useful tool for studying antibiotic release from agarose gel, especially for substances that are not fully soluble in water, such as colistin.
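
    For orientation, the idealized one-dimensional picture behind such release measurements is Fickian diffusion across the gel-water interface. A minimal sketch, assuming a semi-infinite gel initially at uniform concentration c0, equal diffusivities in gel and water, and an illustrative diffusion coefficient (none of these values come from the paper):

        import numpy as np
        from scipy.special import erfc

        def concentration(x_m, t_s, c0_mg_ml, D_m2_s):
            """c(x, t) = (c0 / 2) * erfc(x / (2 * sqrt(D * t))) for x >= 0
            above the interface: the standard solution of Fick's second law
            for diffusion from a semi-infinite source into pure solvent."""
            return 0.5 * c0_mg_ml * erfc(x_m / (2.0 * np.sqrt(D_m2_s * t_s)))

        # Evaluate 1 mm above the gel at the interferometric time points.
        times_s = np.arange(120, 2401, 120)
        profile = concentration(1e-3, times_s, c0_mg_ml=0.5, D_m2_s=5e-10)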

  13. Measurement of phospholipids by hydrophilic interaction liquid chromatography coupled to tandem mass spectrometry: the determination of choline containing compounds in foods.

    PubMed

    Zhao, Yuan-Yuan; Xiong, Yeping; Curtis, Jonathan M

    2011-08-12

    A hydrophilic interaction liquid chromatography-tandem mass spectrometry (HILIC LC-MS/MS) method using multiple scan modes was developed to separate and quantify 11 compounds and lipid classes including acetylcholine (AcCho), betaine (Bet), choline (Cho), glycerophosphocholine (GPC), lysophosphatidylcholine (LPC), lysophosphatidylethanolamine (LPE), phosphatidylcholine (PC), phosphatidylethanolamine (PE), phosphatidylinositol (PI), phosphocholine (PCho) and sphingomyelin (SM). This includes all of the major choline-containing compounds found in foods. The method offers advantages over other LC methods since HILIC chromatography is readily compatible with electrospray ionization and results in higher sensitivity and improved peak shapes. The LC-MS/MS method allows quantification of all choline-containing compounds in a single run. Tests of method suitability indicated linear ranges of approximately 0.25-25 μg/ml for PI and PE, 0.5-50 μg/ml for PC, 0.05-5 μg/ml for SM and LPC, 0.5-25 μg/ml for LPE, 0.02-5 μg/ml for Cho, and 0.08-8 μg/ml for Bet. Accuracies of 83-105% with precisions of 1.6-13.2% RSD were achieved for standards over a wide range of concentrations, demonstrating that this method will be suitable for food analysis. Eight polar lipid classes were found in a lipid extract of egg yolk, and different species of the same class were differentiated based on their molecular weights and fragment-ion information. PC and PE were found to be the most abundant lipid classes, accounting for 71% and 18% of the total phospholipids in egg yolk, respectively. Copyright © 2011 Elsevier B.V. All rights reserved.
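
    Quantification against the linear ranges above amounts to fitting a calibration curve per compound and back-calculating unknowns. A minimal sketch for PC (the standard concentrations sit inside the reported 0.5-50 μg/ml range; the peak areas are invented for illustration):

        import numpy as np

        conc = np.array([0.5, 1.0, 5.0, 10.0, 25.0, 50.0])           # ug/ml standards
        area = np.array([1.1e3, 2.3e3, 1.2e4, 2.4e4, 5.9e4, 1.2e5])  # MRM peak areas

        slope, intercept = np.polyfit(conc, area, 1)    # ordinary least-squares line
        r_squared = np.corrcoef(conc, area)[0, 1] ** 2  # linearity check

        def quantify(sample_area):
            """Back-calculate the concentration of an unknown from its peak area."""
            return (sample_area - intercept) / slope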

  14. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics.

    PubMed

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-04-06

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image usually has poor contrast because of the nature of the imaging process: the image contains information from both in-focus and out-of-focus planes of the object, which degrades its quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm based on maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle and constructs the joint log-likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select the images of better quality, improving the convergence of the blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop iterative solutions for AO image restoration that address the joint deconvolution problem. We conduct a number of experiments to evaluate the performance of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms current state-of-the-art blind deconvolution methods.
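
    The authors' full algorithm adds frame selection, a PSF estimation model, and regularization; the sketch below shows only the unregularized multi-frame fixed-point update that maximizing a joint Poisson log-likelihood yields (a Richardson-Lucy-style iteration; all names here are illustrative):

        import numpy as np
        from scipy.signal import fftconvolve

        def multiframe_poisson_ml(frames, psfs, n_iter=30, eps=1e-12):
            """Jointly deconvolve K frames y_k = h_k * x + Poisson noise.
            frames, psfs -- sequences of K same-sized 2-D arrays, with each
            PSF normalized to unit sum."""
            x = np.mean(frames, axis=0).clip(min=eps)   # start from the mean frame
            for _ in range(n_iter):
                correction = np.zeros_like(x)
                for y, h in zip(frames, psfs):
                    est = fftconvolve(x, h, mode="same") + eps       # forward model
                    correction += fftconvolve(y / est, h[::-1, ::-1], mode="same")
                x *= correction / len(frames)           # multiplicative ML update
            return x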

  16. Development and validation of multivariate calibration methods for simultaneous estimation of Paracetamol, Enalapril maleate and hydrochlorothiazide in pharmaceutical dosage form

    NASA Astrophysics Data System (ADS)

    Singh, Veena D.; Daharwal, Sanjay J.

    2017-01-01

    Three multivariate calibration spectrophotometric methods were developed for the simultaneous estimation of Paracetamol (PARA), Enalapril maleate (ENM) and Hydrochlorothiazide (HCTZ) in tablet dosage form: the multi-linear regression calibration (MLRC), trilinear regression calibration (TLRC) and classical least squares (CLS) methods. The selectivity of the proposed methods was studied by analyzing a laboratory-prepared ternary mixture, and the methods were successfully applied to the combined dosage form. The proposed methods were validated as per ICH guidelines, and good accuracy, precision and specificity were confirmed within the concentration ranges of 5-35 μg mL⁻¹, 5-40 μg mL⁻¹ and 5-40 μg mL⁻¹ for PARA, HCTZ and ENM, respectively. The results were statistically compared with a reported HPLC method. Thus, the proposed methods can be effectively used for the routine quality control analysis of these drugs in commercial tablet dosage form.
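
    Of the three methods, classical least squares is the simplest to sketch: mixture spectra are modeled as A = C K, the pure-component spectra K are estimated from standards of known composition, and unknown concentrations are then solved by least squares (a generic CLS illustration, not the authors' code):

        import numpy as np

        def cls_fit(A_std, C_std):
            """Fit A = C @ K by least squares.
            A_std -- (n_standards x n_wavelengths) absorbances
            C_std -- (n_standards x 3) known PARA/ENM/HCTZ concentrations
            Returns K, the (3 x n_wavelengths) pure-component spectra."""
            K, *_ = np.linalg.lstsq(C_std, A_std, rcond=None)
            return K

        def cls_predict(A_new, K):
            """Estimate concentrations of unknown mixtures from their spectra."""
            C_T, *_ = np.linalg.lstsq(K.T, A_new.T, rcond=None)
            return C_T.T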

  17. Maximum likelihood estimation for life distributions with competing failure modes

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1979-01-01

    Systems that are placed on test at time zero, function for a period, and fail at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of various stress variables to which the item is subjected. Maximum likelihood estimation methods are discussed, and specific methods are reported for the smallest extreme-value distribution of life. Monte Carlo results indicate that the methods are promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.
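
    As a minimal illustration of the ML machinery involved (without the competing failure modes or stress dependence the report handles), the smallest extreme-value distribution is available in SciPy as gumbel_l, and its location and scale can be fitted by maximum likelihood; the parameter values below are arbitrary:

        import numpy as np
        from scipy.stats import gumbel_l

        rng = np.random.default_rng(0)
        # Simulated log-lifetimes from a smallest extreme-value distribution.
        log_life = gumbel_l.rvs(loc=5.0, scale=0.5, size=200, random_state=rng)

        # Maximum likelihood estimates of the location and scale parameters.
        loc_hat, scale_hat = gumbel_l.fit(log_life)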

  18. Measurement of the Top Quark Mass by Dynamical Likelihood Method using the Lepton plus Jets Events in 1.96 TeV Proton-Antiproton Collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yorita, Kohei

    2005-03-01

    We have measured the top quark mass with the dynamical likelihood method (DLM) using the CDF II detector at the Fermilab Tevatron. The Tevatron produces top and anti-top pairs in proton-antiproton collisions at a center-of-mass energy of 1.96 TeV. The data sample used in this measurement was accumulated from March 2002 through August 2003 and corresponds to an integrated luminosity of 162 pb⁻¹.
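
    The dynamical likelihood method itself integrates over parton-level kinematics event by event; as a purely schematic stand-in, the sketch below shows only the generic final step shared by such methods, multiplying per-event likelihood curves over a mass grid and taking the maximum (all numbers invented):

        import numpy as np

        masses = np.linspace(150.0, 200.0, 501)         # GeV grid (illustrative)

        def event_likelihood(m_reco, sigma=12.0):
            """Toy per-event likelihood curve over the mass grid."""
            return np.exp(-0.5 * ((masses - m_reco) / sigma) ** 2)

        rng = np.random.default_rng(2)
        reco_masses = rng.normal(173.0, 12.0, size=60)  # toy reconstructed masses
        log_L = sum(np.log(event_likelihood(m) + 1e-300) for m in reco_masses)
        m_hat = masses[np.argmax(log_L)]                # joint ML mass estimate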

  19. Oxyrase, a method which avoids CO2 in the incubation atmosphere for anaerobic susceptibility testing of antibiotics affected by CO2.

    PubMed

    Spangler, S K; Appelbaum, P C

    1993-02-01

    The Oxyrase agar dilution method, with exclusion of CO2 from the environment, was compared with the reference agar dilution method recommended by the National Committee for Clinical Laboratory Standards (anaerobic chamber with 10% CO2) for testing the susceptibility of 51 gram-negative and 43 gram-positive anaerobes to azithromycin and erythromycin. With the Oxyrase method, anaerobiosis was achieved by incorporating the O2-binding enzyme Oxyrase, together with susceptibility test medium, antibiotic, and enzyme substrates, into the upper level of a biplate. Plates were covered with a Brewer lid and incubated in ambient air. With azithromycin, Oxyrase yielded an MIC for 50% of strains tested (MIC50) of 2.0 micrograms/ml and an MIC90 of 8.0 micrograms/ml, compared with 8.0 and >32.0 micrograms/ml under standard anaerobic conditions. At a breakpoint of 8.0 micrograms/ml, 90.4% of strains were susceptible to azithromycin with Oxyrase, compared with 53.2% in the chamber. The corresponding erythromycin MIC50 and MIC90 were 1.0 and 8.0 micrograms/ml with Oxyrase, compared with 4.0 and >32.0 micrograms/ml by the reference method, with 89.3% of strains susceptible at a breakpoint of 4 micrograms/ml with Oxyrase, compared with 60.6% in CO2. Exclusion of CO2 from the anaerobic atmosphere when testing susceptibility to azalides and macrolides yielded lower MICs, which may prompt reconsideration of the role these compounds play in the treatment of infections caused by these strains.
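
    MIC50 and MIC90 as used above are simply the lowest tested concentrations that inhibit 50% and 90% of the strains. A small sketch with hypothetical MICs on doubling dilutions:

        import numpy as np

        def mic_percentile(mics, pct):
            """Smallest MIC at which the cumulative fraction of inhibited
            strains reaches pct percent."""
            mics = np.sort(np.asarray(mics, dtype=float))
            idx = int(np.ceil(pct / 100.0 * len(mics))) - 1
            return mics[idx]

        # Hypothetical MICs (micrograms/ml) for ten strains.
        mics = [0.5, 1, 1, 2, 2, 2, 4, 8, 16, 32]
        mic50 = mic_percentile(mics, 50)   # 2.0
        mic90 = mic_percentile(mics, 90)   # 16.0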

  20. An evaluation of percentile and maximum likelihood estimators of Weibull parameters

    Treesearch

    Stanley J. Zarnoch; Tommy R. Dell

    1985-01-01

    Two methods of estimating the three-parameter Weibull distribution were evaluated by computer simulation and field data comparison. Maximum likelihood estimators (MLE) with bias correction were calculated with the computer routine FITTER (Bailey 1974); percentile estimators (PCT) were those proposed by Zanakis (1979). The MLE estimators had smaller bias and...
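
    For reference, the ML side of such a comparison is straightforward to reproduce with SciPy's three-parameter Weibull (weibull_min, with the location acting as the threshold parameter); the parameter values below are illustrative only, and the percentile estimators of Zanakis (1979) would instead be computed from sample order statistics:

        import numpy as np
        from scipy.stats import weibull_min

        rng = np.random.default_rng(1)
        # Simulated data from a three-parameter Weibull: shape 2.5,
        # threshold (location) 4.0, scale 10.0 -- illustrative values.
        x = weibull_min.rvs(2.5, loc=4.0, scale=10.0, size=500, random_state=rng)

        # Unconstrained maximum likelihood fit of all three parameters.
        shape_ml, loc_ml, scale_ml = weibull_min.fit(x)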
