Sample records for likelihood based simultaneous

  1. Laser-Based Slam with Efficient Occupancy Likelihood Map Learning for Dynamic Indoor Scenes

    NASA Astrophysics Data System (ADS)

    Li, Li; Yao, Jian; Xie, Renping; Tu, Jinge; Feng, Chen

    2016-06-01

    Location-Based Services (LBS) have attracted growing attention in recent years, especially in indoor environments. The fundamental technique behind LBS is map building for unknown environments, also known as simultaneous localization and mapping (SLAM) in the robotics community. In this paper, we propose a novel approach for SLAM in dynamic indoor scenes based on a 2D laser scanner mounted on a mobile Unmanned Ground Vehicle (UGV), with the help of a grid-based occupancy likelihood map. Instead of applying scan matching to two adjacent scans, we propose to match the current scan against the occupancy likelihood map learned from all previous scans at multiple scales, to avoid the accumulation of matching errors. Because the points in a scan are acquired sequentially rather than simultaneously, the scan is unavoidably distorted to some extent. To compensate for the scan distortion caused by the motion of the UGV, we propose to integrate the velocity of the laser range finder (LRF) into the scan matching optimization framework. Moreover, to reduce as much as possible the effect of dynamic objects, such as walking pedestrians, that often appear in indoor scenes, we propose a new occupancy likelihood map learning strategy that increases or decreases the probability of each occupancy grid cell after each scan matching. Experimental results in several challenging indoor scenes demonstrate that our proposed approach is capable of providing high-precision SLAM results.
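As a rough illustration of the kind of per-cell update the abstract alludes to (raising or lowering each grid cell's occupancy likelihood after scan matching), here is a minimal log-odds occupancy-grid sketch. The constants, clamping, and inverse sensor model are assumptions for illustration, not the authors' exact learning strategy.

```python
import numpy as np

L_OCC = 0.85              # log-odds increment for a cell observed as occupied (assumed)
L_FREE = -0.4             # log-odds decrement for a cell observed as free (assumed)
L_MIN, L_MAX = -4.0, 4.0  # clamping keeps cells responsive to dynamic objects

def update_grid(log_odds, hit_cells, free_cells):
    """Raise the occupancy likelihood of hit cells and lower it for free cells."""
    log_odds[hit_cells] = np.clip(log_odds[hit_cells] + L_OCC, L_MIN, L_MAX)
    log_odds[free_cells] = np.clip(log_odds[free_cells] + L_FREE, L_MIN, L_MAX)
    return log_odds

def occupancy_probability(log_odds):
    """Convert log-odds back to an occupancy probability map."""
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds))

# Example: a 100x100 grid updated after one scan.
grid = np.zeros((100, 100))
hits = (np.array([50, 51]), np.array([60, 60]))    # cells hit by the scan
frees = (np.array([50, 50]), np.array([40, 50]))   # cells traversed by the beams
grid = update_grid(grid, hits, frees)
print(occupancy_probability(grid)[50, 60])
```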

  2. Assessing performance and validating finite element simulations using probabilistic knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolin, Ronald M.; Rodriguez, E. A.

    Two probabilistic approaches for assessing performance are presented. The first approach assesses probability of failure by simultaneously modeling all likely events. The probability that each event causes failure, along with the event's likelihood of occurrence, contributes to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin-hypercube sampling is used to stochastically assess events. The overall probability of failure is taken as the maximum probability of failure of all the events. The Likelihood of Occurrence simulation suggests failure does not occur, while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate finite element predictions.

  3. What affects public acceptance of recycled and desalinated water?

    PubMed Central

    Dolnicar, Sara; Hurlimann, Anna; Grün, Bettina

    2011-01-01

    This paper identifies factors that are associated with higher levels of public acceptance for recycled and desalinated water. For the first time, a wide range of hypothesized factors, both socio-demographic and psychographic in nature, are included simultaneously. The key results, based on a survey study of about 3000 respondents, are that: (1) drivers of the stated likelihood of using desalinated water differ somewhat from drivers of the stated likelihood of using recycled water; (2) positive perceptions of, and knowledge about, the respective water source are key drivers for the stated likelihood of usage; and (3) awareness of water scarcity, as well as prior experience with using water from alternative sources, increases the stated likelihood of use. Practical recommendations for public policy makers, such as key messages to be communicated to the public, are derived. PMID:20950834

  4. A likelihood-based biostatistical model for analyzing consumer movement in simultaneous choice experiments

    USDA-ARS?s Scientific Manuscript database

    Measures of animal movement versus consumption rates can provide valuable, ecologically relevant information on feeding preference, specifically estimates of attraction rate, leaving rate, tenure time, or measures of flight/walking path. Here, we develop a simple biostatistical model to analyze repe...

  5. Maintained Individual Data Distributed Likelihood Estimation (MIDDLE)

    PubMed Central

    Boker, Steven M.; Brick, Timothy R.; Pritikin, Joshua N.; Wang, Yang; von Oertzen, Timo; Brown, Donald; Lach, John; Estabrook, Ryne; Hunter, Michael D.; Maes, Hermine H.; Neale, Michael C.

    2015-01-01

    Maintained Individual Data Distributed Likelihood Estimation (MIDDLE) is a novel paradigm for research in the behavioral, social, and health sciences. The MIDDLE approach is based on the seemingly impossible idea that data can be privately maintained by participants and never revealed to researchers, while still enabling statistical models to be fit and scientific hypotheses tested. MIDDLE rests on the assumption that participant data should belong to, be controlled by, and remain in the possession of the participants themselves. Distributed likelihood estimation refers to fitting statistical models by sending an objective function and vector of parameters to each participant’s personal device (e.g., smartphone, tablet, computer), where the likelihood of that individual’s data is calculated locally. Only the likelihood value is returned to the central optimizer. The optimizer aggregates likelihood values from responding participants and chooses new vectors of parameters until the model converges. A MIDDLE study provides significantly greater privacy for participants, automatic management of opt-in and opt-out consent, lower cost for the researcher and funding institute, and faster determination of results. Furthermore, if a participant opts into several studies simultaneously and opts into data sharing, these studies automatically have access to individual-level longitudinal data linked across all studies. PMID:26717128
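The distributed-likelihood loop described above can be sketched as follows, using a toy normal model. The data layout, function names, and the use of scipy.optimize.minimize are illustrative assumptions, not the MIDDLE implementation; the point is that only scalar likelihood values leave each "device".

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Each participant's data stays "on device"; the optimizer never sees it.
participant_data = [rng.normal(2.0, 1.5, size=20) for _ in range(50)]

def local_log_likelihood(data, params):
    """Computed on the participant's device; only the resulting value is returned."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    return np.sum(-0.5 * np.log(2 * np.pi) - log_sigma
                  - 0.5 * ((data - mu) / sigma) ** 2)

def aggregate_negative_log_likelihood(params):
    """Central optimizer: sum the scalar likelihood values returned by devices."""
    return -sum(local_log_likelihood(d, params) for d in participant_data)

fit = minimize(aggregate_negative_log_likelihood, x0=np.array([0.0, 0.0]))
print("estimated mu, sigma:", fit.x[0], np.exp(fit.x[1]))
```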

  6. Probability matching in perceptrons: Effects of conditional dependence and linear nonseparability.

    PubMed

    Dawson, Michael R W; Gupta, Maya

    2017-01-01

    Probability matching occurs when the behavior of an agent matches the likelihood of occurrence of events in the agent's environment. For instance, when artificial neural networks match probability, the activity in their output unit equals the past probability of reward in the presence of a stimulus. Our previous research demonstrated that simple artificial neural networks (perceptrons, which consist of a set of input units directly connected to a single output unit) learn to match probability when presented different cues in isolation. The current paper extends this research by showing that perceptrons can match probabilities when presented simultaneous cues, with each cue signaling different reward likelihoods. In our first simulation, we presented up to four different cues simultaneously; the likelihood of reward signaled by the presence of one cue was independent of the likelihood of reward signaled by other cues. Perceptrons learned to match reward probabilities by treating each cue as an independent source of information about the likelihood of reward. In a second simulation, we violated the independence between cues by making some reward probabilities depend upon cue interactions. We did so by basing reward probabilities on a logical combination (AND or XOR) of two of the four possible cues. We also varied the size of the reward associated with the logical combination. We discovered that this latter manipulation was a much better predictor of perceptron performance than was the logical structure of the interaction between cues. This indicates that when perceptrons learn to match probabilities, they do so by assuming that each signal of a reward is independent of any other; the best predictor of perceptron performance is a quantitative measure of the independence of these input signals, and not the logical structure of the problem being learned.
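A minimal sketch of the probability-matching phenomenon itself, assuming a single logistic output unit trained with the delta rule on one cue; the activation function, learning rate, and trial count are assumptions, not the network or parameters used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
reward_prob = 0.7      # probability that the cue is rewarded
lr = 0.05              # learning rate (illustrative)
w, b = 0.0, 0.0        # single connection weight and bias

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(20000):
    x = 1.0                                      # cue present on every trial
    target = float(rng.random() < reward_prob)   # stochastic reward signal
    y = logistic(w * x + b)
    w += lr * (target - y) * x                   # delta-rule update
    b += lr * (target - y)

# Output activity converges toward the reward probability (~0.7): probability matching.
print("output activity for the cue:", logistic(w + b))
```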

  7. Probability matching in perceptrons: Effects of conditional dependence and linear nonseparability

    PubMed Central

    2017-01-01

    Probability matching occurs when the behavior of an agent matches the likelihood of occurrence of events in the agent’s environment. For instance, when artificial neural networks match probability, the activity in their output unit equals the past probability of reward in the presence of a stimulus. Our previous research demonstrated that simple artificial neural networks (perceptrons, which consist of a set of input units directly connected to a single output unit) learn to match probability when presented different cues in isolation. The current paper extends this research by showing that perceptrons can match probabilities when presented simultaneous cues, with each cue signaling different reward likelihoods. In our first simulation, we presented up to four different cues simultaneously; the likelihood of reward signaled by the presence of one cue was independent of the likelihood of reward signaled by other cues. Perceptrons learned to match reward probabilities by treating each cue as an independent source of information about the likelihood of reward. In a second simulation, we violated the independence between cues by making some reward probabilities depend upon cue interactions. We did so by basing reward probabilities on a logical combination (AND or XOR) of two of the four possible cues. We also varied the size of the reward associated with the logical combination. We discovered that this latter manipulation was a much better predictor of perceptron performance than was the logical structure of the interaction between cues. This indicates that when perceptrons learn to match probabilities, they do so by assuming that each signal of a reward is independent of any other; the best predictor of perceptron performance is a quantitative measure of the independence of these input signals, and not the logical structure of the problem being learned. PMID:28212422

  8. Varied applications of a new maximum-likelihood code with complete covariance capability. [FERRET, for data adjustment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmittroth, F.

    1978-01-01

    Applications of a new data-adjustment code are given. The method is based on a maximum-likelihood extension of generalized least-squares methods that allow complete covariance descriptions for the input data and the final adjusted data evaluations. The maximum-likelihood approach is used with a generalized log-normal distribution that provides a way to treat problems with large uncertainties and that circumvents the problem of negative values that can occur for physically positive quantities. The computer code, FERRET, is written to enable the user to apply it to a large variety of problems by modifying only the input subroutine. The following applications are discussed: A 75-group a priori damage function is adjusted by as much as a factor of two by use of 14 integral measurements in different reactor spectra. Reactor spectra and dosimeter cross sections are simultaneously adjusted on the basis of both integral measurements and experimental proton-recoil spectra. Measured reaction rates, measured worths, microscopic measurements, and theoretical models are used simultaneously to evaluate dosimeter and fission-product cross sections. Applications in the data reduction of neutron cross section measurements and in the evaluation of reactor after-heat are also considered. 6 figures.

  9. Robust generative asymmetric GMM for brain MR image segmentation.

    PubMed

    Ji, Zexuan; Xia, Yong; Zheng, Yuhui

    2017-11-01

    Accurate segmentation of brain tissues from magnetic resonance (MR) images based on unsupervised statistical models such as the Gaussian mixture model (GMM) has been widely studied during the last decades. However, most GMM based segmentation methods suffer from limited accuracy due to the influences of noise and intensity inhomogeneity in brain MR images. To further improve the accuracy of brain MR image segmentation, this paper presents a Robust Generative Asymmetric GMM (RGAGMM) for simultaneous brain MR image segmentation and intensity inhomogeneity correction. First, we develop an asymmetric distribution to fit the data shapes, and thus construct a spatially constrained asymmetric model. Then, we incorporate two pseudo-likelihood quantities and bias field estimation into the model's log-likelihood, aiming to exploit within-cluster and between-cluster neighboring priors and to alleviate the impact of intensity inhomogeneity, respectively. Finally, an expectation maximization algorithm is derived to iteratively maximize the approximation of the data log-likelihood function, overcoming the intensity inhomogeneity in the image and segmenting the brain MR images simultaneously. To demonstrate the performance of the proposed algorithm, we first applied it to a synthetic brain MR image to show the intermediate illustrations and the estimated distribution. The next group of experiments is carried out on clinical 3T-weighted brain MR images, which contain quite serious intensity inhomogeneity and noise. Then we quantitatively compare our algorithm to state-of-the-art segmentation approaches by using the Dice coefficient (DC) on benchmark images obtained from IBSR and BrainWeb with different levels of noise and intensity inhomogeneity. The comparison results on various brain MR images demonstrate the superior performance of the proposed algorithm in dealing with noise and intensity inhomogeneity. In this paper, the RGAGMM algorithm is proposed, which can simply and efficiently incorporate spatial constraints into an EM framework to simultaneously segment brain MR images and estimate the intensity inhomogeneity. The proposed algorithm is flexible to fit the data shapes, can simultaneously overcome the influence of noise and intensity inhomogeneity, and hence is capable of improving segmentation accuracy by over 5% compared with several state-of-the-art algorithms. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies

    PubMed Central

    Rukhin, Andrew L.

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered to be known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed. PMID:26989583
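For reference, the standard one-way random-effects (interlaboratory) model that this abstract appears to describe can be written as below, in assumed notation; Rukhin's exact parametrization and derivation may differ.

```latex
% Assumed notation: p methods, n_i observations from method i, between-method
% variance sigma^2, within-method variances sigma_i^2 (not assumed equal).
\[
  x_{ij} = \mu + b_i + e_{ij}, \qquad
  b_i \sim N(0, \sigma^2), \quad e_{ij} \sim N(0, \sigma_i^2),
  \qquad i = 1, \dots, p, \; j = 1, \dots, n_i .
\]
\[
  \log L = -\tfrac{1}{2}\sum_{i=1}^{p}\Biggl[
      \log\Bigl(\sigma^2 + \tfrac{\sigma_i^2}{n_i}\Bigr)
      + \frac{(\bar{x}_i - \mu)^2}{\sigma^2 + \sigma_i^2/n_i}
      + (n_i - 1)\log \sigma_i^2
      + \frac{(n_i - 1) s_i^2}{\sigma_i^2}
    \Biggr] + \text{const},
\]
% where \bar{x}_i and s_i^2 are the sample mean and variance of method i.
% Setting the partial derivatives with respect to mu, sigma^2, and the sigma_i^2
% to zero yields the simultaneous (after clearing denominators, polynomial)
% likelihood equations referred to in the abstract.
```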

  11. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies.

    PubMed

    Rukhin, Andrew L

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered to be known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed.

  12. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

    The unconditional (simultaneous) maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum Chi-square estimation procedure produces unbiased…

  13. Quasi-Maximum Likelihood Estimation of Structural Equation Models with Multiple Interaction and Quadratic Effects

    ERIC Educational Resources Information Center

    Klein, Andreas G.; Muthen, Bengt O.

    2007-01-01

    In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…

  14. Model criticism based on likelihood-free inference, with an application to protein network evolution.

    PubMed

    Ratmann, Oliver; Andrieu, Christophe; Wiuf, Carsten; Richardson, Sylvia

    2009-06-30

    Mathematical models are an important tool to explain and comprehend complex phenomena, and unparalleled computational advances enable us to easily explore them with little or no understanding of their global properties. In fact, the likelihood of the data under complex stochastic models is often analytically or numerically intractable in many areas of science. This makes it even more important to simultaneously investigate the adequacy of these models (in absolute terms, against the data, rather than relative to the performance of other models), but no such procedure has been formally discussed when the likelihood is intractable. We provide a statistical interpretation of current developments in likelihood-free Bayesian inference that explicitly accounts for discrepancies between the model and the data, termed Approximate Bayesian Computation under model uncertainty (ABCμ). We augment the likelihood of the data with unknown error terms that correspond to freely chosen checking functions, and provide Monte Carlo strategies for sampling from the associated joint posterior distribution without the need to evaluate the likelihood. We discuss the benefit of incorporating model diagnostics within an ABC framework, and demonstrate how this method diagnoses model mismatch and guides model refinement by contrasting three qualitative models of protein network evolution to the protein interaction datasets of Helicobacter pylori and Treponema pallidum. Our results make a number of model deficiencies explicit, and suggest that the T. pallidum network topology is inconsistent with evolution dominated by link turnover or lateral gene transfer alone.

  15. A method and tool for combining differential or inclusive measurements obtained with simultaneously constrained uncertainties

    NASA Astrophysics Data System (ADS)

    Kieseler, Jan

    2017-11-01

    A method is discussed that allows combining sets of differential or inclusive measurements. It is assumed that at least one measurement was obtained by simultaneously fitting a set of nuisance parameters representing sources of systematic uncertainty. As a result of beneficial constraints from the data, all such fitted parameters are correlated with each other. The best approach for combining these measurements would be to maximize a combined likelihood, for which the full fit model of each measurement and the original data are required. However, this information is only rarely publicly available. In the absence of this information, most commonly used combination methods cannot account for these correlations between uncertainties, which can lead to severe biases, as shown in this article. The method discussed here provides a solution to this problem. It relies only on the public result and its covariance or Hessian, and is validated against the combined-likelihood approach. A dedicated software package implementing this method is also presented. It provides a text-based user interface alongside a C++ interface. The latter also interfaces to ROOT classes for simple combination of binned measurements such as differential cross sections.
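For contrast, a generic covariance-weighted (BLUE-style) combination of two measurement vectors is sketched below. Note that it treats the two inputs as mutually uncorrelated, which is exactly the limitation the paper's method is designed to overcome; all names and numbers are illustrative, and this is not the algorithm of the presented software package.

```python
import numpy as np

def combine(measurements, covariances):
    """Precision-weighted combination of measurement vectors with full covariances."""
    total_precision = sum(np.linalg.inv(c) for c in covariances)
    weighted_sum = sum(np.linalg.inv(c) @ m for m, c in zip(measurements, covariances))
    combined_cov = np.linalg.inv(total_precision)
    return combined_cov @ weighted_sum, combined_cov

# Two toy measurements of the same two bins, each with its own covariance.
m1 = np.array([10.2, 5.1])
c1 = np.array([[0.40, 0.10], [0.10, 0.30]])
m2 = np.array([9.8, 5.4])
c2 = np.array([[0.50, -0.05], [-0.05, 0.25]])

combined, cov = combine([m1, m2], [c1, c2])
print("combined values:", combined)
print("combined uncertainties:", np.sqrt(np.diag(cov)))
```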

  16. Is there a hierarchy of social inferences? The likelihood and speed of inferring intentionality, mind, and personality.

    PubMed

    Malle, Bertram F; Holbrook, Jess

    2012-04-01

    People interpret behavior by making inferences about agents' intentionality, mind, and personality. Past research studied such inferences one at a time; in real life, people make these inferences simultaneously. The present studies therefore examined whether four major inferences (intentionality, desire, belief, and personality), elicited simultaneously in response to an observed behavior, might be ordered in a hierarchy of likelihood and speed. To achieve generalizability, the studies included a wide range of stimulus behaviors, presented them verbally and as dynamic videos, and assessed inferences both in a retrieval paradigm (measuring the likelihood and speed of accessing inferences immediately after they were made) and in an online processing paradigm (measuring the speed of forming inferences during behavior observation). Five studies provide evidence for a hierarchy of social inferences, from intentionality and desire to belief to personality, that is stable across verbal and visual presentations and that parallels the order found in developmental and primate research. (c) 2012 APA, all rights reserved.

  17. Quantifying rainfall-derived inflow and infiltration in sanitary sewer systems based on conductivity monitoring

    NASA Astrophysics Data System (ADS)

    Zhang, Mingkai; Liu, Yanchen; Cheng, Xun; Zhu, David Z.; Shi, Hanchang; Yuan, Zhiguo

    2018-03-01

    Quantifying rainfall-derived inflow and infiltration (RDII) in a sanitary sewer is difficult when RDII and overflow occur simultaneously. This study proposes a novel conductivity-based method for estimating RDII. The method separately decomposes rainfall-derived inflow (RDI) and rainfall-induced infiltration (RII) on the basis of conductivity data. Fast Fourier transform was adopted to analyze variations in the flow and water quality during dry weather. Nonlinear curve fitting based on the least squares algorithm was used to optimize parameters in the proposed RDII model. The method was successfully applied to real-life case studies, in which inflow and infiltration were estimated for three typical rainfall events with total rainfall volumes of 6.25 mm (light), 28.15 mm (medium), and 178 mm (heavy). Uncertainties of model parameters were estimated using the generalized likelihood uncertainty estimation (GLUE) method and were found to be acceptable. Compared with traditional flow-based methods, the proposed approach exhibits distinct advantages in estimating RDII and overflow, particularly when the two processes happen simultaneously.
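A hedged sketch of the nonlinear least-squares step: fitting a simple two-exponential RDII model (fast inflow plus slowly decaying infiltration) to synthetic excess-flow data with scipy.optimize.curve_fit. The functional form, parameter names, and data are assumptions for illustration, not the model calibrated in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def rdii_model(t, a_rdi, k_rdi, a_rii, k_rii):
    """Fast inflow and slow infiltration, each modeled as an exponential recession."""
    return a_rdi * np.exp(-k_rdi * t) + a_rii * np.exp(-k_rii * t)

t = np.linspace(0, 48, 97)                     # hours after the rainfall event
true_flow = rdii_model(t, 12.0, 0.5, 4.0, 0.05)  # synthetic "observed" excess flow
observed = true_flow + np.random.default_rng(2).normal(0, 0.3, t.size)

params, _ = curve_fit(rdii_model, t, observed, p0=[10, 0.3, 3, 0.02])
print("fitted (a_rdi, k_rdi, a_rii, k_rii):", params)
```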

  18. Timing and Cue Competition in Conditioning of the Nictitating Membrane Response of the Rabbit ("Oryctolagus Cuniculus")

    ERIC Educational Resources Information Center

    Kehoe, E. James; Ludvig, Elliot A.; Sutton, Richard S.

    2013-01-01

    Rabbits were classically conditioned using compounds of tone and light conditioned stimuli (CSs) presented with either simultaneous onsets (Experiment 1) or serial onsets (Experiment 2) in a delay conditioning paradigm. Training with the simultaneous compound reduced the likelihood of a conditioned response (CR) to the individual CSs ("mutual…

  19. Estimating population genetic parameters and comparing model goodness-of-fit using DNA sequences with error

    PubMed Central

    Liu, Xiaoming; Fu, Yun-Xin; Maxwell, Taylor J.; Boerwinkle, Eric

    2010-01-01

    It is known that sequencing error can bias estimation of evolutionary or population genetic parameters. This problem is more prominent in deep resequencing studies because of their large sample size n, and a higher probability of error at each nucleotide site. We propose a new method based on the composite likelihood of the observed SNP configurations to infer population mutation rate θ = 4Neμ, population exponential growth rate R, and error rate ɛ, simultaneously. Using simulation, we show the combined effects of the parameters θ, n, ɛ, and R on the accuracy of parameter estimation. We compared our maximum composite likelihood estimator (MCLE) of θ with other θ estimators that take into account the error. The results show the MCLE performs well when the sample size is large or the error rate is high. Using parametric bootstrap, composite likelihood can also be used as a statistic for testing the model goodness-of-fit of the observed DNA sequences. The MCLE method is applied to sequence data on the ANGPTL4 gene in 1832 African American and 1045 European American individuals. PMID:19952140

  20. Copula Regression Analysis of Simultaneously Recorded Frontal Eye Field and Inferotemporal Spiking Activity during Object-Based Working Memory

    PubMed Central

    Hu, Meng; Clark, Kelsey L.; Gong, Xiajing; Noudoost, Behrad; Li, Mingyao; Moore, Tirin

    2015-01-01

    Inferotemporal (IT) neurons are known to exhibit persistent, stimulus-selective activity during the delay period of object-based working memory tasks. Frontal eye field (FEF) neurons show robust, spatially selective delay period activity during memory-guided saccade tasks. We present a copula regression paradigm to examine neural interaction of these two types of signals between areas IT and FEF of the monkey during a working memory task. This paradigm is based on copula models that can account for both marginal distribution over spiking activity of individual neurons within each area and joint distribution over ensemble activity of neurons between areas. Considering the popular GLMs as marginal models, we developed a general and flexible likelihood framework that uses the copula to integrate separate GLMs into a joint regression analysis. Such joint analysis essentially leads to a multivariate analog of the marginal GLM theory and hence efficient model estimation. In addition, we show that Granger causality between spike trains can be readily assessed via the likelihood ratio statistic. The performance of this method is validated by extensive simulations, and compared favorably to the widely used GLMs. When applied to spiking activity of simultaneously recorded FEF and IT neurons during working memory task, we observed significant Granger causality influence from FEF to IT, but not in the opposite direction, suggesting the role of the FEF in the selection and retention of visual information during working memory. The copula model has the potential to provide unique neurophysiological insights about network properties of the brain. PMID:26063909

  1. The Link Between Community-Based Violence and Intimate Partner Violence: the Effect of Crime and Male Aggression on Intimate Partner Violence Against Women.

    PubMed

    Kiss, Ligia; Schraiber, Lilia Blima; Hossain, Mazeda; Watts, Charlotte; Zimmerman, Cathy

    2015-08-01

    Both intimate partner violence (IPV) and community violence are prevalent globally, and each is associated with serious health consequences. However, little is known about their potential links or the possible benefits of coordinated prevention strategies. Using aggregated data on community violence from the São Paulo State Security Department (INFOCRIM) merged with data from the WHO multi-country study on women's health and domestic violence, random intercept models were created to assess the effect of crime on women's probability of experiencing IPV. The association between IPV and male aggression (measured by women's reports of their partner's fights with other men) was examined using logistic regression models. We found little variation in the likelihood of male IPV perpetration related to neighborhood crime level but did find an increased likelihood of IPV experiences among women whose partners were involved in male-to-male violence. Emerging evidence on violence prevention has suggested some promising avenues for primary prevention that address common risk factors for both perpetration of IPV and male interpersonal violence. Strategies such as early identification and effective treatment of emotional disorders, alcohol abuse prevention and treatment, complex community-based interventions to change gender social norms, and social marketing campaigns designed to modify social and cultural norms that support violence may work to simultaneously prevent male-on-male aggression and IPV. Future evaluations of these prevention strategies should simultaneously assess the impact of interventions on IPV and male interpersonal aggression.

  2. Hyperspectral image reconstruction for x-ray fluorescence tomography

    DOE PAGES

    Gürsoy, Doǧa; Biçer, Tekin; Lanzirotti, Antonio; ...

    2015-01-01

    A penalized maximum-likelihood estimation is proposed to perform hyperspectral (spatio-spectral) image reconstruction for X-ray fluorescence tomography. The approach minimizes a Poisson-based negative log-likelihood of the observed photon counts, and uses a penalty term that has the effect of encouraging local continuity of model parameter estimates in both spatial and spectral dimensions simultaneously. The performance of the reconstruction method is demonstrated with experimental data acquired from a seed of Arabidopsis thaliana collected at the 13-ID-E microprobe beamline at the Advanced Photon Source. The resulting element distribution estimates with the proposed approach show significantly better reconstruction quality than the conventional analytical inversion approaches, and allow for a high data compression factor which can reduce data acquisition times remarkably. In particular, this technique provides the capability to tomographically reconstruct full energy dispersive spectra without compromising reconstruction artifacts that impact the interpretation of results.
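In assumed notation, a penalized Poisson objective of the general kind this abstract describes can be written as follows; the paper's exact penalty, weights, and forward operator may differ.

```latex
% y_i: observed photon counts in measurement bin i; A: forward (projection)
% operator; x_j: hyperspectral image value at voxel-energy bin j; beta: penalty
% weight; N(j): spatial and spectral neighbors of j; psi: a convex potential
% encouraging local continuity; w_jk: neighbor weights.
\[
  \hat{x} \;=\; \arg\min_{x \,\ge\, 0}\;
  \sum_{i} \Bigl[\, (Ax)_i - y_i \log (Ax)_i \,\Bigr]
  \;+\; \beta \sum_{j} \sum_{k \in \mathcal{N}(j)} w_{jk}\, \psi\!\bigl(x_j - x_k\bigr).
\]
```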

  3. Conflict effects without conflict in anterior cingulate cortex: multiple response effects and context specific representations

    PubMed Central

    Brown, Joshua W.

    2009-01-01

    The error likelihood computational model of anterior cingulate cortex (ACC) (Brown & Braver, 2005) has successfully predicted error likelihood effects, risk prediction effects, and how individual differences in conflict and error likelihood effects vary with trait differences in risk aversion. The same computational model now makes a further prediction that apparent conflict effects in ACC may result in part from an increasing number of simultaneously active responses, regardless of whether or not the cued responses are mutually incompatible. In Experiment 1, the model prediction was tested with a modification of the Eriksen flanker task, in which some task conditions require two otherwise mutually incompatible responses to be generated simultaneously. In that case, the two response processes are no longer in conflict with each other. The results showed small but significant medial PFC effects in the incongruent vs. congruent contrast, despite the absence of response conflict, consistent with model predictions. This is the multiple response effect. Nonetheless, actual response conflict led to greater ACC activation, suggesting that conflict effects are specific to particular task contexts. In Experiment 2, results from a change signal task suggested that the context dependence of conflict signals does not depend on error likelihood effects. Instead, inputs to ACC may reflect complex and task specific representations of motor acts, such as bimanual responses. Overall, the results suggest the existence of a richer set of motor signals monitored by medial PFC and are consistent with distinct effects of multiple responses, conflict, and error likelihood in medial PFC. PMID:19375509

  4. Simultaneous selection for cowpea (Vigna unguiculata L.) genotypes with adaptability and yield stability using mixed models.

    PubMed

    Torres, F E; Teodoro, P E; Rodrigues, E V; Santos, A; Corrêa, A M; Ceccon, G

    2016-04-29

    The aim of this study was to select erect cowpea (Vigna unguiculata L.) genotypes simultaneously for high adaptability, stability, and grain yield in Mato Grosso do Sul, Brazil using mixed models. We conducted six trials of different cowpea genotypes in 2005 and 2006 in Aquidauana, Chapadão do Sul, Dourados, and Primavera do Leste. The experimental design was randomized complete blocks with four replications and 20 genotypes. Genetic parameters were estimated by restricted maximum likelihood/best linear unbiased prediction, and selection was based on the harmonic mean of the relative performance of genetic values method using three strategies: selection based on the predicted breeding value, having considered the performance mean of the genotypes in all environments (no interaction effect); the performance in each environment (with an interaction effect); and the simultaneous selection for grain yield, stability, and adaptability. The MNC99542F-5 and MNC99-537F-4 genotypes could be grown in various environments, as they exhibited high grain yield, adaptability, and stability. The average heritability of the genotypes was moderate to high and the selective accuracy was 82%, indicating an excellent potential for selection.

  5. 8D likelihood effective Higgs couplings extraction framework in h → 4ℓ

    DOE PAGES

    Chen, Yi; Di Marco, Emanuele; Lykken, Joe; ...

    2015-01-23

    We present an overview of a comprehensive analysis framework aimed at performing direct extraction of all possible effective Higgs couplings to neutral electroweak gauge bosons in the decay to electrons and muons, the so-called ‘golden channel’. Our framework is based primarily on a maximum likelihood method constructed from analytic expressions of the fully differential cross sections for h → 4ℓ and for the dominant irreducible $q\bar{q} \to 4\ell$ background, where 4ℓ = 2e2μ, 4e, 4μ. Detector effects are included by an explicit convolution of these analytic expressions with the appropriate transfer function over all center of mass variables. Utilizing the full set of observables, we construct an unbinned detector-level likelihood which is continuous in the effective couplings. We consider possible ZZ, Zγ, and γγ couplings simultaneously, allowing for general CP odd/even admixtures. A broad overview is given of how the convolution is performed, and we discuss the principles and theoretical basis of the framework. This framework can be used in a variety of ways to study Higgs couplings in the golden channel using data obtained at the LHC and other future colliders.

  6. Multi-Sample Cluster Analysis Using Akaike’s Information Criterion.

    DTIC Science & Technology

    1982-12-20

    of Likelihood Criteria for Different Hypotheses," in P. R. Krishnaiah (Ed.), Multivariate Analysis-II, New York: Academic Press. [5] Fisher, R. A... Methods of Simultaneous Inference in MANOVA," in P. R. Krishnaiah (Ed.), Multivariate Analysis-II, New York: Academic Press. [8] Kendall, M. G. (1966)...(1982), Applied Multivariate Statistical Analysis, Englewood Cliffs: Prentice-Hall, Inc. [10] Krishnaiah, P. R. (1969), "Simultaneous Test

  7. A method for modeling bias in a person's estimates of likelihoods of events

    NASA Technical Reports Server (NTRS)

    Nygren, Thomas E.; Morera, Osvaldo

    1988-01-01

    It is of practical importance in decision situations involving risk to train individuals to transform uncertainties into subjective probability estimates that are both accurate and unbiased. We have found that in decision situations involving risk, people often introduce subjective bias in their estimation of the likelihoods of events depending on whether the possible outcomes are perceived as being good or bad. Until now, however, the successful measurement of individual differences in the magnitude of such biases has not been attempted. In this paper we illustrate a modification of a procedure originally outlined by Davidson, Suppes, and Siegel (3) to allow for a quantitatively-based methodology for simultaneously estimating an individual's subjective utility and subjective probability functions. The procedure is now an interactive computer-based algorithm, DSS, that allows for the measurement of biases in probability estimation by obtaining independent measures of two subjective probability functions (S+ and S-) for winning (i.e., good outcomes) and for losing (i.e., bad outcomes) respectively for each individual, and for different experimental conditions within individuals. The algorithm and some recent empirical data are described.

  8. Quantifying (dis)agreement between direct detection experiments in a halo-independent way

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feldstein, Brian; Kahlhoefer, Felix, E-mail: brian.feldstein@physics.ox.ac.uk, E-mail: felix.kahlhoefer@physics.ox.ac.uk

    We propose an improved method to study recent and near-future dark matter direct detection experiments with small numbers of observed events. Our method determines in a quantitative and halo-independent way whether the experiments point towards a consistent dark matter signal and identifies the best-fit dark matter parameters. To achieve true halo independence, we apply a recently developed method based on finding the velocity distribution that best describes a given set of data. For a quantitative global analysis we construct a likelihood function suitable for small numbers of events, which allows us to determine the best-fit particle physics properties of dark matter considering all experiments simultaneously. Based on this likelihood function we propose a new test statistic that quantifies how well the proposed model fits the data and how large the tension between different direct detection experiments is. We perform Monte Carlo simulations in order to determine the probability distribution function of this test statistic and to calculate the p-value for both the dark matter hypothesis and the background-only hypothesis.

  9. Modeling, estimation and identification methods for static shape determination of flexible structures. [for large space structure design

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Scheid, R. E., Jr.

    1986-01-01

    This paper outlines methods for modeling, identification and estimation for static determination of flexible structures. The shape estimation schemes are based on structural models specified by (possibly interconnected) elliptic partial differential equations. The identification techniques provide approximate knowledge of parameters in elliptic systems. The techniques are based on the method of maximum-likelihood that finds parameter values such that the likelihood functional associated with the system model is maximized. The estimation methods are obtained by means of a function-space approach that seeks to obtain the conditional mean of the state given the data and a white noise characterization of model errors. The solutions are obtained in a batch-processing mode in which all the data is processed simultaneously. After methods for computing the optimal estimates are developed, an analysis of the second-order statistics of the estimates and of the related estimation error is conducted. In addition to outlining the above theoretical results, the paper presents typical flexible structure simulations illustrating performance of the shape determination methods.

  10. Insights into energy delivery to myocardial tissue during radiofrequency ablation through application of the first law of thermodynamics.

    PubMed

    Bunch, T Jared; Day, John D; Packer, Douglas L

    2009-04-01

    The approach to catheter-based radiofrequency ablation of atrial fibrillation has evolved, and as a consequence, more energy is delivered in the posterior left atrium, exposing neighboring tissue to untoward thermal injury. Simultaneously, catheter technology has advanced to allow more efficient energy delivery into the myocardium, which compounds the likelihood of collateral injury. This review focuses on the basic principles of thermodynamics as they apply to energy delivery during radiofrequency ablation. These principles can be used to titrate energy delivery and plan ablative approaches in an effort to minimize complications during the procedure.

  11. A decision directed detector for the phase incoherent Gaussian channel

    NASA Technical Reports Server (NTRS)

    Kazakos, D.

    1975-01-01

    A vector digital signalling scheme is proposed for simultaneous adaptive data transmission and phase estimation. The use of maximum likelihood estimation methods predicts a better performance than the phase-locked loop. The phase estimate is shown to converge to the true value, so that the adaptive nature of the detector effectively achieves phase acquisition and improvement in performance. No separate synchronization interval is required and phase fluctuations can be tracked simultaneously with the transmission of information.

  12. Model-based estimation with boundary side information or boundary regularization [cardiac emission CT].

    PubMed

    Chiao, P C; Rogers, W L; Fessler, J A; Clinthorne, N H; Hero, A O

    1994-01-01

    The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (emission computed tomography). They have also reported difficulties with boundary estimation in low contrast and low count rate situations. Here they propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, they introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. They implement boundary regularization through formulating a penalized log-likelihood function. They also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives comparable perfusion estimation accuracy with the use of boundary side information.

  13. Attenuation correction in emission tomography using the emission data—A review

    PubMed Central

    Li, Yusheng

    2016-01-01

    The problem of attenuation correction (AC) for quantitative positron emission tomography (PET) had been considered solved to a large extent after the commercial availability of devices combining PET with computed tomography (CT) in 2001; single photon emission computed tomography (SPECT) has seen a similar development. However, stimulated in particular by technical advances toward clinical systems combining PET and magnetic resonance imaging (MRI), research interest in alternative approaches for PET AC has grown substantially in the last years. In this comprehensive literature review, the authors first present theoretical results with relevance to simultaneous reconstruction of attenuation and activity. The authors then look back at the early history of this research area especially in PET; since this history is closely interwoven with that of similar approaches in SPECT, these will also be covered. We then review algorithmic advances in PET, including analytic and iterative algorithms. The analytic approaches are either based on the Helgason–Ludwig data consistency conditions of the Radon transform, or generalizations of John’s partial differential equation; with respect to iterative methods, we discuss maximum likelihood reconstruction of attenuation and activity (MLAA), the maximum likelihood attenuation correction factors (MLACF) algorithm, and their offspring. The description of methods is followed by a structured account of applications for simultaneous reconstruction techniques: this discussion covers organ-specific applications, applications specific to PET/MRI, applications using supplemental transmission information, and motion-aware applications. After briefly summarizing SPECT applications, we consider recent developments using emission data other than unscattered photons. In summary, developments using time-of-flight (TOF) PET emission data for AC have shown promising advances and open a wide range of applications. These techniques may both remedy deficiencies of purely MRI-based AC approaches in PET/MRI and improve standalone PET imaging. PMID:26843243

  14. Attenuation correction in emission tomography using the emission data—A review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berker, Yannick, E-mail: berker@mail.med.upenn.edu; Li, Yusheng

    2016-02-15

    The problem of attenuation correction (AC) for quantitative positron emission tomography (PET) had been considered solved to a large extent after the commercial availability of devices combining PET with computed tomography (CT) in 2001; single photon emission computed tomography (SPECT) has seen a similar development. However, stimulated in particular by technical advances toward clinical systems combining PET and magnetic resonance imaging (MRI), research interest in alternative approaches for PET AC has grown substantially in the last years. In this comprehensive literature review, the authors first present theoretical results with relevance to simultaneous reconstruction of attenuation and activity. The authors then look back at the early history of this research area especially in PET; since this history is closely interwoven with that of similar approaches in SPECT, these will also be covered. We then review algorithmic advances in PET, including analytic and iterative algorithms. The analytic approaches are either based on the Helgason–Ludwig data consistency conditions of the Radon transform, or generalizations of John’s partial differential equation; with respect to iterative methods, we discuss maximum likelihood reconstruction of attenuation and activity (MLAA), the maximum likelihood attenuation correction factors (MLACF) algorithm, and their offspring. The description of methods is followed by a structured account of applications for simultaneous reconstruction techniques: this discussion covers organ-specific applications, applications specific to PET/MRI, applications using supplemental transmission information, and motion-aware applications. After briefly summarizing SPECT applications, we consider recent developments using emission data other than unscattered photons. In summary, developments using time-of-flight (TOF) PET emission data for AC have shown promising advances and open a wide range of applications. These techniques may both remedy deficiencies of purely MRI-based AC approaches in PET/MRI and improve standalone PET imaging.

  15. Massive optimal data compression and density estimation for scalable, likelihood-free inference in cosmology

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen

    2018-07-01

    Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper, we use massive asymptotically optimal data compression to reduce the dimensionality of the data space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parametrized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate DELFI with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10⁴ simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological data sets.
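The "one number per parameter" compression referred to above is presumably of the score/MOPED type; in assumed notation it can be written as follows, although the paper's estimator may differ in detail.

```latex
% d: data vector; mu(theta): mean model; C: data covariance; theta_*: fiducial
% parameter values at which the compression is evaluated.
\[
  t_\alpha \;=\; \frac{\partial \mu^{\mathsf{T}}}{\partial \theta_\alpha}\,
  C^{-1}\,\bigl(d - \mu(\theta_\ast)\bigr)\Big|_{\theta=\theta_\ast},
  \qquad \alpha = 1, \dots, n_{\text{params}},
\]
% so the full data vector d is reduced to a single summary t_alpha per parameter.
```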

  16. A parimutuel gambling perspective to compare probabilistic seismicity forecasts

    NASA Astrophysics Data System (ADS)

    Zechar, J. Douglas; Zhuang, Jiancang

    2014-10-01

    Using analogies to gaming, we consider the problem of comparing multiple probabilistic seismicity forecasts. To measure relative model performance, we suggest a parimutuel gambling perspective which addresses shortcomings of other methods such as likelihood ratio, information gain and Molchan diagrams. We describe two variants of the parimutuel approach for a set of forecasts: head-to-head, in which forecasts are compared in pairs, and round table, in which all forecasts are compared simultaneously. For illustration, we compare the 5-yr forecasts of the Regional Earthquake Likelihood Models experiment for M4.95+ seismicity in California.

  17. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
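A minimal illustration of solving simultaneous likelihood equations by repeated Newton-Raphson iterations, using a generic two-parameter normal model rather than the NDMMF itself; the starting values and convergence tolerance are illustrative, and poor starting values can make Newton-Raphson diverge.

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(5.0, 2.0, size=500)
n = data.size

def score(theta):
    """Gradient of the normal log-likelihood with respect to (mu, v), v = sigma^2."""
    mu, v = theta
    return np.array([np.sum(data - mu) / v,
                     -n / (2 * v) + np.sum((data - mu) ** 2) / (2 * v ** 2)])

def hessian(theta):
    """Second derivatives of the log-likelihood (Jacobian of the score)."""
    mu, v = theta
    return np.array([
        [-n / v, -np.sum(data - mu) / v ** 2],
        [-np.sum(data - mu) / v ** 2,
         n / (2 * v ** 2) - np.sum((data - mu) ** 2) / v ** 3],
    ])

# Start at the sample mean and a deliberately small variance guess.
theta = np.array([data.mean(), 1.0])
for _ in range(50):
    step = np.linalg.solve(hessian(theta), score(theta))
    theta = theta - step                    # Newton-Raphson update on the score equations
    if np.max(np.abs(step)) < 1e-10:
        break

print("ML estimates (mu, sigma^2):", theta)
```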

  18. PACM: A Two-Stage Procedure for Analyzing Structural Models.

    ERIC Educational Resources Information Center

    Lehmann, Donald R.; Gupta, Sunil

    1989-01-01

    Path Analysis of Covariance Matrix (PACM) is described as a way to separately estimate measurement and structural models using standard least squares procedures. PACM was empirically compared to simultaneous maximum likelihood estimation and use of the LISREL computer program, and its advantages are identified. (SLD)

  19. A general model for likelihood computations of genetic marker data accounting for linkage, linkage disequilibrium, and mutations.

    PubMed

    Kling, Daniel; Tillmar, Andreas; Egeland, Thore; Mostad, Petter

    2015-09-01

    Several applications necessitate an unbiased determination of relatedness, be it in linkage or association studies or in a forensic setting. An appropriate model to compute the joint probability of some genetic data for a set of persons given some hypothesis about the pedigree structure is then required. The increasing number of markers available through high-density SNP microarray typing and NGS technologies intensifies the demand, where using a large number of markers may lead to biased results due to strong dependencies between closely located loci, both within pedigrees (linkage) and in the population (allelic association or linkage disequilibrium (LD)). We present a new general model, based on a Markov chain for inheritance patterns and another Markov chain for founder allele patterns, the latter allowing us to account for LD. We also demonstrate a specific implementation for X chromosomal markers that allows for computation of likelihoods based on hypotheses of alleged relationships and genetic marker data. The algorithm can simultaneously account for linkage, LD, and mutations. We demonstrate its feasibility using simulated examples. The algorithm is implemented in the software FamLinkX, providing a user-friendly GUI for Windows systems (FamLinkX, as well as further usage instructions, is freely available at www.famlink.se ). Our software provides the necessary means to solve cases where no previous implementation exists. In addition, the software has the possibility to perform simulations in order to further study the impact of linkage and LD on computed likelihoods for an arbitrary set of markers.

  20. Usefulness of the HMRPGV method for simultaneous selection of upland cotton genotypes with greater fiber length and high yield stability.

    PubMed

    Farias, F J C; Carvalho, L P; Silva Filho, J L; Teodoro, P E

    2016-08-19

    The harmonic mean of the relative performance of genotypic predicted value (HMRPGV) method has been used to measure the genotypic stability and adaptability of various crops. However, its use in cotton is still restricted. This study aimed to use mixed models to select cotton genotypes that simultaneously result in longer fiber length, higher fiber yield, and phenotypic stability in both of these traits. Eight trials with 16 cotton genotypes were conducted in the 2008/2009 harvest in Mato Grosso State. The experimental design was randomized complete blocks with four replicates of each of the 16 genotypes. In each trial, we evaluated fiber yield and fiber length. The genetic parameters were estimated using the restricted maximum likelihood/best linear unbiased predictor method. Joint selection considering, simultaneously, fiber length, fiber yield, stability, and adaptability is possible with the HMRPGV method. Our results suggested that genotypes CNPA MT 04 2080 and BRS CEDRO may be grown in environments similar to those tested here and may be predicted to result in greater fiber length, fiber yield, adaptability, and phenotypic stability. These genotypes may constitute a promising population base in breeding programs aimed at increasing these trait values.

  1. A Likelihood-Based Framework for Association Analysis of Allele-Specific Copy Numbers.

    PubMed

    Hu, Y J; Lin, D Y; Sun, W; Zeng, D

    2014-10-01

    Copy number variants (CNVs) and single nucleotide polymorphisms (SNPs) co-exist throughout the human genome and jointly contribute to phenotypic variations. Thus, it is desirable to consider both types of variants, as characterized by allele-specific copy numbers (ASCNs), in association studies of complex human diseases. Current SNP genotyping technologies capture the CNV and SNP information simultaneously via fluorescent intensity measurements. The common practice of calling ASCNs from the intensity measurements and then using the ASCN calls in downstream association analysis has important limitations. First, the association tests are prone to false-positive findings when differential measurement errors between cases and controls arise from differences in DNA quality or handling. Second, the uncertainties in the ASCN calls are ignored. We present a general framework for the integrated analysis of CNVs and SNPs, including the analysis of total copy numbers as a special case. Our approach combines the ASCN calling and the association analysis into a single step while allowing for differential measurement errors. We construct likelihood functions that properly account for case-control sampling and measurement errors. We establish the asymptotic properties of the maximum likelihood estimators and develop EM algorithms to implement the corresponding inference procedures. The advantages of the proposed methods over the existing ones are demonstrated through realistic simulation studies and an application to a genome-wide association study of schizophrenia. Extensions to next-generation sequencing data are discussed.

  2. Hearing loss and disability exit: Measurement issues and coping strategies.

    PubMed

    Christensen, Vibeke Tornhøj; Datta Gupta, Nabanita

    2017-02-01

    Hearing loss is one of the most common conditions related to aging, and previous descriptive evidence links it to early exit from the labor market. These studies are usually based on self-reported hearing difficulties, which are potentially endogenous to labor supply. We use unique representative data collected in the spring of 2005 through in-home interviews. The data contains self-reported functional and clinically-measured hearing ability for a representative sample of the Danish population aged 50-64. We estimate the causal effect of hearing loss on early retirement via disability benefits, taking into account the endogeneity of functional hearing. Our identification strategy involves the simultaneous estimation of labor supply, functional hearing, and coping strategies (i.e. accessing assistive devices at work or informing one's employer about the problem). We use hearing aids as an instrument for functional hearing. Our main empirical findings are that endogeneity bias is more severe for men than women and that functional hearing problems significantly increase the likelihood of receiving disability benefits for both men and women. However, relative to the baseline the effect is larger for men (47% vs. 20%, respectively). Availability of assistive devices in the workplace decreases the likelihood of receiving disability benefits, whereas informing an employer about hearing problems increases this likelihood. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. MultiPhyl: a high-throughput phylogenomics webserver using distributed computing

    PubMed Central

    Keane, Thomas M.; Naughton, Thomas J.; McInerney, James O.

    2007-01-01

    With the number of fully sequenced genomes increasing steadily, there is greater interest in performing large-scale phylogenomic analyses from large numbers of individual gene families. Maximum likelihood (ML) has been shown repeatedly to be one of the most accurate methods for phylogenetic construction. Recently, there have been a number of algorithmic improvements in maximum-likelihood-based tree search methods. However, it can still take a long time to analyse the evolutionary history of many gene families using a single computer. Distributed computing refers to a method of combining the computing power of multiple computers in order to perform some larger overall calculation. In this article, we present the first high-throughput implementation of a distributed phylogenetics platform, MultiPhyl, capable of using the idle computational resources of many heterogeneous non-dedicated machines to form a phylogenetics supercomputer. MultiPhyl allows a user to upload hundreds or thousands of amino acid or nucleotide alignments simultaneously and perform computationally intensive tasks such as model selection, tree searching and bootstrapping of each of the alignments using many desktop machines. The program implements a set of 88 amino acid models and 56 nucleotide maximum likelihood models and a variety of statistical methods for choosing between alternative models. A MultiPhyl webserver is available for public use at: http://www.cs.nuim.ie/distributed/multiphyl.php. PMID:17553837

  4. MITIE: Simultaneous RNA-Seq-based transcript identification and quantification in multiple samples.

    PubMed

    Behr, Jonas; Kahles, André; Zhong, Yi; Sreedharan, Vipin T; Drewe, Philipp; Rätsch, Gunnar

    2013-10-15

    High-throughput sequencing of mRNA (RNA-Seq) has led to tremendous improvements in the detection of expressed genes and reconstruction of RNA transcripts. However, the extensive dynamic range of gene expression, technical limitations and biases, as well as the observed complexity of the transcriptional landscape, pose profound computational challenges for transcriptome reconstruction. We present the novel framework MITIE (Mixed Integer Transcript IdEntification) for simultaneous transcript reconstruction and quantification. We define a likelihood function based on the negative binomial distribution, use a regularization approach to select a few transcripts collectively explaining the observed read data and show how to find the optimal solution using Mixed Integer Programming. MITIE can (i) take advantage of known transcripts, (ii) reconstruct and quantify transcripts simultaneously in multiple samples, and (iii) resolve the location of multi-mapping reads. It is designed for genome- and assembly-based transcriptome reconstruction. We present an extensive study based on realistic simulated RNA-Seq data. When compared with state-of-the-art approaches, MITIE proves to be significantly more sensitive and overall more accurate. Moreover, MITIE yields substantial performance gains when used with multiple samples. We applied our system to 38 Drosophila melanogaster modENCODE RNA-Seq libraries and estimated the sensitivity of reconstructing omitted transcript annotations and the specificity with respect to annotated transcripts. Our results corroborate that a well-motivated objective paired with appropriate optimization techniques leads to significant improvements over the state-of-the-art in transcriptome reconstruction. MITIE is implemented in C++ and is available from http://bioweb.me/mitie under the GPL license.
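
    A toy version of the core idea — scoring candidate transcript sets by a negative binomial read-count likelihood plus a sparsity penalty — is sketched below. It brute-forces subsets and fits abundances by non-negative least squares instead of solving a mixed integer program, and the coverage profiles, dispersion and penalty are invented, so it should be read as a caricature of MITIE's objective rather than its implementation.

```python
import itertools

import numpy as np
from scipy.optimize import nnls
from scipy.stats import nbinom

def nb_loglik(counts, mu, size):
    """Negative binomial log-likelihood with mean mu and size (dispersion) parameter."""
    mu = np.maximum(mu, 1e-8)
    p = size / (size + mu)
    return nbinom.logpmf(counts, size, p).sum()

def select_transcripts(counts, profiles, size=10.0, penalty=5.0):
    """Brute-force search for the transcript subset maximizing the penalized NB likelihood."""
    n_tx = profiles.shape[1]
    best = (-np.inf, (), None)
    for k in range(1, n_tx + 1):
        for subset in itertools.combinations(range(n_tx), k):
            design = profiles[:, subset]
            abundances, _ = nnls(design, counts.astype(float))
            score = nb_loglik(counts, design @ abundances, size) - penalty * k
            if score > best[0]:
                best = (score, subset, abundances)
    return best

# Hypothetical exon-level coverage profiles for three candidate transcripts.
profiles = np.array([[1, 0, 1],
                     [1, 1, 0],
                     [0, 1, 1],
                     [1, 0, 0],
                     [0, 1, 0]], dtype=float)
rng = np.random.default_rng(1)
counts = rng.poisson(profiles @ np.array([30.0, 0.0, 20.0]))  # transcript 2 is absent
score, subset, abundances = select_transcripts(counts, profiles)
print("selected transcripts:", subset, "estimated abundances:", np.round(abundances, 1))
```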

  5. Generalized linear mixed models with varying coefficients for longitudinal data.

    PubMed

    Zhang, Daowen

    2004-03-01

    The routinely assumed parametric functional form in the linear predictor of a generalized linear mixed model for longitudinal data may be too restrictive to represent true underlying covariate effects. We relax this assumption by representing these covariate effects by smooth but otherwise arbitrary functions of time, with random effects used to model the correlation induced by among-subject and within-subject variation. Due to the usually intractable integration involved in evaluating the quasi-likelihood function, the double penalized quasi-likelihood (DPQL) approach of Lin and Zhang (1999, Journal of the Royal Statistical Society, Series B 61, 381-400) is used to estimate the varying coefficients and the variance components simultaneously by representing a nonparametric function by a linear combination of fixed effects and random effects. A scaled chi-squared test based on the mixed model representation of the proposed model is developed to test whether an underlying varying coefficient is a polynomial of a certain degree. We evaluate the performance of the procedures through simulation studies and illustrate their application with Indonesian children infectious disease data.

  6. Estimating the variance for heterogeneity in arm-based network meta-analysis.

    PubMed

    Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R

    2018-04-19

    Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.

  7. Non-linear auto-regressive models for cross-frequency coupling in neural time series

    PubMed Central

    Tallot, Lucille; Grabot, Laetitia; Doyère, Valérie; Grenier, Yves; Gramfort, Alexandre

    2017-01-01

    We address the issue of reliably detecting and quantifying cross-frequency coupling (CFC) in neural time series. Based on non-linear auto-regressive models, the proposed method provides a generative and parametric model of the time-varying spectral content of the signals. As this method models the entire spectrum simultaneously, it avoids the pitfalls related to incorrect filtering or the use of the Hilbert transform on wide-band signals. As the model is probabilistic, it also provides a score of the model “goodness of fit” via the likelihood, enabling easy and legitimate model selection and parameter comparison; this data-driven feature is unique to our model-based approach. Using three datasets obtained with invasive neurophysiological recordings in humans and rodents, we demonstrate that these models are able to replicate previous results obtained with other metrics, but also reveal new insights such as the influence of the amplitude of the slow oscillation. Using simulations, we demonstrate that our parametric method can reveal neural couplings with shorter signals than non-parametric methods. We also show how the likelihood can be used to find optimal filtering parameters, suggesting new properties on the spectrum of the driving signal, but also to estimate the optimal delay between the coupled signals, enabling a directionality estimation in the coupling. PMID:29227989

  8. Competition, Speculative Risks, and IT Security Outsourcing

    NASA Astrophysics Data System (ADS)

    Cezar, Asunur; Cavusoglu, Huseyin; Raghunathan, Srinivasan

    Information security management is becoming a more critical and, simultaneously, a challenging function for many firms. Even though many security managers are skeptical about outsourcing of IT security, others have cited reasons used for outsourcing traditional IT functions as grounds for why security outsourcing is likely to increase. Our research offers a novel explanation, based on competitive externalities associated with IT security, for firms' decisions to outsource IT security. We show that if competitive externalities are ignored, then a firm will outsource security if and only if the managed security service provider (MSSP) offers a quality (or a cost) advantage over in-house operations, which is consistent with the traditional explanation for security outsourcing. However, a higher quality is neither a prerequisite nor a guarantee for a firm to outsource security. The competitive risk environment and the nature of the security function outsourced, in addition to quality, determine firms' outsourcing decisions. If the reward from the competitor's breach is higher than the loss from its own breach, then even if the likelihood of a breach is higher under the MSSP, the expected benefit from the competitive demand externality may offset the loss from the higher likelihood of breaches, resulting in one or both firms outsourcing security. The incentive to outsource security monitoring is higher than that of infrastructure management because the MSSP can reduce the likelihood of breach on both firms and thus enhance the demand externality effect. The incentive to outsource security monitoring (infrastructure management) is higher (lower) if either the likelihood of breach on both firms is lower (higher) when security is outsourced or the benefit (relative to loss) from the externality is higher (lower). The benefit from the demand externality arising out of a security breach is higher when more of the customers that leave the breached firm switch to the non-breached firm.

  9. Parent Relationship Quality Buffers against the Effect of Peer Stressors on Depressive Symptoms from Middle Childhood to Adolescence

    ERIC Educational Resources Information Center

    Hazel, Nicholas A.; Oppenheimer, Caroline W.; Technow, Jessica R.; Young, Jami F.; Hankin, Benjamin L.

    2014-01-01

    During the transition to adolescence, several developmental trends converge to increase the importance of peer relationships, the likelihood of peer-related stressors, and the experience of depressive symptoms. Simultaneously, there are significant changes in parent-child relationships. The current study sought to evaluate whether positive…

  10. Cybersecurity Regulation of Wireless Devices for Performance and Assurance in the Age of “Medjacking”

    PubMed Central

    Armstrong, David G.; Kleidermacher, David N.; Klonoff, David C.; Slepian, Marvin J.

    2015-01-01

    We are rapidly reaching a point where, as connected devices for monitoring and treating diabetes and other diseases become more pervasive and powerful, the likelihood of malicious medical device hacking (known as “medjacking”) is growing. While government could increase regulation, we have all been witness in recent times to the limitations and issues surrounding exclusive reliance on government. Herein we outline a preliminary framework for establishing security for wireless health devices based on international common criteria. Creation of an independent medical device cybersecurity body is suggested. The goal is to allow for continued growth and innovation while simultaneously fostering security, public trust, and confidence. PMID:26319227

  11. Cybersecurity Regulation of Wireless Devices for Performance and Assurance in the Age of "Medjacking".

    PubMed

    Armstrong, David G; Kleidermacher, David N; Klonoff, David C; Slepian, Marvin J

    2015-08-27

    We are rapidly reaching a point where, as connected devices for monitoring and treating diabetes and other diseases become more pervasive and powerful, the likelihood of malicious medical device hacking (known as "medjacking") is growing. While government could increase regulation, we have all been witness in recent times to the limitations and issues surrounding exclusive reliance on government. Herein we outline a preliminary framework for establishing security for wireless health devices based on international common criteria. Creation of an independent medical device cybersecurity body is suggested. The goal is to allow for continued growth and innovation while simultaneously fostering security, public trust, and confidence. © 2015 Diabetes Technology Society.

  12. A Global Carbon Assimilation System using a modified EnKF assimilation method

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Zheng, X.; Chen, Z.; Dan, B.; Chen, J. M.; Yi, X.; Wang, L.; Wu, G.

    2014-10-01

    A Global Carbon Assimilation System based on the ensemble Kalman filter (GCAS-EK) is developed for assimilating atmospheric CO2 abundance data into an ecosystem model to simultaneously estimate the surface carbon fluxes and atmospheric CO2 distribution. This assimilation approach is based on the ensemble Kalman filter (EnKF), but with several new developments, including using analysis states to iteratively estimate ensemble forecast errors, and a maximum likelihood estimation of the inflation factors of the forecast and observation errors. The proposed assimilation approach is tested in observing system simulation experiments and then used to estimate the terrestrial ecosystem carbon fluxes and atmospheric CO2 distributions from 2002 to 2008. The results showed that this assimilation approach can effectively reduce the biases and uncertainties of the carbon fluxes simulated by the ecosystem model.
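
    A minimal sketch of the combination described here — an ensemble Kalman analysis step preceded by a maximum-likelihood fit of a multiplicative inflation factor to the innovation statistics — is given below. It is a generic toy, not the GCAS-EK code; the state dimension, observation operator and error covariances are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import multivariate_normal

def ml_inflation(ensemble, H, y, R):
    """Estimate a scalar inflation factor by maximizing the innovation likelihood."""
    xbar = ensemble.mean(axis=1, keepdims=True)
    anomalies = ensemble - xbar
    Pb = anomalies @ anomalies.T / (ensemble.shape[1] - 1)
    d = y - (H @ xbar).ravel()

    def neg_loglik(lam):
        cov = lam * (H @ Pb @ H.T) + R
        return -multivariate_normal.logpdf(d, mean=np.zeros(len(d)), cov=cov)

    return minimize_scalar(neg_loglik, bounds=(0.1, 10.0), method="bounded").x

def enkf_update(ensemble, H, y, R, rng):
    """Stochastic EnKF analysis step with ML-estimated multiplicative inflation."""
    lam = ml_inflation(ensemble, H, y, R)
    xbar = ensemble.mean(axis=1, keepdims=True)
    ensemble = xbar + np.sqrt(lam) * (ensemble - xbar)   # inflate the ensemble spread
    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    Pb = anomalies @ anomalies.T / (ensemble.shape[1] - 1)
    K = Pb @ H.T @ np.linalg.inv(H @ Pb @ H.T + R)
    perturbed_obs = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, ensemble.shape[1]).T
    return ensemble + K @ (perturbed_obs - H @ ensemble), lam

rng = np.random.default_rng(0)
n_state, n_ens = 5, 40
truth = rng.normal(size=n_state)
ensemble = truth[:, None] + 0.5 * rng.normal(size=(n_state, n_ens))
H = np.eye(2, n_state)                       # observe the first two state variables
R = 0.2 * np.eye(2)
y = H @ truth + rng.multivariate_normal(np.zeros(2), R)
analysis, lam = enkf_update(ensemble, H, y, R, rng)
print("estimated inflation factor:", round(float(lam), 3))
print("analysis mean (first two components):", np.round(analysis.mean(axis=1)[:2], 3))
```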

  13. Ramsay-Curve Item Response Theory for the Three-Parameter Logistic Item Response Model

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2008-01-01

    In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters of a unidimensional item response model using marginal maximum likelihood estimation. This study evaluates RC-IRT for the three-parameter logistic (3PL) model with comparisons to the normal model and to the empirical…

  14. Learn-as-you-go acceleration of cosmological parameter estimates

    NASA Astrophysics Data System (ADS)

    Aslanyan, Grigor; Easther, Richard; Price, Layne C.

    2015-09-01

    Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of ΛCDM posterior probabilities. The computation is significantly accelerated without a pre-defined training set and uncertainties in the posterior probabilities are subdominant to statistical fluctuations. We have obtained a speedup factor of 6.5 for Metropolis-Hastings and 3.5 for nested sampling. Finally, we discuss the general requirements for a credible error model and show how to update them on-the-fly.
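
    The learn-as-you-go idea can be caricatured in a few lines: cache exact likelihood evaluations, answer from a cheap emulator only when its own error estimate is small, and otherwise fall back to the exact call and grow the training set. The sketch below is not the Cosmo++ implementation; the stand-in likelihood, the nearest-neighbour emulator, and the error threshold are all assumptions.

```python
import numpy as np

def exact_loglike(theta):
    """Stand-in for an expensive likelihood evaluation (e.g., a Boltzmann-code call)."""
    return -0.5 * np.sum((theta - 0.3) ** 2) / 0.04

class LearnAsYouGoEmulator:
    """Nearest-neighbour emulator that falls back to the exact call when unsure."""

    def __init__(self, error_tol=0.5):
        self.thetas, self.values = [], []
        self.error_tol = error_tol
        self.exact_calls = 0

    def __call__(self, theta):
        if len(self.thetas) >= 3:
            pts = np.array(self.thetas)
            dists = np.linalg.norm(pts - theta, axis=1)
            order = np.argsort(dists)[:3]
            vals = np.array(self.values)[order]
            # The spread of the nearest neighbours doubles as a crude error estimate.
            if vals.std() < self.error_tol:
                return vals.mean()
        value = exact_loglike(theta)
        self.exact_calls += 1
        self.thetas.append(np.asarray(theta, dtype=float))
        self.values.append(value)
        return value

rng = np.random.default_rng(2)
emulator = LearnAsYouGoEmulator()
samples = [emulator(rng.normal(0.3, 0.05, size=2)) for _ in range(500)]
print(f"exact likelihood calls: {emulator.exact_calls} / 500")
```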

  15. Learn-as-you-go acceleration of cosmological parameter estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aslanyan, Grigor; Easther, Richard; Price, Layne C., E-mail: g.aslanyan@auckland.ac.nz, E-mail: r.easther@auckland.ac.nz, E-mail: lpri691@aucklanduni.ac.nz

    2015-09-01

    Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of ΛCDM posterior probabilities. The computation is significantly accelerated without a pre-defined training set and uncertainties in the posterior probabilities are subdominant to statistical fluctuations. We have obtained a speedup factor of 6.5 for Metropolis-Hastings and 3.5 for nested sampling. Finally, we discuss the general requirements for a credible error model and show how to update them on-the-fly.

  16. Towards a novel look on low-frequency climate reconstructions

    NASA Astrophysics Data System (ADS)

    Kamenik, Christian; Goslar, Tomasz; Hicks, Sheila; Barnekow, Lena; Huusko, Antti

    2010-05-01

    Information on low-frequency (millennial to sub-centennial) climate change is often derived from sedimentary archives, such as peat profiles or lake sediments. Usually, these archives have non-annual and varying time resolution. Their dating is mainly based on radionuclides, which provide probabilistic age-depth relationships with complex error structures. Dating uncertainties impede the interpretation of sediment-based climate reconstructions. They complicate the calculation of time-dependent rates. In most cases, they make any calibration in time impossible. Sediment-based climate proxies are therefore often presented as a single, best-guess time series without proper calibration and error estimation. Errors along time and dating errors that propagate into the calculation of time-dependent rates are neglected. Our objective is to overcome the aforementioned limitations by using a 'swarm' or 'ensemble' of reconstructions instead of a single best-guess. The novelty of our approach is to take into account age-depth uncertainties by permuting through a large number of potential age-depth relationships of the archive of interest. For each individual permutation we can then calculate rates, calibrate proxies in time, and reconstruct the climate-state variable of interest. From the resulting swarm of reconstructions, we can derive realistic estimates of even complex error structures. The likelihood of reconstructions is visualized by a grid of two-dimensional kernels that take into account probabilities along time and the climate-state variable of interest simultaneously. For comparison and regional synthesis, likelihoods can be scored against other independent climate time series.

  17. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

    PubMed

    Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

    2017-02-01

    In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero cell counts and some of them are "true zeros" indicating that the drug-adverse event pairs cannot occur; these are distinguished from the remaining zero counts, which are modeled zeros that simply indicate that the drug-adverse event pairs have not occurred yet or have not been reported yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, which are also called signals. The maximum likelihood estimates of the model parameters of zero-inflated Poisson model based likelihood ratio test are obtained using the expectation and maximization algorithm. The zero-inflated Poisson model based likelihood ratio test is also modified to handle the stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed zero-inflated Poisson model based likelihood ratio test method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test method performs similarly to the Poisson model based likelihood ratio test method when the estimated percentage of true zeros in the database is small. Both the zero-inflated Poisson model based likelihood ratio test and likelihood ratio test methods are applied to six selected drugs from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
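
    The likelihood machinery underlying the method can be illustrated with a small zero-inflated Poisson fit and a likelihood ratio statistic. The sketch below tests for zero inflation against a plain Poisson model on made-up counts; it is a simplified stand-in, not the paper's signal-detection statistic or its EM implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson, chi2

def zip_negloglik(params, counts):
    """Negative log-likelihood of a zero-inflated Poisson model."""
    logit_omega, log_lam = params
    omega = 1.0 / (1.0 + np.exp(-logit_omega))   # structural-zero probability
    lam = np.exp(log_lam)
    zero_ll = np.log(omega + (1.0 - omega) * np.exp(-lam))
    pos_ll = np.log(1.0 - omega) + poisson.logpmf(counts, lam)
    return -np.where(counts == 0, zero_ll, pos_ll).sum()

rng = np.random.default_rng(3)
# Hypothetical report counts: a mix of structural zeros and Poisson(2.5) counts.
counts = np.where(rng.random(500) < 0.4, 0, rng.poisson(2.5, size=500))

zip_fit = minimize(zip_negloglik, x0=[0.0, 0.0], args=(counts,), method="Nelder-Mead")
lam_pois = counts.mean()                            # plain-Poisson MLE
pois_ll = poisson.logpmf(counts, lam_pois).sum()
lrt = 2.0 * (-zip_fit.fun - pois_ll)
# The null puts omega on the boundary, so the chi-square(1) p-value is conservative.
print(f"LRT statistic: {lrt:.2f}, chi2(1) p-value: {chi2.sf(lrt, df=1):.2e}")
```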

  18. On Fitting a Multivariate Two-Part Latent Growth Model

    PubMed Central

    Xu, Shu; Blozis, Shelley A.; Vandewater, Elizabeth A.

    2017-01-01

    A 2-part latent growth model can be used to analyze semicontinuous data to simultaneously study change in the probability that an individual engages in a behavior, and if engaged, change in the behavior. This article uses a Monte Carlo (MC) integration algorithm to study the interrelationships between the growth factors of 2 variables measured longitudinally where each variable can follow a 2-part latent growth model. A SAS macro implementing Mplus is developed to estimate the model to take into account the sampling uncertainty of this simulation-based computational approach. A sample of time-use data is used to show how maximum likelihood estimates can be obtained using a rectangular numerical integration method and an MC integration method. PMID:29333054

  19. Adolescents Exiting Homelessness Over Two Years: The Risk Amplification and Abatement Model

    PubMed Central

    Milburn, Norweeta G.; Rice, Eric; Rotheram-Borus, Mary Jane; Mallett, Shelley; Rosenthal, Doreen; Batterham, Phillip; May, Susanne J.; Witkin, Andrea; Duan, Naihua

    2014-01-01

    The Risk Amplification and Abatement Model (RAAM) demonstrates that negative contact with socializing agents amplifies risk, while positive contact abates risk for homeless adolescents. To test this model, the likelihood of exiting homelessness and returning to familial housing at 2 years and stably exiting over time are examined with longitudinal data collected from 183 newly homeless adolescents followed over 2 years in Los Angeles, CA. In support of RAAM, unadjusted odds of exiting at 2 years and of stably exiting over 2 years revealed that engagement with pro-social peers, maternal social support, and continued school attendance all promoted exiting behaviors. Simultaneously, exposure to family violence and reliance on shelter services discouraged stably exiting behaviors. Implications for family-based interventions are proposed. PMID:25067896

  20. Model-based tomographic reconstruction of objects containing known components.

    PubMed

    Stayman, J Webster; Otake, Yoshito; Prince, Jerry L; Khanna, A Jay; Siewerdsen, Jeffrey H

    2012-10-01

    The likelihood of finding manufactured components (surgical tools, implants, etc.) within a tomographic field-of-view has been steadily increasing. One reason is the aging population and proliferation of prosthetic devices, such that more people undergoing diagnostic imaging have existing implants, particularly hip and knee implants. Another reason is that use of intraoperative imaging (e.g., cone-beam CT) for surgical guidance is increasing, wherein surgical tools and devices such as screws and plates are placed within or near to the target anatomy. When these components contain metal, the reconstructed volumes are likely to contain severe artifacts that adversely affect the image quality in tissues both near and far from the component. Because physical models of such components exist, there is a unique opportunity to integrate this knowledge into the reconstruction algorithm to reduce these artifacts. We present a model-based penalized-likelihood estimation approach that explicitly incorporates known information about component geometry and composition. The approach uses an alternating maximization method that jointly estimates the anatomy and the position and pose of each of the known components. We demonstrate that the proposed method can produce nearly artifact-free images even near the boundary of a metal implant in simulated vertebral pedicle screw reconstructions and even under conditions of substantial photon starvation. The simultaneous estimation of device pose also provides quantitative information on device placement that could be valuable to quality assurance and verification of treatment delivery.

  1. Estimation of a Ramsay-Curve Item Response Theory Model by the Metropolis-Hastings Robbins-Monro Algorithm. CRESST Report 834

    ERIC Educational Resources Information Center

    Monroe, Scott; Cai, Li

    2013-01-01

    In Ramsay curve item response theory (RC-IRT, Woods & Thissen, 2006) modeling, the shape of the latent trait distribution is estimated simultaneously with the item parameters. In its original implementation, RC-IRT is estimated via Bock and Aitkin's (1981) EM algorithm, which yields maximum marginal likelihood estimates. This method, however,…

  2. Estimation of a Ramsay-Curve Item Response Theory Model by the Metropolis-Hastings Robbins-Monro Algorithm

    ERIC Educational Resources Information Center

    Monroe, Scott; Cai, Li

    2014-01-01

    In Ramsay curve item response theory (RC-IRT) modeling, the shape of the latent trait distribution is estimated simultaneously with the item parameters. In its original implementation, RC-IRT is estimated via Bock and Aitkin's EM algorithm, which yields maximum marginal likelihood estimates. This method, however, does not produce the…

  3. Traffic environment and demographic factors affecting impaired driving and crashes

    PubMed Central

    Romano, Eduardo O.; Peck, Raymond C.; Voas, Robert B.

    2012-01-01

    Introduction: Data availability has forced researchers to examine separately the role of alcohol among drivers who crashed and drivers who did not crash. Such a separation fails to account fully for the transition from impaired driving to an alcohol-related crash. Method: In this study, we analyzed recent data to investigate how traffic-related environments, conditions, and drivers’ demographics shape the likelihood of a driver being either involved in a crash (alcohol impaired or not) or not involved in a crash (alcohol impaired or not). Our data, from a recent case–control study, included a comprehensive sampling of the drivers in nonfatal crashes and a matched set of comparison drivers in two U.S. locations. Multinomial logistic regression was applied to investigate the likelihood that a driver would crash or would not crash, either with a blood alcohol concentration (BAC)=.00 or with a BAC≥.05. Conclusions: To our knowledge, this study is the first to examine how different driver characteristics and environmental factors simultaneously contribute to alcohol use by crash-involved and non-crash-involved drivers. This effort calls attention to the need for research on the simultaneous roles played by all the factors that may contribute to motor vehicle crashes. PMID:22385743

  4. Efficient Robust Regression via Two-Stage Generalized Empirical Likelihood

    PubMed Central

    Bondell, Howard D.; Stefanski, Leonard A.

    2013-01-01

    Large- and finite-sample efficiency and resistance to outliers are the key goals of robust statistics. Although often not simultaneously attainable, we develop and study a linear regression estimator that comes close. Efficiency obtains from the estimator’s close connection to generalized empirical likelihood, and its favorable robustness properties are obtained by constraining the associated sum of (weighted) squared residuals. We prove maximum attainable finite-sample replacement breakdown point, and full asymptotic efficiency for normal errors. Simulation evidence shows that compared to existing robust regression estimators, the new estimator has relatively high efficiency for small sample sizes, and comparable outlier resistance. The estimator is further illustrated and compared to existing methods via application to a real data set with purported outliers. PMID:23976805

  5. Sequential interactions-in which one player plays first and another responds-promote cooperation in evolutionary-dynamical simulations of single-shot Prisoner's Dilemma and Snowdrift games.

    PubMed

    Laird, Robert A

    2018-09-07

    Cooperation is a central topic in evolutionary biology because (a) it is difficult to reconcile why individuals would act in a way that benefits others if such action is costly to themselves, and (b) it underpins many of the 'major transitions of evolution', making it essential for explaining the origins of successively higher levels of biological organization. Within evolutionary game theory, the Prisoner's Dilemma and Snowdrift games are the main theoretical constructs used to study the evolution of cooperation in dyadic interactions. In single-shot versions of these games, wherein individuals play each other only once, players typically act simultaneously rather than sequentially. Allowing one player to respond to the actions of its co-player-in the absence of any possibility of the responder being rewarded for cooperation or punished for defection, as in simultaneous or sequential iterated games-may seem to invite more incentive for exploitation and retaliation in single-shot games, compared to when interactions occur simultaneously, thereby reducing the likelihood that cooperative strategies can thrive. To the contrary, I use lattice-based, evolutionary-dynamical simulation models of single-shot games to demonstrate that under many conditions, sequential interactions have the potential to enhance unilaterally or mutually cooperative outcomes and increase the average payoff of populations, relative to simultaneous interactions-benefits that are especially prevalent in a spatially explicit context. This surprising result is attributable to the presence of conditional strategies that emerge in sequential games that can't occur in the corresponding simultaneous versions. Copyright © 2018 Elsevier Ltd. All rights reserved.
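
    A minimal caricature of this class of model is sketched below: strategies on a lattice encode a first move plus responses to cooperation and defection, neighbouring pairs play a single sequential round in both roles, and each site imitates its best-scoring neighbour. The payoff values, neighbourhood, and update rule are assumptions and do not reproduce the paper's exact setup.

```python
import numpy as np

# Payoffs for (my move, partner's move): 1 = cooperate, 0 = defect.
T, R, P, S = 5.0, 3.0, 1.0, 0.0
PAYOFF = {(1, 1): R, (1, 0): S, (0, 1): T, (0, 0): P}

def play_sequential(strategy_a, strategy_b):
    """One sequential round: A moves first, B responds; returns (payoff_a, payoff_b)."""
    first_a, resp_c_a, resp_d_a = strategy_a
    first_b, resp_c_b, resp_d_b = strategy_b
    move_a = first_a
    move_b = resp_c_b if move_a == 1 else resp_d_b
    return PAYOFF[(move_a, move_b)], PAYOFF[(move_b, move_a)]

def total_payoff(grid, x, y):
    """Payoff of site (x, y) against its four neighbours, playing both roles."""
    n = grid.shape[0]
    total = 0.0
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nbr = grid[(x + dx) % n, (y + dy) % n]
        a, _ = play_sequential(grid[x, y], nbr)      # focal player moves first
        _, b = play_sequential(nbr, grid[x, y])      # focal player responds
        total += a + b
    return total

def step(grid):
    """Synchronous imitation: each site copies its best-scoring neighbour (or itself)."""
    n = grid.shape[0]
    payoffs = np.array([[total_payoff(grid, x, y) for y in range(n)] for x in range(n)])
    new = grid.copy()
    for x in range(n):
        for y in range(n):
            best, best_strat = payoffs[x, y], grid[x, y]
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                xx, yy = (x + dx) % n, (y + dy) % n
                if payoffs[xx, yy] > best:
                    best, best_strat = payoffs[xx, yy], grid[xx, yy]
            new[x, y] = best_strat
    return new

rng = np.random.default_rng(4)
n = 30
grid = rng.integers(0, 2, size=(n, n, 3))            # (first move, reply to C, reply to D)
for _ in range(50):
    grid = step(grid)
print(f"fraction opening with cooperation after 50 steps: {grid[:, :, 0].mean():.2f}")
```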

  6. Mixed models for selection of Jatropha progenies with high adaptability and yield stability in Brazilian regions.

    PubMed

    Teodoro, P E; Bhering, L L; Costa, R D; Rocha, R B; Laviola, B G

    2016-08-19

    The aim of this study was to estimate genetic parameters via mixed models and simultaneously to select Jatropha progenies grown in three regions of Brazil that combine high adaptability and stability. From a previous phenotypic selection, three progeny tests were installed in 2008 in the municipalities of Planaltina-DF (Midwest), Nova Porteirinha-MG (Southeast), and Pelotas-RS (South). We evaluated 18 half-sib families in a randomized block design with three replications. Genetic parameters were estimated using restricted maximum likelihood/best linear unbiased prediction. Selection was based on the harmonic mean of the relative performance of genetic values method in three strategies considering: 1) performance in each environment (with interaction effect); 2) performance in the mean of environments (without interaction effect); and 3) simultaneous selection for grain yield, stability and adaptability. The accuracy obtained (91%) reveals excellent experimental quality and consequently safety and credibility in the selection of superior progenies for grain yield. The gain with the selection of the best five progenies was more than 20%, regardless of the selection strategy. Thus, based on the three selection strategies used in this study, the progenies 4, 11, and 3 (selected in all environments and the mean environment and by adaptability and phenotypic stability methods) are the most suitable for growing in the three regions evaluated.

  7. Hidden Markov model tracking of continuous gravitational waves from young supernova remnants

    NASA Astrophysics Data System (ADS)

    Sun, L.; Melatos, A.; Suvorova, S.; Moran, W.; Evans, R. J.

    2018-02-01

    Searches for persistent gravitational radiation from nonpulsating neutron stars in young supernova remnants are computationally challenging because of rapid stellar braking. We describe a practical, efficient, semicoherent search based on a hidden Markov model tracking scheme, solved by the Viterbi algorithm, combined with a maximum likelihood matched filter, the F statistic. The scheme is well suited to analyzing data from advanced detectors like the Advanced Laser Interferometer Gravitational Wave Observatory (Advanced LIGO). It can track rapid phase evolution from secular stellar braking and stochastic timing noise torques simultaneously without searching second- and higher-order derivatives of the signal frequency, providing an economical alternative to stack-slide-based semicoherent algorithms. One implementation tracks the signal frequency alone. A second implementation tracks the signal frequency and its first time derivative. It improves the sensitivity by a factor of a few upon the first implementation, but the cost increases by 2 to 3 orders of magnitude.
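
    The tracking step can be illustrated with a small Viterbi pass over a spectrogram-like matrix of per-bin log-likelihoods, with transitions that allow the frequency to wander by at most one bin per step. The sketch below uses synthetic scores instead of the F-statistic and is not the search pipeline described above.

```python
import numpy as np

def viterbi_track(log_likes, max_jump=1, jump_logprob=-2.0):
    """Most probable frequency-bin path through a matrix of per-step, per-bin scores.

    log_likes : (n_steps, n_bins) log-likelihood of signal presence per time step and bin
                (in a real search these would come from a matched filter such as the F-statistic).
    """
    n_steps, n_bins = log_likes.shape
    score = log_likes[0].copy()
    back = np.zeros((n_steps, n_bins), dtype=int)
    for t in range(1, n_steps):
        new_score = np.full(n_bins, -np.inf)
        for jump in range(-max_jump, max_jump + 1):
            # shifted[b] is the score of coming from bin b - jump at the previous step.
            shifted = np.roll(score, jump) + (0.0 if jump == 0 else jump_logprob)
            better = shifted > new_score
            new_score = np.where(better, shifted, new_score)
            back[t] = np.where(better, (np.arange(n_bins) - jump) % n_bins, back[t])
        score = new_score + log_likes[t]
    path = [int(np.argmax(score))]
    for t in range(n_steps - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Synthetic data: a slowly wandering tone buried in noise.
rng = np.random.default_rng(5)
n_steps, n_bins = 200, 64
true_bin = np.cumsum(rng.integers(-1, 2, size=n_steps)) % n_bins
log_likes = rng.normal(0.0, 1.0, size=(n_steps, n_bins))
log_likes[np.arange(n_steps), true_bin] += 3.0
path = viterbi_track(log_likes)
accuracy = np.mean(np.array(path) == true_bin)
print(f"fraction of time steps where the tracked bin matches the truth: {accuracy:.2f}")
```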

  8. The Problem of Dualism in Modern Western Medicine

    PubMed Central

    Gendle, Mathew H.

    2016-01-01

    Dualism is historically important in that it allowed medical practice to be divorced from church oversight. The reductionist approaches of modern Western medicine facilitate a dispassionate and mechanistic approach to patient care, and dualist views promoted by complementary and alternative medicine are also problematic. Behavioural disorders are multifactorially realizable and emerge apparently chaotically from interactions between internal physiological systems and the patient's environment and experiential history. Conceptualizations of behavioural disorders that are based on dualism deny the primacy of individual physiology in the generation of pathology and distract from therapies that are most likely to produce positive outcomes. Behavioural health professionals should adopt holistic models of patient care, but these models must be based on methodologies that emphasize radical emergence over the artificial separation of the “physical” and “mental.” This will allow for the humanistic practice of medicine while simultaneously maximizing the likelihood of treatment success. PMID:28031628

  9. MODEL-BASED CLUSTERING FOR CLASSIFICATION OF AQUATIC SYSTEMS AND DIAGNOSIS OF ECOLOGICAL STRESS

    EPA Science Inventory

    Clustering approaches were developed using the classification likelihood, the mixture likelihood, and also using a randomization approach with a model index. Using a clustering approach based on the mixture and classification likelihoods, we have developed an algorithm that...

  10. Competing Forces of Socioeconomic Development and Environmental Degradation on Health and Happiness for Different Income Groups in China.

    PubMed

    Gu, Lijuan; Rosenberg, Mark W; Zeng, Juxin

    2017-10-01

    China's rapid socioeconomic growth in recent years and the simultaneous increase in many forms of pollution are generating contradictory pictures of residents' well-being. This paper applies multilevel analysis to the 2013 China General Social Survey data on social development and health to understand this twofold phenomenon. Multilevel models are developed to investigate the impact of socioeconomic development and environmental degradation on self-reported health (SRH) and self-reported happiness (SRHP), differentiating among lower, middle, and higher income groups. The results of the logit multilevel analysis demonstrate that income, jobs, and education increased the likelihood of rating SRH and SRHP positively for the lower and middle groups but had little or no effect on the higher income group. Having basic health insurance had an insignificant effect on health but increased the likelihood of happiness among the lower income group. Provincial-level pollutants were associated with a higher likelihood of good health for all income groups, and community-level industrial pollutants increased the likelihood of good health for the lower and middle income groups. Measures of community-level pollution were robust predictors of the likelihood of unhappiness among the lower and middle income groups. Environmental hazards had a mediating effect on the relationship between socioeconomic development and health, and socioeconomic development strengthened the association between environmental hazards and happiness. These outcomes indicate that the complex interconnections among socioeconomic development and environmental degradation have differential effects on well-being among different income groups in China.

  11. The Atacama Cosmology Telescope: Likelihood for Small-Scale CMB Data

    NASA Technical Reports Server (NTRS)

    Dunkley, J.; Calabrese, E.; Sievers, J.; Addison, G. E.; Battaglia, N.; Battistelli, E. S.; Bond, J. R.; Das, S.; Devlin, M. J.; Dunner, R.; hide

    2013-01-01

    The Atacama Cosmology Telescope has measured the angular power spectra of microwave fluctuations to arcminute scales at frequencies of 148 and 218 GHz, from three seasons of data. At small scales the fluctuations in the primordial Cosmic Microwave Background (CMB) become increasingly obscured by extragalactic foregrounds and secondary CMB signals. We present results from a nine-parameter model describing these secondary effects, including the thermal and kinematic Sunyaev-Zel'dovich (tSZ and kSZ) power; the clustered and Poisson-like power from Cosmic Infrared Background (CIB) sources, and their frequency scaling; the tSZ-CIB correlation coefficient; the extragalactic radio source power; and thermal dust emission from Galactic cirrus in two different regions of the sky. In order to extract cosmological parameters, we describe a likelihood function for the ACT data, fitting this model to the multi-frequency spectra in the multipole range 500 < l < 10000. We extend the likelihood to include spectra from the South Pole Telescope at frequencies of 95, 150, and 220 GHz. Accounting for different radio source levels and Galactic cirrus emission, the same model provides an excellent fit to both datasets simultaneously, with χ²/dof = 675/697 for ACT and 96/107 for SPT. We then use the multi-frequency likelihood to estimate the CMB power spectrum from ACT in bandpowers, marginalizing over the secondary parameters. This provides a simplified 'CMB-only' likelihood in the range 500 < l < 3500 for use in cosmological parameter estimation.

  12. 49 CFR Appendix A to Part 211 - Statement of Agency Policy Concerning Waivers Related to Shared Use of Trackage or Rights-of-Way...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... not designed to be used in situations where there is a reasonable likelihood of a collision with much... rail crossing at grade, a shared method of train control, or shared highway-rail grade crossings. 4.... You should explain the nature of such simultaneous joint use, the system of train control, the...

  13. PBOOST: a GPU-based tool for parallel permutation tests in genome-wide association studies.

    PubMed

    Yang, Guangyuan; Jiang, Wei; Yang, Qiang; Yu, Weichuan

    2015-05-01

    The importance of testing associations allowing for interactions has been demonstrated by Marchini et al. (2005). A fast method detecting associations allowing for interactions has been proposed by Wan et al. (2010a). The method is based on a likelihood ratio test with the assumption that the statistic follows the χ² distribution. Many single nucleotide polymorphism (SNP) pairs with significant associations allowing for interactions have been detected using their method. However, the assumption of the χ² test requires the expected values in each cell of the contingency table to be at least five. This assumption is violated in some identified SNP pairs. In this case, the likelihood ratio test may not be applicable any more. Permutation test is an ideal approach to checking the P-values calculated in likelihood ratio test because of its non-parametric nature. The P-values of SNP pairs having significant associations with disease are always extremely small. Thus, we need a huge number of permutations to achieve correspondingly high resolution for the P-values. In order to investigate whether the P-values from likelihood ratio tests are reliable, a fast permutation tool to accomplish a large number of permutations is desirable. We developed a permutation tool named PBOOST. It is based on GPU with highly reliable P-value estimation. By using simulation data, we found that the P-values from likelihood ratio tests will have relative error of >100% when 50% cells in the contingency table have expected count less than five or when there is zero expected count in any of the contingency table cells. In terms of speed, PBOOST completed 10⁷ permutations for a single SNP pair from the Wellcome Trust Case Control Consortium (WTCCC) genome data (Wellcome Trust Case Control Consortium, 2007) within 1 min on a single Nvidia Tesla M2090 device, while it took 60 min in a single CPU Intel Xeon E5-2650 to finish the same task. More importantly, when simultaneously testing 256 SNP pairs for 10⁷ permutations, our tool took only 5 min, while the CPU program took 10 h. By permuting on a GPU cluster consisting of 40 nodes, we completed 10¹² permutations for all 280 SNP pairs reported with P-values smaller than 1.6 × 10⁻¹² in the WTCCC datasets in 1 week. The source code and sample data are available at http://bioinformatics.ust.hk/PBOOST.zip. gyang@ust.hk; eeyu@ust.hk Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
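
    The logic of calibrating a contingency-table likelihood ratio statistic by permuting case/control labels can be sketched on the CPU as below. This is a plain NumPy illustration, not PBOOST's GPU kernel, and the simulated genotypes, the 2 x 9 table statistic, and the permutation count are assumptions.

```python
import numpy as np

def g_statistic(table):
    """Likelihood ratio (G) statistic for independence in a 2 x k contingency table."""
    table = table.astype(float)
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
    mask = table > 0
    return 2.0 * (table[mask] * np.log(table[mask] / expected[mask])).sum()

def permutation_pvalue(joint_genotype, phenotype, n_perm=10_000, seed=0):
    """Permute case/control labels to calibrate the observed G statistic."""
    rng = np.random.default_rng(seed)

    def table(pheno):
        t = np.zeros((2, 9))
        np.add.at(t, (pheno, joint_genotype), 1)
        return t

    observed = g_statistic(table(phenotype))
    exceed = 0
    for _ in range(n_perm):
        exceed += g_statistic(table(rng.permutation(phenotype))) >= observed
    return observed, (exceed + 1) / (n_perm + 1)

rng = np.random.default_rng(6)
n = 2000
snp1, snp2 = rng.integers(0, 3, size=n), rng.integers(0, 3, size=n)
joint = 3 * snp1 + snp2                                # 9 two-locus genotype classes
risk = 0.3 + 0.25 * ((snp1 == 2) & (snp2 == 2))        # weak interaction effect
phenotype = (rng.random(n) < risk).astype(int)
stat, pval = permutation_pvalue(joint, phenotype, n_perm=2000)
print(f"G statistic: {stat:.2f}, permutation p-value: {pval:.4f}")
```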

  14. Local multiplicity adjustment for the spatial scan statistic using the Gumbel distribution.

    PubMed

    Gangnon, Ronald E

    2012-03-01

    The spatial scan statistic is an important and widely used tool for cluster detection. It is based on the simultaneous evaluation of the statistical significance of the maximum likelihood ratio test statistic over a large collection of potential clusters. In most cluster detection problems, there is variation in the extent of local multiplicity across the study region. For example, using a fixed maximum geographic radius for clusters, urban areas typically have many overlapping potential clusters, whereas rural areas have relatively few. The spatial scan statistic does not account for local multiplicity variation. We describe a previously proposed local multiplicity adjustment based on a nested Bonferroni correction and propose a novel adjustment based on a Gumbel distribution approximation to the distribution of a local scan statistic. We compare the performance of all three statistics in terms of power and a novel unbiased cluster detection criterion. These methods are then applied to the well-known New York leukemia dataset and a Wisconsin breast cancer incidence dataset. © 2011, The International Biometric Society.
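
    The Gumbel device itself is easy to sketch: simulate null replicates of the local maximum log-likelihood ratio, fit a Gumbel distribution to them, and read off an adjusted p-value for the observed maximum. The example below uses a toy one-dimensional Poisson scan with invented expected counts and window sizes rather than a real geographic dataset.

```python
import numpy as np
from scipy.stats import gumbel_r

def local_scan_max(counts, expected, windows):
    """Maximum Kulldorff-style Poisson log-likelihood ratio over candidate windows."""
    total_c, total_e = counts.sum(), expected.sum()
    best = 0.0
    for lo, hi in windows:
        c, e = counts[lo:hi].sum(), expected[lo:hi].sum()
        out_c, out_e = total_c - c, total_e - e
        if c > 0 and out_c > 0 and c / e > out_c / out_e:   # elevated-rate windows only
            llr = (c * np.log(c / e) + out_c * np.log(out_c / out_e)
                   - total_c * np.log(total_c / total_e))
            best = max(best, llr)
    return best

rng = np.random.default_rng(7)
n_regions = 50
expected = np.full(n_regions, 4.0)
windows = [(i, j) for i in range(n_regions) for j in range(i + 1, min(i + 11, n_regions + 1))]

# Null replicates of the local maximum, then a Gumbel fit to its distribution.
null_max = np.array([local_scan_max(rng.poisson(expected), expected, windows)
                     for _ in range(500)])
loc, scale = gumbel_r.fit(null_max)

observed_counts = rng.poisson(expected)
observed_counts[20:25] += rng.poisson(6.0, size=5)      # inject a cluster
obs_stat = local_scan_max(observed_counts, expected, windows)
print(f"observed max LLR: {obs_stat:.2f}")
print(f"Gumbel-approximated p-value: {gumbel_r.sf(obs_stat, loc, scale):.4f}")
print(f"Monte Carlo p-value: {(np.sum(null_max >= obs_stat) + 1) / (len(null_max) + 1):.4f}")
```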

  15. Local multiplicity adjustment for the spatial scan statistic using the Gumbel distribution

    PubMed Central

    Gangnon, Ronald E.

    2011-01-01

    The spatial scan statistic is an important and widely used tool for cluster detection. It is based on the simultaneous evaluation of the statistical significance of the maximum likelihood ratio test statistic over a large collection of potential clusters. In most cluster detection problems, there is variation in the extent of local multiplicity across the study region. For example, using a fixed maximum geographic radius for clusters, urban areas typically have many overlapping potential clusters, while rural areas have relatively few. The spatial scan statistic does not account for local multiplicity variation. We describe a previously proposed local multiplicity adjustment based on a nested Bonferroni correction and propose a novel adjustment based on a Gumbel distribution approximation to the distribution of a local scan statistic. We compare the performance of all three statistics in terms of power and a novel unbiased cluster detection criterion. These methods are then applied to the well-known New York leukemia dataset and a Wisconsin breast cancer incidence dataset. PMID:21762118

  16. Predicting likelihood of seeking help through the employee assistance program among salaried and union hourly employees.

    PubMed

    Delaney, W; Grube, J W; Ames, G M

    1998-03-01

    This research investigated belief, social support and background predictors of employee likelihood to use an Employee Assistance Program (EAP) for a drinking problem. An anonymous cross-sectional survey was administered in the home. Bivariate analyses and simultaneous equations path analysis were used to explore a model of EAP use. Survey and ethnographic research were conducted in a unionized heavy machinery manufacturing plant in the central states of the United States. A random sample of 852 hourly and salaried employees was selected. In addition to background variables, measures included: likelihood of going to an EAP for a drinking problem, belief the EAP can help, social support for the EAP from co-workers/others, belief that EAP use will harm employment, and supervisor encourages the EAP for potential drinking problems. Belief in EAP efficacy directly increased the likelihood of going to an EAP. Greater perceived social support and supervisor encouragement increased the likelihood of going to an EAP both directly and indirectly through perceived EAP efficacy. Black and union hourly employees were more likely to say they would use an EAP. Males and those who reported drinking during working hours were less likely to say they would use an EAP for a drinking problem. EAP beliefs and social support have significant effects on likelihood to go to an EAP for a drinking problem. EAPs may wish to focus their efforts on creating an environment where there is social support from coworkers and encouragement from supervisors for using EAP services. Union networks and team members have an important role to play in addition to conventional supervisor intervention.

  17. Likelihood-based gene annotations for gap filling and quality assessment in genome-scale metabolic models

    DOE PAGES

    Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; ...

    2014-10-16

    Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface.
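
    The first step described here — turning homology scores into annotation likelihoods and combining gene-level likelihoods into a reaction likelihood — can be caricatured as below. The normalization, the noisy-OR style combination, and all scores and EC numbers are invented for illustration; this is not the KBase implementation.

```python
import numpy as np

def annotation_likelihoods(bit_scores, best_possible, unknown_weight=0.1):
    """Turn homology bit scores for one gene's candidate functions into likelihoods.

    Scores are scaled by the gene's self-alignment score and normalized across the
    candidates plus a pseudo-candidate representing "some unknown function".
    """
    scaled = {fn: score / best_possible for fn, score in bit_scores.items()}
    total = sum(scaled.values()) + unknown_weight
    return {fn: val / total for fn, val in scaled.items()}

def reaction_likelihood(gene_likelihoods):
    """Probability that at least one associated gene truly carries the function (noisy OR)."""
    gene_likelihoods = np.asarray(gene_likelihoods)
    return 1.0 - np.prod(1.0 - gene_likelihoods)

# Hypothetical homology evidence for two genes and one candidate reaction.
gene_a = annotation_likelihoods({"EC 2.7.1.1": 310.0, "EC 2.7.1.2": 150.0}, best_possible=420.0)
gene_b = annotation_likelihoods({"EC 2.7.1.1": 95.0, "EC 5.3.1.9": 260.0}, best_possible=400.0)
rxn_like = reaction_likelihood([gene_a["EC 2.7.1.1"], gene_b["EC 2.7.1.1"]])
print("gene A annotation likelihoods:", {k: round(v, 2) for k, v in gene_a.items()})
print("reaction likelihood for the EC 2.7.1.1 reaction:", round(rxn_like, 2))
```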

  18. Likelihood-Based Gene Annotations for Gap Filling and Quality Assessment in Genome-Scale Metabolic Models

    PubMed Central

    Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; Chia, Nicholas; Price, Nathan D.

    2014-01-01

    Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface. PMID:25329157

  19. Systems identification using a modified Newton-Raphson method: A FORTRAN program

    NASA Technical Reports Server (NTRS)

    Taylor, L. W., Jr.; Iliff, K. W.

    1972-01-01

    A FORTRAN program is offered which computes a maximum likelihood estimate of the parameters of any linear, constant coefficient, state space model. For the case considered, the maximum likelihood estimate can be identical to that which minimizes simultaneously the weighted mean square difference between the computed and measured response of a system and the weighted square of the difference between the estimated and a priori parameter values. A modified Newton-Raphson or quasilinearization method is used to perform the minimization which typically requires several iterations. A starting technique is used which insures convergence for any initial values of the unknown parameters. The program and its operation are described in sufficient detail to enable the user to apply the program to his particular problem with a minimum of difficulty.
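
    The estimation scheme the program implements can be illustrated for a scalar discrete-time system: a Gauss-Newton (modified Newton-Raphson) iteration minimizes the weighted squared output error, which coincides with the maximum likelihood estimate under Gaussian measurement noise. The sketch below is written in Python rather than FORTRAN, uses finite-difference sensitivities, and adds step-halving as a simple safeguard; the system, noise level, and weights are assumptions.

```python
import numpy as np

def simulate(params, u, x0=0.0):
    """Response of the scalar system x[k+1] = a*x[k] + b*u[k], observed as y[k] = x[k]."""
    a, b = params
    x = np.empty(len(u))
    x[0] = x0
    for k in range(len(u) - 1):
        x[k + 1] = a * x[k] + b * u[k]
    return x

def gauss_newton(y, u, weights, params, n_iter=15, eps=1e-6):
    """Modified Newton-Raphson (Gauss-Newton) minimization of the weighted output error."""
    params = np.asarray(params, dtype=float)

    def cost(p):
        r = y - simulate(p, u)
        return 0.5 * np.sum(weights * r * r)

    for _ in range(n_iter):
        resid = y - simulate(params, u)
        # Output sensitivities with respect to each parameter, by finite differences.
        jac = np.column_stack([
            (simulate(params + eps * np.eye(len(params))[i], u) - simulate(params, u)) / eps
            for i in range(len(params))
        ])
        step = np.linalg.solve(jac.T @ (weights[:, None] * jac), jac.T @ (weights * resid))
        # Step-halving safeguards the update against overshooting.
        scale = 1.0
        while cost(params + scale * step) > cost(params) and scale > 1e-4:
            scale *= 0.5
        params = params + scale * step
    return params

rng = np.random.default_rng(8)
u = rng.normal(size=300)
true_params = np.array([0.8, 0.5])
y = simulate(true_params, u) + 0.05 * rng.normal(size=300)
weights = np.full(300, 1.0 / 0.05 ** 2)           # inverse measurement variance
estimate = gauss_newton(y, u, weights, params=[0.2, 0.2])
print("estimated (a, b):", np.round(estimate, 3))
```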

  20. Empirical Profiles of Alcohol and Marijuana Use, Drugged Driving, and Risk Perceptions.

    PubMed

    Arterberry, Brooke J; Treloar, Hayley; McCarthy, Denis M

    2017-11-01

    The present study sought to inform models of risk for drugged driving through empirically identifying patterns of marijuana use, alcohol use, and related driving behaviors. Perceived dangerousness and consequences of drugged driving were evaluated as putative influences on risk patterns. We used latent profile analysis of survey responses from 897 college students to identify patterns of substance use and drugged driving. We tested the hypotheses that low perceived danger and low perceived likelihood of negative consequences of drugged driving would identify individuals with higher-risk patterns. Findings from the latent profile analysis indicated that a four-profile model provided the best model fit. Low-level engagers had low rates of substance use and drugged driving. Alcohol-centric engagers had higher rates of alcohol use but low rates of marijuana/simultaneous use and low rates of driving after substance use. Concurrent engagers had higher rates of marijuana and alcohol use, simultaneous use, and related driving behaviors, but marijuana-centric/simultaneous engagers had the highest rates of marijuana use, co-use, and related driving behaviors. Those with higher perceived danger of driving while high were more likely to be in the low-level, alcohol-centric, or concurrent engagers' profiles; individuals with higher perceived likelihood of consequences of driving while high were more likely to be in the low-level engagers group. Findings suggested that college students' perceived dangerousness of driving after using marijuana had greater influence on drugged driving behaviors than alcohol-related driving risk perceptions. These results support targeting marijuana-impaired driving risk perceptions in young adult intervention programs.

  1. Detecting changes in ultrasound backscattered statistics by using Nakagami parameters: Comparisons of moment-based and maximum likelihood estimators.

    PubMed

    Lin, Jen-Jen; Cheng, Jung-Yu; Huang, Li-Fei; Lin, Ying-Hsiu; Wan, Yung-Liang; Tsui, Po-Hsiang

    2017-05-01

    The Nakagami distribution is a useful approximation to the statistics of ultrasound backscattered signals for tissue characterization. The choice of estimator can affect the Nakagami parameter's ability to detect changes in backscattered statistics. In particular, the moment-based estimator (MBE) and maximum likelihood estimator (MLE) are the two primary methods used to estimate the Nakagami parameters of ultrasound signals. This study explored the effects of the MBE and different MLE approximations on Nakagami parameter estimation. Ultrasound backscattered signals of different scatterer number densities were generated using a simulation model, and phantom experiments and measurements of human liver tissues were also conducted to acquire real backscattered echoes. Envelope signals were employed to estimate the Nakagami parameters using the MBE, the first- and second-order approximations of the MLE (MLE1 and MLE2, respectively), and the Greenwood approximation (MLEgw) for comparison. The simulation results demonstrated that, compared with the MBE and MLE1, the MLE2 and MLEgw enabled more stable parameter estimation with small sample sizes. Notably, the required data length of the envelope signal was 3.6 times the pulse length. The phantom and tissue measurement results also showed that the Nakagami parameters estimated using the MLE2 and MLEgw could simultaneously differentiate various scatterer concentrations with lower standard deviations and reliably reflect physical meanings associated with the backscattered statistics. Therefore, the MLE2 and MLEgw are suggested as estimators for the development of Nakagami-based methodologies for ultrasound tissue characterization. Copyright © 2017 Elsevier B.V. All rights reserved.
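
    A brief sketch of the moment-based estimator next to a numerical maximum likelihood fit. The MLE here is scipy's generic numerical fit, standing in for the MLE1/MLE2/Greenwood approximations studied in the paper; the simulated envelope parameters are arbitrary.

```python
import numpy as np
from scipy import stats

# Moment-based Nakagami estimator versus a numerical maximum likelihood fit
# (scipy's generic fit, not the paper's MLE1/MLE2/Greenwood approximations).

def nakagami_mbe(envelope):
    """MBE: m = E[x^2]^2 / Var(x^2), Omega = E[x^2]."""
    x2 = np.asarray(envelope, dtype=float) ** 2
    omega = x2.mean()
    return omega ** 2 / x2.var(), omega

true_m, true_omega = 0.8, 1.0                         # illustrative values
env = stats.nakagami.rvs(true_m, scale=np.sqrt(true_omega), size=2000,
                         random_state=np.random.default_rng(1))

m_mbe, omega_mbe = nakagami_mbe(env)
m_mle, _, scale = stats.nakagami.fit(env, floc=0)     # numerical MLE, loc fixed at 0
print("MBE: m=%.3f Omega=%.3f" % (m_mbe, omega_mbe))
print("MLE: m=%.3f Omega=%.3f" % (m_mle, scale ** 2))
```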

  2. A Selective Overview of Variable Selection in High Dimensional Feature Space

    PubMed Central

    Fan, Jianqing

    2010-01-01

    High dimensional statistical problems arise from diverse fields of scientific research and technological development. Variable selection plays a pivotal role in contemporary statistical learning and scientific discoveries. The traditional idea of best subset selection methods, which can be regarded as a specific form of penalized likelihood, is computationally too expensive for many modern statistical applications. Other forms of penalized likelihood methods have been successfully developed over the last decade to cope with high dimensionality. They have been widely applied for simultaneously selecting important variables and estimating their effects in high dimensional statistical inference. In this article, we present a brief account of the recent developments of theory, methods, and implementations for high dimensional variable selection. What limits of the dimensionality such methods can handle, what the role of penalty functions is, and what the statistical properties are rapidly drive the advances of the field. The properties of non-concave penalized likelihood and its roles in high dimensional statistical modeling are emphasized. We also review some recent advances in ultra-high dimensional variable selection, with emphasis on independence screening and two-scale methods. PMID:21572976
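
    As a concrete illustration of penalized likelihood performing selection and estimation simultaneously, the sketch below fits an l1-penalized (lasso) regression to simulated high dimensional data; the penalty weight and data dimensions are arbitrary choices for this example.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Penalized likelihood in action: an l1 (lasso) penalty sets most coefficients
# exactly to zero, so variable selection and estimation happen in a single fit.
# Data, dimensions, and the penalty weight are arbitrary for this illustration.

rng = np.random.default_rng(9)
n, p = 100, 500                                # high dimensional: p >> n
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, 1.0, -1.0]         # only 5 true signals
y = X @ beta + rng.standard_normal(n)

fit = Lasso(alpha=0.15).fit(X, y)              # alpha would normally be tuned
selected = np.flatnonzero(fit.coef_)
print("selected variables:", selected)
print("their estimates:", np.round(fit.coef_[selected], 2))
```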

  3. Fitting models of continuous trait evolution to incompletely sampled comparative data using approximate Bayesian computation.

    PubMed

    Slater, Graham J; Harmon, Luke J; Wegmann, Daniel; Joyce, Paul; Revell, Liam J; Alfaro, Michael E

    2012-03-01

    In recent years, a suite of methods has been developed to fit multiple rate models to phylogenetic comparative data. However, most methods have limited utility at broad phylogenetic scales because they typically require complete sampling of both the tree and the associated phenotypic data. Here, we develop and implement a new, tree-based method called MECCA (Modeling Evolution of Continuous Characters using ABC) that uses a hybrid likelihood/approximate Bayesian computation (ABC)-Markov-Chain Monte Carlo approach to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. We demonstrate via simulation that MECCA has considerable power to choose among single versus multiple evolutionary rate models, and thus can be used to test hypotheses about changes in the rate of trait evolution across an incomplete tree of life. We finally apply MECCA to an empirical example of body size evolution in carnivores, and show that there is no evidence for an elevated rate of body size evolution in the pinnipeds relative to terrestrial carnivores. ABC approaches can provide a useful alternative set of tools for future macroevolutionary studies where likelihood-dependent approaches are lacking. © 2011 The Author(s). Evolution© 2011 The Society for the Study of Evolution.

  4. Model selection and model averaging in phylogenetics: advantages of akaike information criterion and bayesian approaches over likelihood ratio tests.

    PubMed

    Posada, David; Buckley, Thomas R

    2004-10-01

    Model selection is a topic of special relevance in molecular phylogenetics that affects many, if not all, stages of phylogenetic inference. Here we discuss some fundamental concepts and techniques of model selection in the context of phylogenetics. We start by reviewing different aspects of the selection of substitution models in phylogenetics from a theoretical, philosophical and practical point of view, and summarize this comparison in table format. We argue that the most commonly implemented model selection approach, the hierarchical likelihood ratio test, is not the optimal strategy for model selection in phylogenetics, and that approaches like the Akaike Information Criterion (AIC) and Bayesian methods offer important advantages. In particular, the latter two methods are able to simultaneously compare multiple nested or nonnested models, assess model selection uncertainty, and allow for the estimation of phylogenies and model parameters using all available models (model-averaged inference or multimodel inference). We also describe how the relative importance of the different parameters included in substitution models can be depicted. To illustrate some of these points, we have applied AIC-based model averaging to 37 mitochondrial DNA sequences from the subgenus Ohomopterus (genus Carabus) ground beetles described by Sota and Vogler (2001).
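
    A minimal sketch of the AIC-based model averaging step: convert per-model AIC values into Akaike weights. The model names and AIC numbers are placeholders, not values from the Carabus data set.

```python
import numpy as np

# Akaike weights for AIC-based model averaging. The model names and AIC values
# below are placeholders, not results from the Carabus mtDNA analysis.

def akaike_weights(aics):
    delta = np.asarray(aics, dtype=float) - np.min(aics)   # AIC differences
    w = np.exp(-0.5 * delta)
    return w / w.sum()

aics = {"JC69": 10230.4, "HKY85": 10012.9, "GTR+G": 10001.2}
for name, w in zip(aics, akaike_weights(list(aics.values()))):
    print(f"{name}: Akaike weight = {w:.3f}")
# A model-averaged estimate of a shared parameter is sum_i w_i * estimate_i.
```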

  5. Parameter identifiability and regional calibration for reservoir inflow prediction

    NASA Astrophysics Data System (ADS)

    Kolberg, Sjur; Engeland, Kolbjørn; Tøfte, Lena S.; Bruland, Oddbjørn

    2013-04-01

    The large hydropower producer Statkraft is currently testing regional, distributed models for operational reservoir inflow prediction. The need for simultaneous forecasts and consistent updating in a large number of catchments supports the shift from catchment-oriented to regional models. Low-quality naturalized inflow series in the reservoir catchments further encourages the use of donor catchments and regional simulation for calibration purposes. MCMC based parameter estimation (the Dream algorithm; Vrugt et al, 2009) is adapted to regional parameter estimation, and implemented within the open source ENKI framework. The likelihood is based on the concept of effectively independent number of observations, spatially as well as in time. Marginal and conditional (around an optimum) parameter distributions for each catchment may be extracted, even though the MCMC algorithm itself is guided only by the regional likelihood surface. Early results indicate that the average performance loss associated with regional calibration (difference in Nash-Sutcliffe R2 between regionally and locally optimal parameters) is in the range of 0.06. The importance of the seasonal snow storage and melt in Norwegian mountain catchments probably contributes to the high degree of similarity among catchments. The evaluation continues for several regions, focusing on posterior parameter uncertainty and identifiability. Vrugt, J. A., C. J. F. ter Braak, C. G. H. Diks, B. A. Robinson, J. M. Hyman and D. Higdon: Accelerating Markov Chain Monte Carlo Simulation by Differential Evolution with Self-Adaptive Randomized Subspace Sampling. Int. J. of nonlinear sciences and numerical simulation 10, 3, 273-290, 2009.

  6. A likelihood-based biostatistical model for analyzing consumer movement in simultaneous choice experiments.

    PubMed

    Zeilinger, Adam R; Olson, Dawn M; Andow, David A

    2014-08-01

    Consumer feeding preference among resource choices has critical implications for basic ecological and evolutionary processes, and can be highly relevant to applied problems such as ecological risk assessment and invasion biology. Within consumer choice experiments, also known as feeding preference or cafeteria experiments, measures of relative consumption and measures of consumer movement can provide distinct and complementary insights into the strength, causes, and consequences of preference. Despite the distinct value of inferring preference from measures of consumer movement, rigorous and biologically relevant analytical methods are lacking. We describe a simple, likelihood-based, biostatistical model for analyzing the transient dynamics of consumer movement in a paired-choice experiment. With experimental data consisting of repeated discrete measures of consumer location, the model can be used to estimate constant consumer attraction and leaving rates for two food choices, and differences in choice-specific attraction and leaving rates can be tested using model selection. The model enables calculation of transient and equilibrial probabilities of consumer-resource association, which could be incorporated into larger scale movement models. We explore the effect of experimental design on parameter estimation through stochastic simulation and describe methods to check that data meet model assumptions. Using a dataset of modest sample size, we illustrate the use of the model to draw inferences on consumer preference as well as underlying behavioral mechanisms. Finally, we include a user's guide and computer code scripts in R to facilitate use of the model by other researchers.
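
    A rough sketch of the kind of likelihood described above, under the assumption of a three-state continuous-time Markov chain (neither choice, choice A, choice B) observed at equally spaced times; the state definitions, the absence of direct A-to-B moves, and the example data are assumptions of this illustration, not the published model or R code.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

# Rough sketch: likelihood of repeated discrete location observations under a
# three-state continuous-time Markov chain (0 = neither, 1 = choice A,
# 2 = choice B). Rate structure, data, and time step are assumptions.

def rate_matrix(log_rates):
    aA, aB, lA, lB = np.exp(log_rates)          # attraction and leaving rates
    return np.array([[-(aA + aB), aA,   aB],
                     [lA,         -lA,  0.0],
                     [lB,          0.0, -lB]])

def neg_log_lik(log_rates, states, dt=1.0):
    P = expm(rate_matrix(log_rates) * dt)       # transition probabilities over dt
    return -sum(np.log(max(P[s0, s1], 1e-12))
                for s0, s1 in zip(states[:-1], states[1:]))

states = [0, 0, 1, 1, 1, 0, 2, 2, 1, 1, 1, 1]   # hypothetical observed locations
fit = minimize(neg_log_lik, x0=np.zeros(4), args=(states,), method="Nelder-Mead")
print("estimated rates (aA, aB, lA, lB):", np.exp(fit.x).round(3))
```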

  7. Biological Warfare Agents, Toxins, Vectors and Pests as Biological Terrorism Agents

    DTIC Science & Technology

    2003-07-01

    number of positive answers. For the criterion "no effective prophylaxis or therapy," a positive answer signifies the absence of effective ...likelihood that the agent will be used. There is no effective prophylaxis or therapy for the bulk of the listed agents and toxins if used as...difficult to imagine what mass-vaccination, perhaps simultaneously against more than one disease, would look like. Toxins are effective and

  8. Cosmological parameters from a re-analysis of the WMAP 7 year low-resolution maps

    NASA Astrophysics Data System (ADS)

    Finelli, F.; De Rosa, A.; Gruppuso, A.; Paoletti, D.

    2013-06-01

    Cosmological parameters from Wilkinson Microwave Anisotropy Probe (WMAP) 7 year data are re-analysed by substituting a pixel-based likelihood estimator for the one delivered publicly by the WMAP team. Our pixel-based estimator handles intensity and polarization exactly and jointly, allowing us to use low-resolution maps and noise covariance matrices in T, Q, U at the same resolution, which in this work is 3.6°. We describe the features and performance of the code implementing our pixel-based likelihood estimator. We perform a battery of tests applying our pixel-based likelihood routine to the publicly available WMAP low-resolution foreground-cleaned products, in combination with the WMAP high-ℓ likelihood, and report the differences in cosmological parameters relative to those evaluated with the full public WMAP likelihood package. The differences are due not only to the treatment of polarization, but also to the marginalization over monopole and dipole uncertainties present in the WMAP pixel likelihood code for temperature. The central credible values of the cosmological parameters change by less than 1σ with respect to the evaluation by the full WMAP 7 year likelihood code, with the largest difference being a shift to smaller values of the scalar spectral index nS.
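
    A bare-bones sketch of what a pixel-based Gaussian likelihood evaluation looks like: a joint multivariate normal over the map pixels with a signal-plus-noise covariance. The map size, covariance, and data below are random stand-ins, not WMAP products.

```python
import numpy as np

# Sketch of a pixel-based likelihood: a multivariate Gaussian over the map
# pixels with a signal(theory) + noise covariance, evaluated jointly for the
# Stokes components. Map size, covariance, and data are random stand-ins.

def pixel_loglike(data, cov):
    _, logdet = np.linalg.slogdet(cov)
    chi2 = data @ np.linalg.solve(cov, data)
    return -0.5 * (chi2 + logdet + len(data) * np.log(2 * np.pi))

rng = np.random.default_rng(2)
npix = 50                                      # stands in for the low-res pixels
A = rng.standard_normal((npix, npix))
cov = A @ A.T + npix * np.eye(npix)            # toy signal + noise covariance
data = rng.multivariate_normal(np.zeros(npix), cov)
print("ln L =", pixel_loglike(data, cov))
# In a parameter scan, cov is rebuilt from the theory C_l at each point and the
# log-likelihoods are compared across cosmologies.
```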

  9. Tests for detecting overdispersion in models with measurement error in covariates.

    PubMed

    Yang, Yingsi; Wong, Man Yu

    2015-11-30

    Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.

  10. AUV SLAM and Experiments Using a Mechanical Scanning Forward-Looking Sonar

    PubMed Central

    He, Bo; Liang, Yan; Feng, Xiao; Nian, Rui; Yan, Tianhong; Li, Minghui; Zhang, Shujing

    2012-01-01

    Navigation technology is one of the most important challenges in the applications of autonomous underwater vehicles (AUVs) which navigate in the complex undersea environment. The ability of localizing a robot and accurately mapping its surroundings simultaneously, namely the simultaneous localization and mapping (SLAM) problem, is a key prerequisite of truly autonomous robots. In this paper, a modified-FastSLAM algorithm is proposed and used in the navigation for our C-Ranger research platform, an open-frame AUV. A mechanical scanning imaging sonar is chosen as the active sensor for the AUV. The modified-FastSLAM implements the update relying on the on-board sensors of C-Ranger. On the other hand, the algorithm employs the data association which combines the single particle maximum likelihood method with modified negative evidence method, and uses the rank-based resampling to overcome the particle depletion problem. In order to verify the feasibility of the proposed methods, both simulation experiments and sea trials for C-Ranger are conducted. The experimental results show the modified-FastSLAM employed for the navigation of the C-Ranger AUV is much more effective and accurate compared with the traditional methods. PMID:23012549

  11. AUV SLAM and experiments using a mechanical scanning forward-looking sonar.

    PubMed

    He, Bo; Liang, Yan; Feng, Xiao; Nian, Rui; Yan, Tianhong; Li, Minghui; Zhang, Shujing

    2012-01-01

    Navigation technology is one of the most important challenges in the applications of autonomous underwater vehicles (AUVs) which navigate in the complex undersea environment. The ability of localizing a robot and accurately mapping its surroundings simultaneously, namely the simultaneous localization and mapping (SLAM) problem, is a key prerequisite of truly autonomous robots. In this paper, a modified-FastSLAM algorithm is proposed and used in the navigation for our C-Ranger research platform, an open-frame AUV. A mechanical scanning imaging sonar is chosen as the active sensor for the AUV. The modified-FastSLAM implements the update relying on the on-board sensors of C-Ranger. On the other hand, the algorithm employs the data association which combines the single particle maximum likelihood method with modified negative evidence method, and uses the rank-based resampling to overcome the particle depletion problem. In order to verify the feasibility of the proposed methods, both simulation experiments and sea trials for C-Ranger are conducted. The experimental results show the modified-FastSLAM employed for the navigation of the C-Ranger AUV is much more effective and accurate compared with the traditional methods.

  12. Training in cortical control of neuroprosthetic devices improves signal extraction from small neuronal ensembles.

    PubMed

    Helms Tillery, S I; Taylor, D M; Schwartz, A B

    2003-01-01

    We have recently developed a closed-loop environment in which we can test the ability of primates to control the motion of a virtual device using ensembles of simultaneously recorded neurons /29/. Here we use a maximum likelihood method to assess the information about task performance contained in the neuronal ensemble. We trained two animals to control the motion of a computer cursor in three dimensions. Initially the animals controlled cursor motion using arm movements, but eventually they learned to drive the cursor directly from cortical activity. Using a population vector (PV) based upon the relation between cortical activity and arm motion, the animals were able to control the cursor directly from the brain in a closed-loop environment, but with difficulty. We added a supervised learning method that modified the parameters of the PV according to task performance (adaptive PV), and found that animals were able to exert much finer control over the cursor motion from brain signals. Here we describe a maximum likelihood method (ML) to assess the information about target contained in neuronal ensemble activity. Using this method, we compared the information about target contained in the ensemble during arm control, during brain control early in the adaptive PV, and during brain control after the adaptive PV had settled and the animal could drive the cursor reliably and with fine gradations. During the arm-control task, the ML was able to determine the target of the movement in as few as 10% of the trials, and as many as 75% of the trials, with an average of 65%. This average dropped when the animals used a population vector to control motion of the cursor. On average we could determine the target in around 35% of the trials. This low percentage was also reflected in poor control of the cursor, so that the animal was unable to reach the target in a large percentage of trials. Supervised adjustment of the population vector parameters produced new weighting coefficients and directional tuning parameters for many neurons. This produced a much better performance of the brain-controlled cursor motion. It was also reflected in the maximum likelihood measure of cell activity, producing the correct target based only on neuronal activity in over 80% of the trials on average. The changes in maximum likelihood estimates of target location based on ensemble firing show that an animal's ability to regulate the motion of a cortically controlled device is not crucially dependent on the experimenter's ability to estimate intention from neuronal activity.
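
    A small sketch of maximum likelihood target decoding from ensemble activity, assuming independent Poisson spike counts per neuron whose means depend on the target; the tuning model, counts, and trial structure are illustrative only and not the recorded data.

```python
import numpy as np
from scipy.stats import poisson

# Sketch of ML target decoding: each neuron's spike count is modeled as Poisson
# with a target-dependent mean learned from training trials; a test trial is
# assigned to the target with the highest summed log-likelihood. All numbers
# here are simulated, not the recorded ensembles.

rng = np.random.default_rng(3)
n_neurons, n_targets, n_train = 30, 8, 40
true_rates = rng.uniform(2, 20, size=(n_targets, n_neurons))

train = np.array([[rng.poisson(true_rates[t]) for _ in range(n_train)]
                  for t in range(n_targets)])          # (targets, trials, neurons)
rate_hat = train.mean(axis=1)                          # fitted per-target means

def decode(counts):
    loglik = [poisson.logpmf(counts, mu).sum() for mu in rate_hat]
    return int(np.argmax(loglik))

true_target = 5
test_counts = rng.poisson(true_rates[true_target])
print("decoded target:", decode(test_counts), "(true: %d)" % true_target)
```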

  13. Density-based empirical likelihood procedures for testing symmetry of data distributions and K-sample comparisons.

    PubMed

    Vexler, Albert; Tanajian, Hovig; Hutson, Alan D

    In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
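
    A minimal sketch of the classical Monte Carlo p-value computation (method 1 above): simulate the test statistic under the null and count exceedances. The permutation statistic used here is a simple stand-in for the density-based empirical likelihood-ratio statistic.

```python
import numpy as np

# Classical Monte Carlo p-value: simulate the statistic under the null and count
# exceedances. A permutation mean-difference statistic stands in for the
# density-based empirical likelihood-ratio statistic.

def mc_pvalue(stat_obs, simulate_null_stat, B=5000, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    sims = np.array([simulate_null_stat(rng) for _ in range(B)])
    return (1 + np.sum(sims >= stat_obs)) / (B + 1)

rng = np.random.default_rng(4)
x, y = rng.normal(0.3, 1, 40), rng.normal(0.0, 1, 40)
stat_obs = abs(x.mean() - y.mean())

def null_stat(r):
    pooled = r.permutation(np.concatenate([x, y]))     # relabel under the null
    return abs(pooled[:40].mean() - pooled[40:].mean())

print("Monte Carlo p-value:", mc_pvalue(stat_obs, null_stat, rng=rng))
```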

  14. Modeling of 2D diffusion processes based on microscopy data: parameter estimation and practical identifiability analysis.

    PubMed

    Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J

    2013-01-01

    Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, it has become possible to assess the spatial gradients caused by diffusion in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties in the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile-likelihood-based method provides more rigorous uncertainty bounds than local approximation methods.
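
    A generic sketch of a profile likelihood: fix the parameter of interest on a grid, re-optimize the remaining parameters, and read a confidence interval off the chi-square cutoff. An exponential-decay model stands in for the PDE-based diffusion model of the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

# Generic profile likelihood sketch: fix the parameter of interest on a grid,
# re-optimize the nuisance parameters, and read the confidence interval off the
# chi-square cutoff. An exponential decay stands in for the diffusion PDE model.

def neg_log_lik(params, t, y):
    amp, rate, sigma = params
    resid = y - amp * np.exp(-rate * t)
    return 0.5 * np.sum(resid ** 2 / sigma ** 2 + np.log(2 * np.pi * sigma ** 2))

rng = np.random.default_rng(5)
t = np.linspace(0, 5, 60)
y = 2.0 * np.exp(-0.7 * t) + 0.1 * rng.standard_normal(t.size)

full = minimize(neg_log_lik, x0=[1.0, 1.0, 1.0], args=(t, y), method="Nelder-Mead")
cutoff = full.fun + 0.5 * chi2.ppf(0.95, df=1)          # 95% profile threshold

grid = np.linspace(0.4, 1.0, 61)
profile = []
for rate in grid:                                       # profile out amp and sigma
    fit = minimize(lambda p: neg_log_lik([p[0], rate, p[1]], t, y),
                   x0=[2.0, 0.1], method="Nelder-Mead")
    profile.append(fit.fun)
inside = grid[np.array(profile) <= cutoff]
print("95%% profile CI for the decay rate: [%.2f, %.2f]" % (inside.min(), inside.max()))
```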

  15. Gender inequities, relationship power, and childhood immunization uptake in Nigeria: a population-based cross-sectional study.

    PubMed

    Antai, Diddy

    2012-02-01

    This study aimed to simultaneously examine the association between multiple dimensions of gender inequities and full childhood immunization. A multilevel logistic regression analysis was performed on nationally representative sample data from the 2008 Nigeria Demographic and Health Survey, which included 33,385 women aged 15-49 years who had a total of 28,647 live-born children; 24,910 of these children were included in this study. A total of 4283 (17%) children had received full immunization. Children of women whose spouse did not contribute to household earnings had a higher likelihood of receiving full childhood immunization (odds ratio (OR) 1.96, 95% confidence interval (95% CI) 1.02-3.77), and children of women who lacked decision-making autonomy had a lower likelihood of receiving full childhood immunization (OR 0.74, 95% CI 0.60-0.91). The likelihood of receiving full childhood immunization was higher among female children (OR 1.28, 95% CI 1.06-1.54), Yoruba children (OR 2.45, 95% CI 1.19-4.26), and children resident in communities with low illiteracy (OR 1.82, 95% CI 1.06-3.12), but lower for children of birth order 5 or above (OR 0.64, 95% CI 0.45-0.96), children of women aged ≤ 24 years (OR 0.66, 95% CI 0.50-0.87) and 25-34 years (OR 0.79, 95% CI 0.63-0.99), children of women with no education (OR 0.33, 95% CI 0.21-0.54) and primary education (OR 0.66, 95% CI 0.45-0.97), as well as children of women resident in communities with high unemployment (OR 0.34, 95% CI 0.20-0.57). The woman being the sole provider for her family (i.e., having a spouse who did not contribute to household earnings) was associated with a higher likelihood of fully immunizing the child, and the woman lacking decision-making autonomy was associated with a lower likelihood of fully immunizing the child. These findings draw attention to the need for interventions aimed at promoting women's employment and earning possibilities, whilst changing gender-discriminatory attitudes within relationships, communities, and society in general. Copyright © 2011 International Society for Infectious Diseases. Published by Elsevier Ltd. All rights reserved.

  16. Maximum likelihood estimation of the parameters of a bivariate Gaussian-Weibull distribution from machine stress-rated data

    Treesearch

    Steve P. Verrill; David E. Kretschmann; James W. Evans

    2016-01-01

    Two important wood properties are stiffness (modulus of elasticity, MOE) and bending strength (modulus of rupture, MOR). In the past, MOE has often been modeled as a Gaussian and MOR as a lognormal or a two- or three-parameter Weibull. It is well known that MOE and MOR are positively correlated. To model the simultaneous behavior of MOE and MOR for the purposes of wood...

  17. Influence diagnostics in meta-regression model.

    PubMed

    Shi, Lei; Zuo, ShanShan; Yu, Dalei; Zhou, Xiaohua

    2017-09-01

    This paper studies influence diagnostics in the meta-regression model, including case deletion diagnostics and local influence analysis. We derive the subset deletion formulae for the estimation of the regression coefficients and heterogeneity variance and obtain the corresponding influence measures. The DerSimonian and Laird estimation and maximum likelihood estimation methods in meta-regression are considered, respectively, to derive the results. Internal and external residual and leverage measures are defined. Local influence analyses based on the case-weights, response, covariate, and within-variance perturbation schemes are explored. We introduce a method of simultaneously perturbing responses, covariates, and within-variance to obtain the local influence measure, which has the advantage of allowing the influence magnitude of influential studies to be compared across different perturbations. An example is used to illustrate the proposed methodology. Copyright © 2017 John Wiley & Sons, Ltd.

  18. Integral equation methods for computing likelihoods and their derivatives in the stochastic integrate-and-fire model.

    PubMed

    Paninski, Liam; Haith, Adrian; Szirtes, Gabor

    2008-02-01

    We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.

  19. Using multi-locus allelic sequence data to estimate genetic divergence among four Lilium (Liliaceae) cultivars

    PubMed Central

    Shahin, Arwa; Smulders, Marinus J. M.; van Tuyl, Jaap M.; Arens, Paul; Bakker, Freek T.

    2014-01-01

    Next Generation Sequencing (NGS) may enable estimating relationships among genotypes using allelic variation of multiple nuclear genes simultaneously. We explored the potential and caveats of this strategy in four genetically distant Lilium cultivars to estimate their genetic divergence from transcriptome sequences using three approaches: POFAD (Phylogeny of Organisms from Allelic Data, uses allelic information of sequence data), RAxML (Randomized Accelerated Maximum Likelihood, tree building based on concatenated consensus sequences) and Consensus Network (constructing a network summarizing among gene tree conflicts). Twenty six gene contigs were chosen based on the presence of orthologous sequences in all cultivars, seven of which also had an orthologous sequence in Tulipa, used as out-group. The three approaches generated the same topology. Although the resolution offered by these approaches is high, in this case there was no extra benefit in using allelic information. We conclude that these 26 genes can be widely applied to construct a species tree for the genus Lilium. PMID:25368628

  20. Penalized maximum likelihood reconstruction for x-ray differential phase-contrast tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brendel, Bernhard, E-mail: bernhard.brendel@philips.com; Teuffenbach, Maximilian von; Noël, Peter B.

    2016-01-15

    Purpose: The purpose of this work is to propose a cost function with regularization to iteratively reconstruct attenuation, phase, and scatter images simultaneously from differential phase contrast (DPC) acquisitions, without the need of phase retrieval, and examine its properties. Furthermore this reconstruction method is applied to an acquisition pattern that is suitable for a DPC tomographic system with continuously rotating gantry (sliding window acquisition), overcoming the severe smearing in noniterative reconstruction. Methods: We derive a penalized maximum likelihood reconstruction algorithm to directly reconstruct attenuation, phase, and scatter image from the measured detector values of a DPC acquisition. The proposed penalty comprises, for each of the three images, an independent smoothing prior. Image quality of the proposed reconstruction is compared to images generated with FBP and iterative reconstruction after phase retrieval. Furthermore, the influence between the priors is analyzed. Finally, the proposed reconstruction algorithm is applied to experimental sliding window data acquired at a synchrotron and results are compared to reconstructions based on phase retrieval. Results: The results show that the proposed algorithm significantly increases image quality in comparison to reconstructions based on phase retrieval. No significant mutual influence between the proposed independent priors could be observed. Further it could be illustrated that the iterative reconstruction of a sliding window acquisition results in images with substantially reduced smearing artifacts. Conclusions: Although the proposed cost function is inherently nonconvex, it can be used to reconstruct images with less aliasing artifacts and less streak artifacts than reconstruction methods based on phase retrieval. Furthermore, the proposed method can be used to reconstruct images of sliding window acquisitions with negligible smearing artifacts.

  1. Schools, Schooling, and Children's Support of Their Aging Parents.

    PubMed

    Brauner-Otto, Sarah R

    2009-10-01

    Intergenerational transfers play an important role in individuals' lives across the life course. In this paper I pull together theories on intergenerational transfers and social change to inform our understanding of how changes in the educational context influence children's support of their parents. By examining multiple aspects of a couple's educational context, including husbands' and wives' education and exposure to schools, this paper provides new information on the mechanisms through which changes in social context influence children's support of their parents. Using data from a rural Nepalese area, I use multilevel logistic regression to estimate the relationship between schooling, exposure to schools, and the likelihood of couples giving to their parents. I find that schooling and exposure to schools have separate, opposite effects on support of aging parents. Higher levels of schooling for husbands were associated with a higher likelihood of having given support to husbands' parents. On the other hand, increased exposure to schools for husbands and wives was associated with a lower likelihood of having given to wives' parents. Findings constitute evidence that multiple motivations for intergenerational support exist simultaneously and are related to social context through different mechanisms.

  2. A review and comparison of Bayesian and likelihood-based inferences in beta regression and zero-or-one-inflated beta regression.

    PubMed

    Liu, Fang; Eugenio, Evercita C

    2018-04-01

    Beta regression is an increasingly popular statistical technique in medical research for modeling of outcomes that assume values in (0, 1), such as proportions and patient reported outcomes. When outcomes take values in the intervals [0,1), (0,1], or [0,1], zero-or-one-inflated beta (zoib) regression can be used. We provide a thorough review on beta regression and zoib regression in the modeling, inferential, and computational aspects via the likelihood-based and Bayesian approaches. We demonstrate the statistical and practical importance of correctly modeling the inflation at zero/one rather than ad hoc replacing them with values close to zero/one via simulation studies; the latter approach can lead to biased estimates and invalid inferences. We show via simulation studies that the likelihood-based approach is computationally faster in general than MCMC algorithms used in the Bayesian inferences, but runs the risk of non-convergence, large biases, and sensitivity to starting values in the optimization algorithm especially with clustered/correlated data, data with sparse inflation at zero and one, and data that warrant regularization of the likelihood. The disadvantages of the regular likelihood-based approach make the Bayesian approach an attractive alternative in these cases. Software packages and tools for fitting beta and zoib regressions in both the likelihood-based and Bayesian frameworks are also reviewed.

  3. Clarifying the Hubble constant tension with a Bayesian hierarchical model of the local distance ladder

    NASA Astrophysics Data System (ADS)

    Feeney, Stephen M.; Mortlock, Daniel J.; Dalmasso, Niccolò

    2018-05-01

    Estimates of the Hubble constant, H0, from the local distance ladder and from the cosmic microwave background (CMB) are discrepant at the ˜3σ level, indicating a potential issue with the standard Λ cold dark matter (ΛCDM) cosmology. A probabilistic (i.e. Bayesian) interpretation of this tension requires a model comparison calculation, which in turn depends strongly on the tails of the H0 likelihoods. Evaluating the tails of the local H0 likelihood requires the use of non-Gaussian distributions to faithfully represent anchor likelihoods and outliers, and simultaneous fitting of the complete distance-ladder data set to ensure correct uncertainty propagation. We have hence developed a Bayesian hierarchical model of the full distance ladder that does not rely on Gaussian distributions and allows outliers to be modelled without arbitrary data cuts. Marginalizing over the full ˜3000-parameter joint posterior distribution, we find H0 = (72.72 ± 1.67) km s-1 Mpc-1 when applied to the outlier-cleaned Riess et al. data, and (73.15 ± 1.78) km s-1 Mpc-1 with supernova outliers reintroduced (the pre-cut Cepheid data set is not available). Using our precise evaluation of the tails of the H0 likelihood, we apply Bayesian model comparison to assess the evidence for deviation from ΛCDM given the distance-ladder and CMB data. The odds against ΛCDM are at worst ˜10:1 when considering the Planck 2015 XIII data, regardless of outlier treatment, considerably less dramatic than naïvely implied by the 2.8σ discrepancy. These odds become ˜60:1 when an approximation to the more-discrepant Planck Intermediate XLVI likelihood is included.

  4. Multi-Sample Cluster Analysis Using Akaike’s Information Criterion.

    DTIC Science & Technology

    1982-12-20

    Intervals. For more details on these test procedures refer to Gabriel [7], Krishnaiah ([10], [11]), Srivastava [16], and others. As noted in Consul...723. [4] Consul, P. C. (1969), "The Exact Distributions of Likelihood Criteria for Different Hypotheses," in P. R. Krishnaiah (Ed.), Multivariate...1178. [7] Gabriel, K. R. (1969), "A Comparison of Some Methods of Simultaneous Inference in MANOVA," in P. R. Krishnaiah (Ed.), Multivariate Analysis-II

  5. Likelihood-based confidence intervals for estimating floods with given return periods

    NASA Astrophysics Data System (ADS)

    Martins, Eduardo Sávio P. R.; Clarke, Robin T.

    1993-06-01

    This paper discusses aspects of the calculation of likelihood-based confidence intervals for T-year floods, with particular reference to (1) the two-parameter gamma distribution; (2) the Gumbel distribution; (3) the two-parameter log-normal distribution, and other distributions related to the normal by Box-Cox transformations. Calculation of the confidence limits is straightforward using the Nelder-Mead algorithm with a constraint incorporated, although care is necessary to ensure convergence either of the Nelder-Mead algorithm, or of the Newton-Raphson calculation of maximum-likelihood estimates. Methods are illustrated using records from 18 gauging stations in the basin of the River Itajai-Acu, State of Santa Catarina, southern Brazil. A small and restricted simulation compared likelihood-based confidence limits with those given by use of the central limit theorem; for the same confidence probability, the confidence limits of the simulation were wider than those of the central limit theorem, which failed more frequently to contain the true quantile being estimated. The paper discusses possible applications of likelihood-based confidence intervals in other areas of hydrological analysis.
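
    A short sketch of the maximum likelihood T-year flood estimate for the Gumbel case; the constrained Nelder-Mead construction of the likelihood-based confidence limits described above is not reproduced here, and the annual maxima are synthetic rather than the Itajai-Acu records.

```python
import numpy as np
from scipy.stats import gumbel_r

# Maximum likelihood T-year flood for the Gumbel case. The likelihood-based
# confidence limits discussed above need the constrained optimization described
# in the paper and are not reproduced; the annual maxima here are synthetic.

rng = np.random.default_rng(6)
annual_max = gumbel_r.rvs(loc=500.0, scale=120.0, size=40, random_state=rng)

loc_hat, scale_hat = gumbel_r.fit(annual_max)           # maximum likelihood fit
T = 100.0                                               # return period in years
q_T = gumbel_r.ppf(1.0 - 1.0 / T, loc=loc_hat, scale=scale_hat)
print("Estimated %g-year flood: %.1f (same units as the input series)" % (T, q_T))
```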

  6. Multi-hazard Assessment and Scenario Toolbox (MhAST): A Framework for Analyzing Compounding Effects of Multiple Hazards

    NASA Astrophysics Data System (ADS)

    Sadegh, M.; Moftakhari, H.; AghaKouchak, A.

    2017-12-01

    Many natural hazards are driven by multiple forcing variables, and concurrent/consecutive extreme events significantly increase the risk of infrastructure/system failure. It is common practice to use univariate analysis based upon a perceived ruling driver to estimate design quantiles and/or return periods of extreme events. A multivariate analysis, however, permits modeling the simultaneous occurrence of multiple forcing variables. In this presentation, we introduce the Multi-hazard Assessment and Scenario Toolbox (MhAST), which comprehensively analyzes marginal and joint probability distributions of natural hazards. MhAST also offers a wide range of return period and design level scenarios and their likelihoods. The contribution of this study is four-fold: 1. comprehensive analysis of the marginal and joint probability of multiple drivers through 17 continuous distributions and 26 copulas, 2. multiple scenario analysis of concurrent extremes based upon the most likely joint occurrence, one ruling variable, and weighted random sampling of joint occurrences with similar exceedance probabilities, 3. weighted average scenario analysis based on an expected event, and 4. uncertainty analysis of the most likely joint occurrence scenario using a Bayesian framework.
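
    A small sketch of a joint ("AND") exceedance probability for two drivers using a Gaussian copula; MhAST fits many marginal and copula families, so the single copula, the dependence values, and the marginal levels below are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Joint "AND" exceedance probability for two drivers via a Gaussian copula:
# P(U > u, V > v) = 1 - u - v + C(u, v). The copula family, dependence values,
# and marginal levels are fixed here purely for illustration.

def joint_exceedance(u, v, rho):
    cop = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    C_uv = cop.cdf([norm.ppf(u), norm.ppf(v)])      # Gaussian copula C(u, v)
    return 1.0 - u - v + C_uv

u = v = 0.99                                        # marginal 100-year levels
for rho in (0.0, 0.5, 0.8):
    p = joint_exceedance(u, v, rho)
    print("rho=%.1f  P(both exceed)=%.5f  joint return period=%.0f yr" % (rho, p, 1 / p))
```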

  7. Simultaneous reconstruction of the activity image and registration of the CT image in TOF-PET

    NASA Astrophysics Data System (ADS)

    Rezaei, Ahmadreza; Michel, Christian; Casey, Michael E.; Nuyts, Johan

    2016-02-01

    Previously, maximum-likelihood methods have been proposed to jointly estimate the activity image and the attenuation image or the attenuation sinogram from time-of-flight (TOF) positron emission tomography (PET) data. In this contribution, we propose a method that addresses the possible alignment problem of the TOF-PET emission data and the computed tomography (CT) attenuation data, by combining reconstruction and registration. The method, called MLRR, iteratively reconstructs the activity image while registering the available CT-based attenuation image, so that the pair of activity and attenuation images maximise the likelihood of the TOF emission sinogram. The algorithm is slow to converge, but some acceleration could be achieved by using Nesterov’s momentum method and by applying a multi-resolution scheme for the non-rigid displacement estimation. The latter also helps to avoid local optima, although convergence to the global optimum cannot be guaranteed. The results are evaluated on 2D and 3D simulations as well as a respiratory gated clinical scan. Our experiments indicate that the proposed method is able to correct for possible misalignment of the CT-based attenuation image, and is therefore a very promising approach to suppressing attenuation artefacts in clinical PET/CT. When applied to respiratory gated data of a patient scan, it produced deformations that are compatible with breathing motion and which reduced the well known attenuation artefact near the dome of the liver. Since the method makes use of the energy-converted CT attenuation image, the scale problem of joint reconstruction is automatically solved.

  8. Paule‐Mandel estimators for network meta‐analysis with random inconsistency effects

    PubMed Central

    Veroniki, Areti Angeliki; Law, Martin; Tricco, Andrea C.; Baker, Rose

    2017-01-01

    Network meta‐analysis is used to simultaneously compare multiple treatments in a single analysis. However, network meta‐analyses may exhibit inconsistency, where direct and different forms of indirect evidence are not in agreement with each other, even after allowing for between‐study heterogeneity. Models for network meta‐analysis with random inconsistency effects have the dual aim of allowing for inconsistencies and estimating average treatment effects across the whole network. To date, two classical estimation methods for fitting this type of model have been developed: a method of moments that extends DerSimonian and Laird's univariate method and maximum likelihood estimation. However, the Paule and Mandel estimator is another recommended classical estimation method for univariate meta‐analysis. In this paper, we extend the Paule and Mandel method so that it can be used to fit models for network meta‐analysis with random inconsistency effects. We apply all three estimation methods to a variety of examples that have been used previously and we also examine a challenging new dataset that is highly heterogenous. We perform a simulation study based on this new example. We find that the proposed Paule and Mandel method performs satisfactorily and generally better than the previously proposed method of moments because it provides more accurate inferences. Furthermore, the Paule and Mandel method possesses some advantages over likelihood‐based methods because it is both semiparametric and requires no convergence diagnostics. Although restricted maximum likelihood estimation remains the gold standard, the proposed methodology is a fully viable alternative to this and other estimation methods. PMID:28585257
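
    For orientation, a sketch of the univariate Paule and Mandel estimator, which chooses the heterogeneity variance so that the generalized Q statistic equals its degrees of freedom; the network extension in the paper is not reproduced, and the effect sizes and variances are made up.

```python
import numpy as np

# Univariate Paule-Mandel estimator: iterate tau^2 until the generalized
# Q statistic equals its degrees of freedom. Effect sizes and within-study
# variances below are invented for illustration.

def paule_mandel(y, v, tol=1e-8, max_iter=200):
    y, v = np.asarray(y, dtype=float), np.asarray(v, dtype=float)
    k, tau2 = len(y), 0.0
    for _ in range(max_iter):
        w = 1.0 / (v + tau2)
        mu = np.sum(w * y) / np.sum(w)              # weighted pooled effect
        Q = np.sum(w * (y - mu) ** 2)
        step = (Q - (k - 1)) / np.sum(w ** 2 * (y - mu) ** 2)
        new_tau2 = max(0.0, tau2 + step)            # keep the variance non-negative
        if abs(new_tau2 - tau2) < tol:
            break
        tau2 = new_tau2
    return tau2, mu

y = [0.10, 0.35, -0.05, 0.42, 0.22]                 # hypothetical study effects
v = [0.02, 0.03, 0.04, 0.02, 0.05]                  # hypothetical within-study variances
tau2, mu = paule_mandel(y, v)
print("tau^2 = %.4f, pooled effect = %.3f" % (tau2, mu))
```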

  9. Small-scale deflagration cylinder test with velocimetry wall-motion diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hooks, Daniel E; Hill, Larry G; Pierce, Timothy H

    Predicting the likelihood and effects of outcomes resulting from thermal initiation of explosives remains a significant challenge. For certain explosive formulations, the general outcome can be broadly predicted given knowledge of certain conditions. However, there remain unexplained violent events, and increased statistical understanding of outcomes as a function of many variables, or 'violence categorization,' is needed. Additionally, the development of an equation of state equivalent for deflagration would be very useful in predicting possible detailed event consequences using traditional hydrodynamic detonation models. For violence categorization, it is desirable that testing be efficient, such that it is possible to statistically define outcomes reliant on the processes of initiation of deflagration, steady state deflagration, and deflagration to detonation transitions. If the test simultaneously acquires information to inform models of violent deflagration events, overall predictive capabilities for event likelihood and consequence might improve remarkably. In this paper we describe an economical scaled deflagration cylinder test. The cyclotetramethylene tetranitramine (HMX) based explosive formulation PBX 9501 was tested using different temperature profiles in a thick-walled copper cylindrical confiner. This test is a scaled version of a recently demonstrated deflagration cylinder test, and is similar to several other thermal explosion tests. The primary difference is the passive velocimetry diagnostic, which enables measurement of confinement vessel wall velocities at failure, regardless of the timing and location of ignition.

  10. Extreme data compression for the CMB

    NASA Astrophysics Data System (ADS)

    Zablocki, Alan; Dodelson, Scott

    2016-04-01

    We apply the Karhunen-Loève methods to cosmic microwave background (CMB) data sets, and show that we can recover the input cosmology and obtain the marginalized likelihoods in Λ cold dark matter cosmologies in under a minute, much faster than Markov chain Monte Carlo methods. This is achieved by forming a linear combination of the power spectra at each multipole l, and solving a system of simultaneous equations such that the Fisher matrix is locally unchanged. Instead of carrying out a full likelihood evaluation over the whole parameter space, we need to evaluate the likelihood only for the parameter of interest, with the data compression effectively marginalizing over all other parameters. The weighting vectors contain insight about the physical effects of the parameters on the CMB anisotropy power spectrum Cl. The shape and amplitude of these vectors give an intuitive feel for the physics of the CMB, the sensitivity of the observed spectrum to cosmological parameters, and the relative sensitivity of different experiments to cosmological parameters. We test this method on exact theory Cl as well as on a Wilkinson Microwave Anisotropy Probe (WMAP)-like CMB data set generated from a random realization of a fiducial cosmology, comparing the compression results to those from a full likelihood analysis using CosmoMC. After showing that the method works, we apply it to the temperature power spectrum from the WMAP seven-year data release, and discuss the successes and limitations of our method as applied to a real data set.
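
    A toy sketch of this style of linear compression for a Gaussian data vector with parameter-independent covariance: one weight vector per parameter, b = C^{-1} dmu/dtheta, preserves the Fisher information for that parameter. The "spectrum" and noise model below are invented, not real C_l data.

```python
import numpy as np

# Toy linear compression for a Gaussian data vector with fixed covariance: the
# weight vector b = C^{-1} dmu/dtheta keeps the Fisher information about theta.
# The "spectrum", noise level, and fiducial values are invented for this sketch.

ell = np.arange(2, 200)

def model_cl(amp, tilt):
    return amp * (ell / 80.0) ** tilt * np.exp(-(ell / 120.0) ** 2)

amp0, tilt0, eps = 1.0, 0.0, 1e-4
cov = np.diag((0.05 * model_cl(amp0, tilt0)) ** 2)          # toy noise covariance
dmu = (model_cl(amp0 + eps, tilt0) - model_cl(amp0, tilt0)) / eps
b = np.linalg.solve(cov, dmu)                               # weight vector for amp

rng = np.random.default_rng(7)
data = model_cl(1.02, tilt0) + 0.05 * model_cl(amp0, tilt0) * rng.standard_normal(ell.size)

# The single number b @ data carries (to first order) all the information the
# full data vector has about amp; compare it with b @ model_cl(amp, tilt0) on a
# 1-D grid in amp instead of evaluating the full likelihood everywhere.
print("compressed statistic:", b @ data)
print("compressed fiducial: ", b @ model_cl(amp0, tilt0))
```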

  11. Characterization of computer network events through simultaneous feature selection and clustering of intrusion alerts

    NASA Astrophysics Data System (ADS)

    Chen, Siyue; Leung, Henry; Dondo, Maxwell

    2014-05-01

    As computer network security threats increase, many organizations implement multiple Network Intrusion Detection Systems (NIDS) to maximize the likelihood of intrusion detection and provide a comprehensive understanding of intrusion activities. However, NIDS trigger a massive number of alerts on a daily basis. This can be overwhelming for computer network security analysts since it is a slow and tedious process to manually analyse each alert produced. Thus, automated and intelligent clustering of alerts is important to reveal the structural correlation of events by grouping alerts with common features. As the nature of computer network attacks, and therefore alerts, is not known in advance, unsupervised alert clustering is a promising approach to achieve this goal. We propose a joint optimization technique for feature selection and clustering to aggregate similar alerts and to reduce the number of alerts that analysts have to handle individually. More precisely, each identified feature is assigned a binary value, which reflects the feature's saliency. This value is treated as a hidden variable and incorporated into a likelihood function for clustering. Since computing the optimal solution of the likelihood function directly is analytically intractable, we use the Expectation-Maximisation (EM) algorithm to iteratively update the hidden variable and use it to maximize the expected likelihood. Our empirical results, using a labelled Defense Advanced Research Projects Agency (DARPA) 2000 reference dataset, show that the proposed method gives better results than the EM clustering without feature selection in terms of the clustering accuracy.

  12. Simultaneous Control of Error Rates in fMRI Data Analysis

    PubMed Central

    Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

    2015-01-01

    The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulation (or global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to ‘cleaner’-looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
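
    A small sketch of the voxel-wise likelihood ratio idea: for each voxel, compare the maximized Gaussian likelihood under "no activation" with the unrestricted fit, which reduces to (s2_null / s2_alt)^(n/2). The data, effect size, and evidence threshold are illustrative, not from a real fMRI analysis.

```python
import numpy as np

# Voxel-wise likelihood ratio for "mean = 0" vs. "mean free" under a Gaussian
# model; the ratio of maximized likelihoods is (s2_null / s2_alt)^(n/2).
# Data, effect size, and the LR threshold are illustrative only.

def voxelwise_lr(data):
    """data: (n_scans, n_voxels) array of contrast values."""
    n = data.shape[0]
    s2_null = np.mean(data ** 2, axis=0)        # variance MLE with mean fixed at 0
    s2_alt = np.var(data, axis=0)               # variance MLE with free mean
    return (s2_null / s2_alt) ** (n / 2.0)

rng = np.random.default_rng(8)
n_scans, n_voxels = 30, 1000
data = rng.standard_normal((n_scans, n_voxels))
data[:, :50] += 0.8                             # 50 truly "active" voxels
lr = voxelwise_lr(data)
print("voxels with LR >= 8 (moderately strong evidence):", np.sum(lr >= 8))
```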

  13. Determining the accuracy of maximum likelihood parameter estimates with colored residuals

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1994-01-01

    An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.

  14. A Glossy Simultaneous Contrast: Conjoint Measurements of Gloss and Lightness

    PubMed Central

    Mamassian, Pascal

    2017-01-01

    Interactions between the albedo and the gloss on a surface are commonplace. Darker surfaces are perceived glossier (contrast gloss) than lighter surfaces and darker backgrounds can enhance perceived lightness of surfaces. We used maximum likelihood conjoint measurements to simultaneously quantify the strength of those effects. We quantified the extent to which albedo can influence perceived gloss and physical gloss can influence perceived lightness. We modeled the contribution of lightness and gloss and found that increasing lightness reduced perceived gloss by about 32% whereas gloss had a much weaker influence on perceived lightness of about 12%. Moreover, we also investigated how different backgrounds contribute to the perception of lightness and gloss of a surface placed in front. We found that a glossy background reduces slightly perceived lightness of the center and simultaneously enhances its perceived gloss. Lighter backgrounds reduce perceived gloss and perceived lightness. Conjoint measurements lead us to a better understanding of the contextual effects in gloss and lightness perception. Not only do we confirm the importance of contrast in gloss perception and the reduction of the simultaneous contrast with glossy backgrounds, but we also quantify precisely the strength of those effects. PMID:28203352

  15. Classification of cryo electron microscopy images, noisy tomographic images recorded with unknown projection directions, by simultaneously estimating reconstructions and application to an assembly mutant of Cowpea Chlorotic Mottle Virus and portals of the bacteriophage P22

    NASA Astrophysics Data System (ADS)

    Lee, Junghoon; Zheng, Yili; Yin, Zhye; Doerschuk, Peter C.; Johnson, John E.

    2010-08-01

    Cryo electron microscopy is frequently used on biological specimens that show a mixture of different types of object. Because the electron beam rapidly destroys the specimen, the beam current is minimized which leads to noisy images (SNR substantially less than 1) and only one projection image per object (with an unknown projection direction) is collected. For situations where the objects can reasonably be described as coming from a finite set of classes, an approach based on joint maximum likelihood estimation of the reconstruction of each class and then use of the reconstructions to label the class of each image is described and demonstrated on two challenging problems: an assembly mutant of Cowpea Chlorotic Mottle Virus and portals of the bacteriophage P22.

  16. Reassignment of scattered emission photons in multifocal multiphoton microscopy.

    PubMed

    Cha, Jae Won; Singh, Vijay Raj; Kim, Ki Hean; Subramanian, Jaichandar; Peng, Qiwen; Yu, Hanry; Nedivi, Elly; So, Peter T C

    2014-06-05

    Multifocal multiphoton microscopy (MMM) achieves fast imaging by simultaneously scanning multiple foci across different regions of the specimen. The use of imaging detectors in MMM, such as CCD or CMOS, results in degradation of the image signal-to-noise ratio (SNR) due to the scattering of emitted photons. SNR can be partly recovered using multianode photomultiplier tubes (MAPMTs). In this design, however, emission photons scattered to neighboring anodes are encoded by the foci scan location, resulting in ghost images. The crosstalk between different anodes is currently measured a priori, which is cumbersome as it depends on specimen properties. Here, we present a photon reassignment method for MMM, based on maximum likelihood (ML) estimation, for quantifying the crosstalk between the anodes of the MAPMT without a priori measurement. The method reassigns the photons that generate the ghost images to their original spatial location, thus increasing the SNR of the final reconstructed image.

  17. Optimal design and use of retry in fault tolerant real-time computer systems

    NASA Technical Reports Server (NTRS)

    Lee, Y. H.; Shin, K. G.

    1983-01-01

    A new method to determine an optimal retry policy and to use retry for fault characterization is presented. An optimal retry policy for a given fault characteristic, which determines the maximum allowable retry durations to minimize the total task completion time, was derived. The combined fault characterization and retry decision, in which the characteristics of the fault are estimated simultaneously with the determination of the optimal retry policy, was carried out. Two solution approaches were developed, one based on point estimation and the other on the Bayes sequential decision. Maximum likelihood estimators are used for the first approach, and backward induction for testing hypotheses in the second approach. Numerical examples in which all the durations associated with faults have monotone hazard functions, e.g., exponential, Weibull and gamma distributions, are presented. These are standard distributions commonly used for fault modeling and analysis.

  18. On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood

    ERIC Educational Resources Information Center

    Karabatsos, George

    2017-01-01

    This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon…

  19. Detection, Identification, Location, and Remote Sensing using SAW RFID Sensor Tags

    NASA Technical Reports Server (NTRS)

    Barton, Richard J.

    2009-01-01

    In this presentation, we will consider the problem of simultaneous detection, identification, location estimation, and remote sensing for multiple objects. In particular, we will describe the design and testing of a wireless system capable of simultaneously detecting the presence of multiple objects, identifying each object, and acquiring both a low-resolution estimate of location and a high-resolution estimate of temperature for each object based on wireless interrogation of passive surface acoustic wave (SAW) radiofrequency identification (RFID) sensor tags affixed to each object. The system is being studied for application on the lunar surface as well as for terrestrial remote sensing applications such as pre-launch monitoring and testing of spacecraft on the launch pad and monitoring of test facilities. The system utilizes a digitally beam-formed planar receiving antenna array to extend range and provide direction-of-arrival information coupled with an approximate maximum-likelihood signal processing algorithm to provide near-optimal estimation of both range and temperature. The system is capable of forming a large number of beams within the field of view and resolving the information from several tags within each beam. The combination of both spatial and waveform discrimination provides the capability to track and monitor telemetry from a large number of objects appearing simultaneously within the field of view of the receiving array. In the presentation, we will summarize the system design and illustrate several aspects of the operational characteristics and signal structure. We will examine the theoretical performance characteristics of the system and compare the theoretical results with results obtained from experiments in both controlled laboratory environments and in the field.

  20. An adaptive modeling and simulation environment for combined-cycle data reconciliation and degradation estimation

    NASA Astrophysics Data System (ADS)

    Lin, Tsungpo

    Performance engineers face major challenges in modeling and simulation of after-market power systems due to system degradation and measurement errors. Currently, the majority of the power generation industry utilizes deterministic data matching to calibrate the model and cascade system degradation, which causes significant calibration uncertainty and also the risk of providing performance guarantees. In this research work, a maximum-likelihood-based simultaneous data reconciliation and model calibration (SDRMC) approach is used for power system modeling and simulation. By replacing the current deterministic data matching with SDRMC, one can reduce the calibration uncertainty and mitigate the error propagation to the performance simulation. A modeling and simulation environment for a complex power system with certain degradation has been developed. In this environment multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and populated to the performance simulation by using the principle of error propagation. System degradation is then quantified by performance comparison between the calibrated model and its expected new & clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first stage is a screening stage, in which serious gross errors are eliminated in advance. The GED techniques used in the screening stage are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated at the second stage, in which serial bias compensation or a robust M-estimator is engaged. To achieve better efficiency in the combined scheme of least-squares-based data reconciliation and the GED technique based on hypothesis testing, the Levenberg-Marquardt (LM) algorithm is utilized as the optimizer. To reduce the computation time and stabilize the problem solving for a complex power system such as a combined cycle power plant, meta-modeling using response surface equations (RSE) and system/process decomposition are incorporated with the simultaneous scheme of SDRMC. The goal of this research work is to reduce the calibration uncertainties and, thus, the risks of providing performance guarantees arising from uncertainties in performance simulation.
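
    As a minimal illustration of the reconciliation step, the sketch below shows maximum-likelihood data reconciliation for a single linear mass balance under independent Gaussian measurement errors; the flows, uncertainties, and constraint are hypothetical and far simpler than the combined-cycle model discussed in the thesis.

        # Toy sketch of maximum-likelihood data reconciliation, assuming
        # independent Gaussian measurement errors.  The "model" is a single
        # linear mass balance x1 + x2 - x3 = 0.
        import numpy as np
        from scipy.optimize import minimize

        y = np.array([10.2, 5.1, 14.6])          # measured flows (hypothetical)
        sigma = np.array([0.2, 0.1, 0.3])        # measurement standard deviations

        def neg_log_likelihood(x):
            # Gaussian likelihood of the measurements given reconciled values x.
            return 0.5 * np.sum(((y - x) / sigma) ** 2)

        balance = {"type": "eq", "fun": lambda x: x[0] + x[1] - x[2]}
        fit = minimize(neg_log_likelihood, x0=y, constraints=[balance])
        print("reconciled flows:", fit.x)
        print("balance residual:", fit.x[0] + fit.x[1] - fit.x[2])   # ~0: balance holds exactly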

  1. Evaluation of Smoking Prevention Television Messages Based on the Elaboration Likelihood Model

    ERIC Educational Resources Information Center

    Flynn, Brian S.; Worden, John K.; Bunn, Janice Yanushka; Connolly, Scott W.; Dorwaldt, Anne L.

    2011-01-01

    Progress in reducing youth smoking may depend on developing improved methods to communicate with higher risk youth. This study explored the potential of smoking prevention messages based on the Elaboration Likelihood Model (ELM) to address these needs. Structured evaluations of 12 smoking prevention messages based on three strategies derived from…

  2. Anticipating cognitive effort: roles of perceived error-likelihood and time demands.

    PubMed

    Dunn, Timothy L; Inzlicht, Michael; Risko, Evan F

    2017-11-13

    Why are some actions evaluated as effortful? In the present set of experiments we address this question by examining individuals' perception of effort when faced with a trade-off between two putative cognitive costs: how much time a task takes vs. how error-prone it is. Specifically, we were interested in whether individuals anticipate engaging in a small amount of hard work (i.e., low time requirement, but high error-likelihood) vs. a large amount of easy work (i.e., high time requirement, but low error-likelihood) as being more effortful. In between-subject designs, Experiments 1 through 3 demonstrated that individuals anticipate options that are high in perceived error-likelihood (yet less time consuming) as more effortful than options that are perceived to be more time consuming (yet low in error-likelihood). Further, when asked to evaluate which of the two tasks was (a) more effortful, (b) more error-prone, and (c) more time consuming, effort-based and error-based choices closely tracked one another, but this was not the case for time-based choices. Utilizing a within-subject design, Experiment 4 demonstrated an overall similar pattern of judgments to Experiments 1 through 3. However, both judgments of error-likelihood and time demand similarly predicted effort judgments. Results are discussed within the context of extant accounts of cognitive control, with considerations of how error-likelihood and time demands may independently and conjunctively factor into judgments of cognitive effort.

  3. A Comparison of a Bayesian and a Maximum Likelihood Tailored Testing Procedure.

    ERIC Educational Resources Information Center

    McKinley, Robert L.; Reckase, Mark D.

    A study was conducted to compare tailored testing procedures based on a Bayesian ability estimation technique and on a maximum likelihood ability estimation technique. The Bayesian tailored testing procedure selected items so as to minimize the posterior variance of the ability estimate distribution, while the maximum likelihood tailored testing…

  4. Multivariate meta-analysis: a robust approach based on the theory of U-statistic.

    PubMed

    Ma, Yan; Mazumdar, Madhu

    2011-10-30

    Meta-analysis is the methodology for combining findings from similar research studies asking the same question. When the question of interest involves multiple outcomes, multivariate meta-analysis is used to synthesize the outcomes simultaneously, taking into account the correlation between the outcomes. Likelihood-based approaches, in particular the restricted maximum likelihood (REML) method, are commonly utilized in this context. REML assumes a multivariate normal distribution for the random-effects model. This assumption is difficult to verify, especially for meta-analyses with a small number of component studies. The use of REML also requires iterative estimation between parameters, needing moderately high computation time, especially when the dimension of outcomes is large. A multivariate method of moments (MMM) is available and is shown to perform equally well to REML. However, there is a lack of information on the performance of these two methods when the true data distribution is far from normality. In this paper, we propose a new nonparametric and non-iterative method for multivariate meta-analysis on the basis of the theory of U-statistics and compare the properties of these three procedures under both normal and skewed data through simulation studies. It is shown that the effect on estimates from REML because of non-normal data distribution is marginal and that the estimates from MMM and U-statistic-based approaches are very similar. Therefore, we conclude that for performing multivariate meta-analysis, the U-statistic estimation procedure is a viable alternative to REML and MMM. Easy implementation of all three methods is illustrated by their application to data from two published meta-analyses from the fields of hip fracture and periodontal disease. We discuss ideas for future research based on U-statistics for testing the significance of between-study heterogeneity and for extending the work to the meta-regression setting. Copyright © 2011 John Wiley & Sons, Ltd.

  5. Likelihood-based modification of experimental crystal structure electron density maps

    DOEpatents

    Terwilliger, Thomas C [Santa Fe, NM]

    2005-04-16

    A maximum-likelihood method improves an electron density map of an experimental crystal structure. A likelihood of a set of structure factors {F_h} is formed for the experimental crystal structure as (1) the likelihood of having obtained an observed set of structure factors {F_h^OBS} if the structure factor set {F_h} was correct, and (2) the likelihood that an electron density map resulting from {F_h} is consistent with selected prior knowledge about the experimental crystal structure. The set of structure factors {F_h} is then adjusted to maximize the likelihood of {F_h} for the experimental crystal structure. An improved electron density map is constructed with the maximized structure factors.

  6. Extreme data compression for the CMB

    DOE PAGES

    Zablocki, Alan; Dodelson, Scott

    2016-04-28

    We apply Karhunen-Loève methods to cosmic microwave background (CMB) data sets, and show that we can recover the input cosmology and obtain the marginalized likelihoods in Λ cold dark matter cosmologies in under a minute, much faster than Markov chain Monte Carlo methods. This is achieved by forming a linear combination of the power spectra at each multipole l, and solving a system of simultaneous equations such that the Fisher matrix is locally unchanged. Instead of carrying out a full likelihood evaluation over the whole parameter space, we need to evaluate the likelihood only for the parameter of interest, with the data compression effectively marginalizing over all other parameters. The weighting vectors contain insight about the physical effects of the parameters on the CMB anisotropy power spectrum C_l. The shape and amplitude of these vectors give an intuitive feel for the physics of the CMB, the sensitivity of the observed spectrum to cosmological parameters, and the relative sensitivity of different experiments to cosmological parameters. We test this method on exact theory C_l as well as on a Wilkinson Microwave Anisotropy Probe (WMAP)-like CMB data set generated from a random realization of a fiducial cosmology, comparing the compression results to those from a full likelihood analysis using CosmoMC. Furthermore, after showing that the method works, we apply it to the temperature power spectrum from the WMAP seven-year data release, and discuss the successes and limitations of our method as applied to a real data set.
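
    The compression idea, one weighting vector per parameter chosen so the Fisher information is preserved, can be sketched for a toy Gaussian data vector with a parameter-dependent mean. The model, covariance, and fiducial values below are illustrative assumptions, not the CMB pipeline.

        # Schematic sketch of Karhunen-Loeve style data compression for a single
        # parameter: a weighting vector compresses the data to one number while
        # keeping the Fisher information.  Gaussian data with a parameter-dependent
        # mean and fixed covariance is assumed; this is not the Planck/WMAP code.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 200
        x = np.linspace(0, 1, n)

        def mean_model(theta):
            return theta * np.sin(2 * np.pi * x)          # toy "power spectrum" model

        cov = 0.05 * np.eye(n)                            # data covariance (known)
        cov_inv = np.linalg.inv(cov)

        theta_fid, eps = 1.0, 1e-4
        dmu = (mean_model(theta_fid + eps) - mean_model(theta_fid - eps)) / (2 * eps)

        # Weighting vector: b proportional to C^{-1} dmu/dtheta
        b = cov_inv @ dmu
        b /= np.sqrt(dmu @ cov_inv @ dmu)

        # Fisher information of the full data vs. of the single compressed number
        F_full = dmu @ cov_inv @ dmu
        F_comp = (b @ dmu) ** 2 / (b @ cov @ b)
        print(F_full, F_comp)                             # equal: compression is lossless here

        # Compressing one simulated data vector to a single number
        data = mean_model(theta_fid) + rng.multivariate_normal(np.zeros(n), cov)
        print("compressed statistic:", b @ data)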

  7. Estimating Function Approaches for Spatial Point Processes

    NASA Astrophysics Data System (ADS)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theories, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives to balance the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. However, the original second-order quasi-likelihood is barely feasible due to the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter, H. Third, we studied the quasi-likelihood type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to a more general setup than the original quasi-likelihood method.

  8. ARMA-Based SEM When the Number of Time Points T Exceeds the Number of Cases N: Raw Data Maximum Likelihood.

    ERIC Educational Resources Information Center

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2003-01-01

    Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)

  9. Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2014-02-01

    Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
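
    A minimal sketch of the general recipe, simulate the stochastic model repeatedly at a proposed parameter value, fit a parametric (here normal) distribution to a summary statistic, and use that approximate likelihood inside a Metropolis sampler, is given below. The toy Poisson simulator, summary statistic, and tuning values are assumptions for illustration and stand in for FORMIND and the inventory data.

        # Sketch of a simulation-based (parametric) likelihood approximation
        # placed inside a simple Metropolis sampler.  The "model" is a toy
        # stochastic simulator, not FORMIND.
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(2)

        def simulate(theta, n=50):
            # Toy stochastic "model": Poisson counts with rate theta.
            return rng.poisson(theta, size=n)

        obs_summary = simulate(4.0).mean()                # pretend field data, true theta = 4

        def log_approx_likelihood(theta, n_rep=30):
            if theta <= 0:
                return -np.inf
            summaries = np.array([simulate(theta).mean() for _ in range(n_rep)])
            mu, sd = summaries.mean(), summaries.std(ddof=1) + 1e-9
            return norm.logpdf(obs_summary, loc=mu, scale=sd)

        # Metropolis sampling with a flat prior on theta > 0
        theta, logL = 2.0, log_approx_likelihood(2.0)
        chain = []
        for _ in range(2000):
            prop = theta + rng.normal(scale=0.3)
            logL_prop = log_approx_likelihood(prop)
            if np.log(rng.random()) < logL_prop - logL:
                theta, logL = prop, logL_prop
            chain.append(theta)

        print("posterior mean of theta:", np.mean(chain[500:]))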

  10. Higher level phylogeny and the first divergence time estimation of Heteroptera (Insecta: Hemiptera) based on multiple genes.

    PubMed

    Li, Min; Tian, Ying; Zhao, Ying; Bu, Wenjun

    2012-01-01

    Heteroptera, or true bugs, are the largest, morphologically diverse and economically important group of insects with incomplete metamorphosis. However, the phylogenetic relationships within Heteroptera are still in dispute and most of the previous studies were based on morphological characters or on a single gene (partial or whole 18S rDNA). Besides, so far, divergence time estimates for Heteroptera rely entirely on the fossil record, while no studies have been performed on molecular divergence rates. Here, for the first time, we used maximum parsimony (MP), maximum likelihood (ML) and Bayesian inference (BI) with multiple genes (18S rDNA, 28S rDNA, 16S rDNA and COI) to estimate phylogenetic relationships among the infraorders, and meanwhile, the Penalized Likelihood (r8s) and Bayesian (BEAST) molecular dating methods were employed to estimate divergence times of higher taxa of this suborder. Major results of the present study included: Nepomorpha was placed as the most basal clade in all six trees (MP trees, ML trees and Bayesian trees of nuclear gene data and four-gene combined data, respectively) with full support values. The sister-group relationship of Cimicomorpha and Pentatomomorpha was also strongly supported. Nepomorpha originated in the early Triassic and the other six infraorders originated in a very short period of time in the middle Triassic. Cimicomorpha and Pentatomomorpha underwent a radiation at the family level in the Cretaceous, paralleling the proliferation of the flowering plants. Our results indicated that the higher-group radiations within hemimetabolous Heteroptera occurred simultaneously with those of holometabolous Coleoptera and Diptera, which took place in the Triassic. While the aquatic habitat was colonized by Nepomorpha already in the Triassic, the Gerromorpha independently adapted to the semi-aquatic habitat in the Early Jurassic.

  11. Higher Level Phylogeny and the First Divergence Time Estimation of Heteroptera (Insecta: Hemiptera) Based on Multiple Genes

    PubMed Central

    Zhao, Ying; Bu, Wenjun

    2012-01-01

    Heteroptera, or true bugs, are the largest, morphologically diverse and economically important group of insects with incomplete metamorphosis. However, the phylogenetic relationships within Heteroptera are still in dispute and most of the previous studies were based on morphological characters or on a single gene (partial or whole 18S rDNA). Besides, so far, divergence time estimates for Heteroptera rely entirely on the fossil record, while no studies have been performed on molecular divergence rates. Here, for the first time, we used maximum parsimony (MP), maximum likelihood (ML) and Bayesian inference (BI) with multiple genes (18S rDNA, 28S rDNA, 16S rDNA and COI) to estimate phylogenetic relationships among the infraorders, and meanwhile, the Penalized Likelihood (r8s) and Bayesian (BEAST) molecular dating methods were employed to estimate divergence times of higher taxa of this suborder. Major results of the present study included: Nepomorpha was placed as the most basal clade in all six trees (MP trees, ML trees and Bayesian trees of nuclear gene data and four-gene combined data, respectively) with full support values. The sister-group relationship of Cimicomorpha and Pentatomomorpha was also strongly supported. Nepomorpha originated in the early Triassic and the other six infraorders originated in a very short period of time in the middle Triassic. Cimicomorpha and Pentatomomorpha underwent a radiation at the family level in the Cretaceous, paralleling the proliferation of the flowering plants. Our results indicated that the higher-group radiations within hemimetabolous Heteroptera occurred simultaneously with those of holometabolous Coleoptera and Diptera, which took place in the Triassic. While the aquatic habitat was colonized by Nepomorpha already in the Triassic, the Gerromorpha independently adapted to the semi-aquatic habitat in the Early Jurassic. PMID:22384163

  12. Diagnosis of cervical cells based on fractal and Euclidian geometrical measurements: Intrinsic Geometric Cellular Organization

    PubMed Central

    2014-01-01

    Background Fractal geometry has been the basis for the development of a diagnosis of preneoplastic and neoplastic cells that resolves the indeterminacy of atypical squamous cells of undetermined significance (ASCUS). Methods Pictures of 40 cervix cytology samples diagnosed with conventional parameters were taken. A blind study was developed in which the clinical diagnosis of 10 normal cells, 10 ASCUS, 10 L-SIL and 10 H-SIL was masked. Cellular nucleus and cytoplasm were evaluated in the generalized Box-Counting space, calculating the fractal dimension and the number of spaces occupied by the frontier of each object. Further, the number of pixels occupied by the surface of each object was calculated. Later, the mathematical features of the measures were studied to establish differences or equalities useful for diagnostic application. Finally, the sensitivity, specificity, negative likelihood ratio and diagnostic concordance with the Kappa coefficient were calculated. Results Simultaneous measures of the nuclear surface and of the subtraction between the boundaries of cytoplasm and nucleus differentiate normality, L-SIL and H-SIL. Normality shows values less than or equal to 735 in nucleus surface and values greater than or equal to 161 in the cytoplasm-nucleus subtraction. L-SIL cells exhibit a nucleus surface with values greater than or equal to 972 and a cytoplasm-nucleus subtraction greater than 130. H-SIL cells show cytoplasm-nucleus values less than 120. The range 120–130 in the cytoplasm-nucleus subtraction corresponds to the evolution between L-SIL and H-SIL. Sensitivity and specificity values were 100%, the negative likelihood ratio was zero and the Kappa coefficient was equal to 1. Conclusions A new diagnostic methodology of clinical applicability was developed based on fractal and Euclidean geometry, which is useful for the evaluation of cervix cytology. PMID:24742118

  13. Pregnant Women's Perspectives on Expanded Carrier Screening.

    PubMed

    Propst, Lauren; Connor, Gwendolyn; Hinton, Megan; Poorvu, Tabitha; Dungan, Jeffrey

    2018-02-23

    Expanded carrier screening (ECS) is a relatively new carrier screening option that assesses many conditions simultaneously, as opposed to traditional ethnicity-based carrier screening for a limited number of conditions. This study aimed to explore pregnant women's perspectives on ECS, including reasons for electing or declining and anxiety associated with this decision-making. A total of 80 pregnant women were surveyed from Northwestern Medicine's Clinical Genetics Division after presenting for aneuploidy screening. Of the 80 participants, 40 elected and 40 declined ECS. Trends regarding reasons for electing or declining ECS include ethnicity, desire for genetic risk information, lack of family history, perceived likelihood of being a carrier, and perceived impact on reproductive decisions. Individuals who declined ECS seemed to prefer ethnicity-based carrier screening and believed that ECS would increase their anxiety, whereas individuals who elected ECS seemed to prefer more screening and tended to believe that ECS would reduce their anxiety. These findings provide insight on decision-making with regard to ECS and can help guide interactions that genetic counselors and other healthcare providers have with patients, including assisting patients in the decision-making process.

  14. An overall strategy based on regression models to estimate relative survival and model the effects of prognostic factors in cancer survival studies.

    PubMed

    Remontet, L; Bossard, N; Belot, A; Estève, J

    2007-05-10

    Relative survival provides a measure of the proportion of patients dying from the disease under study without requiring knowledge of the cause of death. We propose an overall strategy based on regression models to estimate the relative survival and model the effects of potential prognostic factors. The baseline hazard was modelled up to 10 years of follow-up using parametric continuous functions. Six models including cubic regression splines were considered and the Akaike Information Criterion was used to select the final model. This approach yielded smooth and reliable estimates of the mortality hazard and allowed us to deal with sparse data while taking into account all the available information. Splines were also used to model simultaneously non-linear effects of continuous covariates and time-dependent hazard ratios. This led to a graphical representation of the hazard ratio that can be useful for clinical interpretation. Estimates of these models were obtained by likelihood maximization. We showed that these estimates could also be obtained using standard algorithms for Poisson regression. Copyright 2006 John Wiley & Sons, Ltd.
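
    A reduced version of the excess-hazard idea can be written as a Poisson likelihood: observed deaths in each follow-up interval are modelled as expected (background) deaths plus person-time multiplied by a piecewise-constant excess hazard. The sketch below uses hypothetical aggregated counts and omits the splines and covariates of the full strategy.

        # Minimal sketch of relative-survival (excess hazard) estimation via a
        # Poisson likelihood with a piecewise-constant excess hazard.
        # All data values are hypothetical.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import gammaln

        deaths     = np.array([30, 22, 15, 10])          # observed deaths per yearly interval
        persontime = np.array([480., 420., 380., 350.])  # person-years at risk per interval
        expected   = np.array([5.1, 4.6, 4.2, 3.9])      # deaths expected from background mortality

        def neg_log_likelihood(log_excess):
            excess = np.exp(log_excess)                  # one excess-hazard rate per interval
            mu = expected + persontime * excess          # Poisson mean per interval
            return -np.sum(deaths * np.log(mu) - mu - gammaln(deaths + 1))

        fit = minimize(neg_log_likelihood, x0=np.full(4, -3.0), method="Nelder-Mead")
        excess_hat = np.exp(fit.x)
        print("excess hazard per interval:", excess_hat)
        print("relative survival at 4 years:", np.exp(-excess_hat.sum()))  # 1-year-wide intervals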

  15. Quantitative modeling of reservoir-triggered seismicity

    NASA Astrophysics Data System (ADS)

    Hainzl, S.; Catalli, F.; Dahm, T.; Heinicke, J.; Woith, H.

    2017-12-01

    Reservoir-triggered seismicity might occur as the response to crustal stresses caused by the poroelastic response to the weight of the water volume and by fluid diffusion. Several cases of high correlation have been found in the past decades. However, crustal stresses might be altered by many other processes such as continuous tectonic stressing and coseismic stress changes. Because reservoir-triggered stresses decay quickly with distance, even tidal or rainfall-triggered stresses might be of similar size at depth. To account for simultaneous stress sources in a physically meaningful way, we apply a seismicity model based on calculated stress changes in the crust and laboratory-derived friction laws. Based on the observed seismicity, the model parameters can be determined by the maximum likelihood method. The model leads to quantitative predictions of the variations of seismicity rate in space and time which can be used for hypothesis testing and forecasting. For case studies in Talala (India), Val d'Agri (Italy) and Novy Kostel (Czech Republic), we show the comparison of predicted and observed seismicity, demonstrating the potential and limitations of the approach.

  16. Coalescent-based species tree inference from gene tree topologies under incomplete lineage sorting by maximum likelihood.

    PubMed

    Wu, Yufeng

    2012-03-01

    Incomplete lineage sorting can cause incongruence between the phylogenetic history of genes (the gene tree) and that of the species (the species tree), which can complicate the inference of phylogenies. In this article, I present a new coalescent-based algorithm for species tree inference with maximum likelihood. I first describe an improved method for computing the probability of a gene tree topology given a species tree, which is much faster than an existing algorithm by Degnan and Salter (2005). Based on this method, I develop a practical algorithm that takes a set of gene tree topologies and infers species trees with maximum likelihood. This algorithm searches for the best species tree by starting from initial species trees and performing heuristic search to obtain better trees with higher likelihood. This algorithm, called STELLS (which stands for Species Tree InfErence with Likelihood for Lineage Sorting), has been implemented in a program that is downloadable from the author's web page. The simulation results show that the STELLS algorithm is more accurate than an existing maximum likelihood method for many datasets, especially when there is noise in gene trees. I also show that the STELLS algorithm is efficient and can be applied to real biological datasets. © 2011 The Author. Evolution© 2011 The Society for the Study of Evolution.

  17. Application of the Elaboration Likelihood Model of Attitude Change to Assertion Training.

    ERIC Educational Resources Information Center

    Ernst, John M.; Heesacker, Martin

    1993-01-01

    College students (n=113) participated in study comparing effects of elaboration likelihood model (ELM) based assertion workshop with those of typical assertion workshop. ELM-based workshop was significantly better at producing favorable attitude change, greater intention to act assertively, and more favorable evaluations of workshop content.…

  18. Audio-visual speech cue combination.

    PubMed

    Arnold, Derek H; Tear, Morgan; Schindel, Ryan; Roseboom, Warrick

    2010-04-16

    Different sources of sensory information can interact, often shaping what we think we have seen or heard. This can enhance the precision of perceptual decisions relative to those made on the basis of a single source of information. From a computational perspective, there are multiple reasons why this might happen, and each predicts a different degree of enhanced precision. Relatively slight improvements can arise when perceptual decisions are made on the basis of multiple independent sensory estimates, as opposed to just one. These improvements can arise as a consequence of probability summation. Greater improvements can occur if two initially independent estimates are summated to form a single integrated code, especially if the summation is weighted in accordance with the variance associated with each independent estimate. This form of combination is often described as a Bayesian maximum likelihood estimate. Still greater improvements are possible if the two sources of information are encoded via a common physiological process. Here we show that the provision of simultaneous audio and visual speech cues can result in substantial sensitivity improvements, relative to single sensory modality based decisions. The magnitude of the improvements is greater than can be predicted on the basis of either a Bayesian maximum likelihood estimate or a probability summation. Our data suggest that primary estimates of speech content are determined by a physiological process that takes input from both visual and auditory processing, resulting in greater sensitivity than would be possible if initially independent audio and visual estimates were formed and then subsequently combined.
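
    The Bayesian maximum likelihood (inverse-variance weighted) combination rule referred to above has a simple closed form; the sketch below uses illustrative single-cue estimates and variances to show that the combined variance falls below either single-cue variance.

        # Minimal sketch of inverse-variance weighted (maximum likelihood) cue
        # combination for two independent cues; numbers are illustrative only.
        import numpy as np

        audio_est, audio_var = 0.8, 0.20     # estimate and variance from the auditory cue
        visual_est, visual_var = 1.1, 0.05   # estimate and variance from the visual cue

        w_a = (1 / audio_var) / (1 / audio_var + 1 / visual_var)
        w_v = 1 - w_a
        combined_est = w_a * audio_est + w_v * visual_est
        combined_var = 1 / (1 / audio_var + 1 / visual_var)

        print(combined_est, combined_var)    # combined variance is below either single-cue variance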

  19. An Informative Interpretation of Decision Theory: The Information Theoretic Basis for Signal-to-Noise Ratio and Log Likelihood Ratio

    DOE PAGES

    Polcari, J.

    2013-08-16

    The signal processing concept of signal-to-noise ratio (SNR), in its role as a performance measure, is recast within the more general context of information theory, leading to a series of useful insights. Establishing generalized SNR (GSNR) as a rigorous information theoretic measure inherent in any set of observations significantly strengthens its quantitative performance pedigree while simultaneously providing a specific definition under general conditions. This directly leads to consideration of the log likelihood ratio (LLR): first, as the simplest possible information-preserving transformation (i.e., signal processing algorithm) and subsequently, as an absolute, comparable measure of information for any specific observation exemplar. Furthermore, the information accounting methodology that results permits practical use of both GSNR and LLR as diagnostic scalar performance measurements, directly comparable across alternative system/algorithm designs, applicable at any tap point within any processing string, in a form that is also comparable with the inherent performance bounds due to information conservation.
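
    For a concrete, if much simpler, case than the paper's general treatment: for a known signal in white Gaussian noise the LLR reduces to a matched-filter statistic, and the associated output SNR is fixed by the signal energy and noise variance. The template, noise level, and scaling below are toy assumptions.

        # Sketch of the log likelihood ratio (LLR) for a known signal in white
        # Gaussian noise and the SNR of the matched-filter statistic it reduces to.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 256
        s = np.sin(2 * np.pi * 8 * np.arange(n) / n)      # known signal template
        sigma2 = 1.0                                      # noise variance

        x = s + rng.normal(scale=np.sqrt(sigma2), size=n) # one observation with signal present

        # LLR = log p(x | signal present) - log p(x | noise only)
        llr = (x @ s - 0.5 * s @ s) / sigma2
        print("LLR for this observation:", llr)

        # Output SNR of the matched-filter statistic x @ s
        snr = (s @ s) / sigma2
        print("output SNR (= 2 * mean LLR when the signal is present):", snr)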

  20. A longitudinal analysis of the impact of hospital service line profitability on the likelihood of readmission.

    PubMed

    Navathe, Amol S; Volpp, Kevin G; Konetzka, R Tamara; Press, Matthew J; Zhu, Jingsan; Chen, Wei; Lindrooth, Richard C

    2012-08-01

    Quality of care may be linked to the profitability of admissions in addition to level of reimbursement. Prior policy reforms reduced payments that differentially affected the average profitability of various admission types. The authors estimated a Cox competing risks model, controlling for the simultaneous risk of mortality post discharge, to determine whether the average profitability of hospital service lines to which a patient was admitted was associated with the likelihood of readmission within 30 days. The sample included 12,705,933 Medicare Fee for Service discharges from 2,438 general acute care hospitals during 1997, 2001, and 2005. There was no evidence of an association between changes in average service line profitability and changes in readmission risk, even when controlling for risk of mortality. These findings are reassuring in that the profitability of patients' admissions did not affect readmission rates, and together with other evidence may suggest that readmissions are not an unambiguous quality indicator for in-hospital care.

  1. 16S rRNA gene-based phylogenetic microarray for simultaneous identification of members of the genus Burkholderia.

    PubMed

    Schönmann, Susan; Loy, Alexander; Wimmersberger, Céline; Sobek, Jens; Aquino, Catharine; Vandamme, Peter; Frey, Beat; Rehrauer, Hubert; Eberl, Leo

    2009-04-01

    For cultivation-independent and highly parallel analysis of members of the genus Burkholderia, an oligonucleotide microarray (phylochip) consisting of 131 hierarchically nested 16S rRNA gene-targeted oligonucleotide probes was developed. A novel primer pair was designed for selective amplification of a 1.3 kb 16S rRNA gene fragment of Burkholderia species prior to microarray analysis. The diagnostic performance of the microarray for identification and differentiation of Burkholderia species was tested with 44 reference strains of the genera Burkholderia, Pandoraea, Ralstonia and Limnobacter. Hybridization patterns based on presence/absence of probe signals were interpreted semi-automatically using the novel likelihood-based strategy of the web-tool PhyloDetect. Eighty-eight per cent of the reference strains were correctly identified at the species level. The evaluated microarray was applied to investigate shifts in the Burkholderia community structure in acidic forest soil upon addition of cadmium, a condition that selected for Burkholderia species. The microarray results were in agreement with those obtained from phylogenetic analysis of Burkholderia 16S rRNA gene sequences recovered from the same cadmium-contaminated soil, demonstrating the value of the Burkholderia phylochip for determinative and environmental studies.

  2. Degradation data analysis based on a generalized Wiener process subject to measurement error

    NASA Astrophysics Data System (ADS)

    Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar

    2017-09-01

    Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure and measurement error into consideration simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Model parameters can then be estimated based on a maximum likelihood estimation (MLE) method. The cumulative distribution function (CDF) and the probability density function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is carried out to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach can derive reasonable results with enhanced inference precision.
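
    A stripped-down special case of the model, a basic Wiener degradation process without measurement error or unit-to-unit variation, admits closed-form maximum likelihood estimates and an inverse-Gaussian first-hitting-time distribution. The simulation settings and failure threshold below are hypothetical.

        # Sketch of maximum likelihood estimation for a basic Wiener degradation
        # process X(t) = mu*t + sigma*B(t) observed at discrete times; a simplified
        # special case of the generalized model in the paper.
        import numpy as np
        from scipy.stats import invgauss

        rng = np.random.default_rng(4)
        mu_true, sigma_true = 0.5, 0.2
        t = np.linspace(0, 10, 51)
        dt = np.diff(t)

        # Simulate 20 units; increments are independent N(mu*dt, sigma^2*dt)
        increments = rng.normal(mu_true * dt, sigma_true * np.sqrt(dt), size=(20, len(dt)))

        # Closed-form MLEs for drift and diffusion from the increments
        mu_hat = increments.sum() / (20 * t[-1])
        sigma2_hat = np.mean((increments - mu_hat * dt) ** 2 / dt)
        print("estimated drift:", mu_hat, "estimated sigma:", np.sqrt(sigma2_hat))

        # First-hitting-time view: time to cross threshold D follows an inverse
        # Gaussian distribution with mean D/mu and shape (D/sigma)^2.
        D = 8.0
        mean_ig = D / mu_hat
        lam = D ** 2 / sigma2_hat
        print("P(failure by t=20):", invgauss.cdf(20, mu=mean_ig / lam, scale=lam))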

  3. Spatial capture-recapture models for jointly estimating population density and landscape connectivity

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.

    2013-01-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
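
    The "ecological distance" encounter model can be sketched by computing least-cost path distances over a resistance surface with a shortest-path routine and feeding them into a half-normal detection function. The grid, resistance values, and detection parameters below are hypothetical, and the sketch omits the full SCR likelihood.

        # Sketch of an "ecological distance" encounter model: least-cost path
        # distance from a trap over a resistance surface, plugged into a
        # half-normal detection function.  Values are hypothetical.
        import numpy as np
        from scipy.sparse import lil_matrix
        from scipy.sparse.csgraph import dijkstra

        nrow, ncol = 20, 20
        rng = np.random.default_rng(5)
        resistance = np.exp(rng.normal(0, 0.5, size=(nrow, ncol)))   # cost per cell

        def node(r, c):
            return r * ncol + c

        # Graph over 4-neighbour moves; edge cost = average resistance of the two cells
        graph = lil_matrix((nrow * ncol, nrow * ncol))
        for r in range(nrow):
            for c in range(ncol):
                for dr, dc in ((0, 1), (1, 0)):
                    r2, c2 = r + dr, c + dc
                    if r2 < nrow and c2 < ncol:
                        w = 0.5 * (resistance[r, c] + resistance[r2, c2])
                        graph[node(r, c), node(r2, c2)] = w
                        graph[node(r2, c2), node(r, c)] = w

        trap = node(10, 10)
        dist = dijkstra(graph.tocsr(), indices=trap)     # least-cost distance to every cell

        # Half-normal encounter probability based on ecological distance
        p0, sigma = 0.3, 4.0
        p_detect = p0 * np.exp(-dist ** 2 / (2 * sigma ** 2))
        print("detection prob. at activity center (0, 0):", p_detect[node(0, 0)])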

  4. Spatial capture--recapture models for jointly estimating population density and landscape connectivity.

    PubMed

    Royle, J Andrew; Chandler, Richard B; Gazenski, Kimberly D; Graves, Tabitha A

    2013-02-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture--recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on "ecological distance," i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture-recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture-recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.

  5. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.

  6. Assessment of parametric uncertainty for groundwater reactive transport modeling

    USGS Publications Warehouse

    Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun

    2014-01-01

    The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.

  7. Likelihood-Based Confidence Intervals in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Oort, Frans J.

    2011-01-01

    In exploratory or unrestricted factor analysis, all factor loadings are free to be estimated. In oblique solutions, the correlations between common factors are free to be estimated as well. The purpose of this article is to show how likelihood-based confidence intervals can be obtained for rotated factor loadings and factor correlations, by…

  8. Effects of Estimation Bias on Multiple-Category Classification with an IRT-Based Adaptive Classification Procedure

    ERIC Educational Resources Information Center

    Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.

    2006-01-01

    The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…

  9. Rules or consequences? The role of ethical mind-sets in moral dynamics.

    PubMed

    Cornelissen, Gert; Bashshur, Michael R; Rode, Julian; Le Menestrel, Marc

    2013-04-01

    Recent research on the dynamics of moral behavior has documented two contrasting phenomena: moral consistency and moral balancing. Moral balancing refers to the phenomenon whereby behaving ethically or unethically decreases the likelihood of engaging in the same type of behavior again later. Moral consistency describes the opposite pattern: engaging in ethical or unethical behavior increases the likelihood of engaging in the same type of behavior later on. The three studies reported here supported the hypothesis that individuals' ethical mind-set (i.e., outcome-based vs. rule-based) moderates the impact of an initial ethical or unethical act on the likelihood of behaving ethically on a subsequent occasion. More specifically, an outcome-based mind-set facilitated moral balancing, and a rule-based mind-set facilitated moral consistency.

  10. Closed-loop carrier phase synchronization techniques motivated by likelihood functions

    NASA Technical Reports Server (NTRS)

    Tsou, H.; Hinedi, S.; Simon, M.

    1994-01-01

    This article reexamines the notion of closed-loop carrier phase synchronization motivated by the theory of maximum a posteriori phase estimation with emphasis on the development of new structures based on both maximum-likelihood and average-likelihood functions. The criterion of performance used for comparison of all the closed-loop structures discussed is the mean-squared phase error for a fixed-loop bandwidth.

  11. Maximum likelihood estimation of signal-to-noise ratio and combiner weight

    NASA Technical Reports Server (NTRS)

    Kalson, S.; Dolinar, S. J.

    1986-01-01

    An algorithm for estimating the signal-to-noise ratio and combiner weight parameters for a discrete time series is presented. The algorithm is based upon the joint maximum likelihood estimate of the signal and noise power. The discrete time series are the sufficient statistics obtained after matched filtering of a biphase-modulated signal in additive white Gaussian noise, before maximum likelihood decoding is performed.
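
    A simplified, decision-directed version of the joint estimate (not the article's exact algorithm) can be sketched as follows, assuming the matched-filter outputs are well modeled as an unknown amplitude times ±1 symbols plus Gaussian noise; the signal parameters are illustrative.

        # Sketch of joint estimation of signal amplitude and noise power from
        # matched-filter outputs of a biphase (+/-1) modulated signal, using
        # symbol decisions (a high-SNR approximation of the joint ML estimate).
        import numpy as np

        rng = np.random.default_rng(6)
        n = 5000
        a_true, noise_var_true = 1.0, 0.5
        symbols = rng.choice([-1.0, 1.0], size=n)
        y = a_true * symbols + rng.normal(scale=np.sqrt(noise_var_true), size=n)

        d_hat = np.sign(y)                          # symbol decisions
        a_hat = np.mean(d_hat * y)                  # amplitude estimate
        noise_var_hat = np.mean((y - a_hat * d_hat) ** 2)
        snr_hat = a_hat ** 2 / noise_var_hat

        print("estimated SNR:", snr_hat, "true SNR:", a_true ** 2 / noise_var_true)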

  12. Wildlife tradeoffs based on landscape models of habitat

    USGS Publications Warehouse

    Loehle, C.; Mitchell, M.S.

    2000-01-01

    It is becoming increasingly clear that the spatial structure of landscapes affects the habitat choices and abundance of wildlife. In contrast to wildlife management based on preservation of critical habitat features such as nest sites on a beach or mast trees, it has not been obvious how to incorporate spatial structure into management plans. We present techniques to accomplish this goal. We used multiscale logistic regression models developed previously for neotropical migrant bird species habitat use in South Carolina (USA) as a basis for these techniques. Based on these models we used a spatial optimization technique to generate optimal maps (probability of occurrence, P = 1.0) for each of seven species. To emulate management of a forest for maximum species diversity, we defined the objective function of the algorithm as the sum of probabilities over the seven species, resulting in a complex map that allowed all seven species to coexist. The map that allowed for coexistence is not obvious, must be computed algorithmically, and would be difficult to realize using rules of thumb for habitat management. To assess how management of a forest for a single species of interest might affect other species, we analyzed tradeoffs by gradually increasing the weighting on a single species in the objective function over a series of simulations. We found that as habitat was increasingly modified to favor that species, the probability of presence for two of the other species was driven to zero. This shows that whereas it is not possible to simultaneously maximize the likelihood of presence for multiple species with divergent habitat preferences, compromise solutions are possible at less than maximal likelihood in many cases. Our approach suggests that the efficiency of habitat management for species diversity can be maximized even for small landscapes by incorporating spatial context. The methods we present are suitable for wildlife management, endangered species conservation, and nature reserve design.

  13. Equivalence between Step Selection Functions and Biased Correlated Random Walks for Statistical Inference on Animal Movement.

    PubMed

    Duchesne, Thierry; Fortin, Daniel; Rivest, Louis-Paul

    2015-01-01

    Animal movement has a fundamental impact on population and community structure and dynamics. Biased correlated random walks (BCRW) and step selection functions (SSF) are commonly used to study movements. Because no studies have contrasted the parameters and the statistical properties of their estimators for models constructed under these two Lagrangian approaches, it remains unclear whether or not they allow for similar inference. First, we used the Weak Law of Large Numbers to demonstrate that the log-likelihood function for estimating the parameters of BCRW models can be approximated by the log-likelihood of SSFs. Second, we illustrated the link between the two approaches by fitting BCRW with maximum likelihood and with SSF to simulated movement data in virtual environments and to the trajectory of bison (Bison bison L.) trails in natural landscapes. Using simulated and empirical data, we found that the parameters of a BCRW estimated directly from maximum likelihood and by fitting an SSF were remarkably similar. Movement analysis is increasingly used as a tool for understanding the influence of landscape properties on animal distribution. In the rapidly developing field of movement ecology, management and conservation biologists must decide which method they should implement to accurately assess the determinants of animal movement. We showed that BCRW and SSF can provide similar insights into the environmental features influencing animal movements. Both techniques have advantages. BCRW has already been extended to allow for multi-state modeling. Unlike BCRW, however, SSF can be estimated using most statistical packages, it can simultaneously evaluate habitat selection and movement biases, and can easily integrate a large number of movement taxes at multiple scales. SSF thus offers a simple, yet effective, statistical technique to identify movement taxis.
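
    The SSF side of the comparison amounts to a conditional-logit likelihood in which each observed step is contrasted with a set of available steps. The sketch below fits that likelihood to simulated covariates; the number of controls, covariates, and coefficients are assumptions, not the bison data.

        # Sketch of a step selection function (SSF) likelihood: each observed step
        # is compared with available (control) steps in a conditional-logit form,
        # and the selection coefficients are estimated by maximum likelihood.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(7)
        n_steps, n_controls, n_cov = 300, 10, 2
        beta_true = np.array([1.0, -0.5])

        # Covariates for 1 observed + n_controls available steps per stratum
        x = rng.normal(size=(n_steps, n_controls + 1, n_cov))
        util = x @ beta_true
        probs = np.exp(util) / np.exp(util).sum(axis=1, keepdims=True)
        for i in range(n_steps):                     # sample which candidate is "used"
            j = rng.choice(n_controls + 1, p=probs[i])
            x[i, [0, j]] = x[i, [j, 0]]              # put the used step at index 0

        def neg_log_likelihood(beta):
            s = x @ beta                             # shape (n_steps, candidates)
            return -np.sum(s[:, 0] - np.log(np.exp(s).sum(axis=1)))

        fit = minimize(neg_log_likelihood, x0=np.zeros(n_cov))
        print("estimated selection coefficients:", fit.x)   # should be near beta_true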

  14. The influences of parental divorce and maternal-versus-paternal alcohol abuse on offspring lifetime suicide attempt.

    PubMed

    Thompson, Ronald G; Alonzo, Dana; Hu, Mei-Chen; Hasin, Deborah S

    2017-05-01

    Research indicates that parental divorce and parental alcohol abuse independently increase likelihood of offspring lifetime suicide attempt. However, when experienced together, only parental alcohol abuse significantly increased odds of suicide attempt. It is unclear to what extent differences in the effect of maternal versus paternal alcohol use exist on adult offspring lifetime suicide attempt risk. This study examined the influences of parental divorce and maternal-paternal histories of alcohol problems on adult offspring lifetime suicide attempt. The sample consisted of participants from the 2001-2002 National Epidemiological Survey on Alcohol and Related Conditions. The simultaneous effect of childhood or adolescent parental divorce and maternal and paternal history of alcohol problems on offspring lifetime suicide attempt was estimated using a logistic regression model with an interaction term for demographics and parental history of other emotional and behavioural problems. Parental divorce and maternal-paternal alcohol problems interacted to differentially influence the likelihood of offspring lifetime suicide attempt. Experiencing parental divorce and either maternal or paternal alcohol problems nearly doubled the likelihood of suicide attempt. Divorce and history of alcohol problems for both parents tripled the likelihood. Individuals who experienced parental divorce as children or adolescents and who have a parent who abuses alcohol are at elevated risk for lifetime suicide attempt. These problem areas should become a routine part of assessment to better identify those at risk for lifetime suicide attempt and to implement early and targeted intervention to decrease such risk. [Thompson RG Jr, Alonzo D, Hu M-C, Hasin DS. The influences of parental divorce and maternal-versus-paternal alcohol abuse on offspring lifetime suicide attempt. Drug Alcohol Rev 2017;36:408-414]. © 2016 Australasian Professional Society on Alcohol and other Drugs.

  15. Measurement of CIB power spectra with CAM-SPEC from Planck HFI maps

    NASA Astrophysics Data System (ADS)

    Mak, Suet Ying; Challinor, Anthony; Efstathiou, George; Lagache, Guilaine

    2015-08-01

    We present new measurements of the cosmic infrared background (CIB) anisotropies and its first likelihood using Planck HFI data at 353, 545, and 857 GHz. The measurements are based on cross-frequency power spectra and likelihood analysis using the CAM-SPEC package, rather than map-based template removal of foregrounds as done in previous Planck CIB analysis. We construct the likelihood of the CIB temperature fluctuations, an extension of the CAM-SPEC likelihood used in CMB analysis to higher frequencies, and use it to derive the best estimate of the CIB power spectrum over three decades in multipole moment, l, covering 50 ≤ l ≤ 2500. We adopt parametric models of the CIB and foreground contaminants (Galactic cirrus, infrared point sources, and cosmic microwave background anisotropies), and calibrate the dataset uniformly across frequencies with known Planck beam and noise properties in the likelihood construction. We validate our likelihood through simulations and an extensive suite of consistency tests, and assess the impact of instrumental and data selection effects on the final CIB power spectrum constraints. Two approaches are developed for interpreting the CIB power spectrum. The first is based on a simple parametric model which models the cross-frequency power using amplitudes, correlation coefficients, and a known multipole dependence. The second is based on physical models for galaxy clustering and the evolution of infrared emission of galaxies. The new approaches fit all auto- and cross-power spectra very well, with a best fit of χ²ν = 1.04 (parametric model). Using the best foreground solution, we find that the cleaned CIB power spectra are in good agreement with previous Planck and Herschel measurements.

  16. Program for Weibull Analysis of Fatigue Data

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2005-01-01

    A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) Maximum-likelihood estimates of the Weibull-distribution parameters; (2) Data for contour plots of relative likelihood for two parameters; (3) Data for contour plots of joint confidence regions; (4) Data for the profile likelihood of the Weibull-distribution parameters; (5) Data for the profile likelihood of any percentile of the distribution; and (6) Likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension of tests is "censoring"). The analytical approach followed in this program is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by use of the likelihood-ratio method.
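    As a hedged illustration of the likelihood such a program maximizes (the data below are invented and this is not the NASA code), a two-parameter Weibull can be fitted to type-I censored fatigue lives by letting failures contribute the log-density and suspended tests the log-survival probability:

      # Sketch: Weibull maximum-likelihood fit with type-I censoring (illustrative data).
      import numpy as np
      from scipy.optimize import minimize

      def weibull_negloglik(params, t, failed):
          shape, scale = np.exp(params)          # log-parameterization keeps both positive
          z = t / scale
          logpdf = np.log(shape / scale) + (shape - 1) * np.log(z) - z ** shape
          logsurv = -z ** shape                  # censored units contribute the survival term
          return -np.sum(np.where(failed, logpdf, logsurv))

      t = np.array([120., 150., 180., 210., 240., 300., 300., 300.])   # lives; last three suspended
      failed = np.array([1, 1, 1, 1, 1, 0, 0, 0], dtype=bool)

      fit = minimize(weibull_negloglik, x0=np.log([2.0, 200.0]), args=(t, failed), method="Nelder-Mead")
      shape_hat, scale_hat = np.exp(fit.x)
      print(shape_hat, scale_hat)

    Likelihood-ratio confidence intervals of the kind the program reports can then be obtained by profiling this same objective over one parameter at a time.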

  17. Estimation of proportions in mixed pixels through their region characterization

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B. (Principal Investigator)

    1981-01-01

    A region of mixed pixels can be characterized through the probability density function of proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of probability density functions of proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, then expressions are developed for obtaining the estimates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squares of errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.

  18. Internalized HIV and Drug Stigmas: Interacting Forces Threatening Health Status and Health Service Utilization Among People with HIV Who Inject Drugs in St. Petersburg, Russia

    PubMed Central

    Burke, Sara E.; Dovidio, John F.; Levina, Olga S.; Uusküla, Anneli; Niccolai, Linda M.; Heimer, Robert

    2016-01-01

    Marked overlap between the HIV and injection drug use epidemics in St. Petersburg, Russia, puts many people in need of health services at risk for stigmatization based on both characteristics simultaneously. The current study examined the independent and interactive effects of internalized HIV and drug stigmas on health status and health service utilization among 383 people with HIV who inject drugs in St. Petersburg. Participants self-reported internalized HIV stigma, internalized drug stigma, health status (subjective rating and symptom count), health service utilization (HIV care and drug treatment), sociodemographic characteristics, and health/behavioral history. For both forms of internalized stigma, greater stigma was correlated with poorer health and lower likelihood of service utilization. HIV and drug stigmas interacted to predict symptom count, HIV care, and drug treatment such that individuals internalizing high levels of both stigmas were at elevated risk for experiencing poor health and less likely to access health services. PMID:26050155

  19. C-arm technique using distance driven method for nephrolithiasis and kidney stones detection

    NASA Astrophysics Data System (ADS)

    Malalla, Nuhad; Sun, Pengfei; Chen, Ying; Lipkin, Michael E.; Preminger, Glenn M.; Qin, Jun

    2016-04-01

    Distance-driven projection is a state-of-the-art method used for reconstruction in x-ray imaging. C-arm tomography is an x-ray imaging technique that provides three-dimensional information about the object by moving the C-shaped gantry around the patient. With a limited view angle, the C-arm system was investigated to generate volumetric data of the object with low radiation dosage and short examination time. This paper presents a new simulation study of two reconstruction methods based on the distance-driven approach: the simultaneous algebraic reconstruction technique (SART) and maximum-likelihood expectation maximization (MLEM). The distance-driven method is efficient, with low computational cost, and avoids the artifacts associated with other methods such as ray-driven and pixel-driven approaches. Projection images of spherical objects were simulated with a virtual C-arm system with a total view angle of 40 degrees. Results show the ability of the limited-angle C-arm technique to generate three-dimensional images with distance-driven reconstruction.
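    For reference, the MLEM update itself (independent of the projector used) is the multiplicative scheme sketched below; the toy system matrix A, measured counts y, and iteration count are assumptions for illustration only.

      # Sketch of the MLEM iteration for emission tomography (toy system, not the paper's code).
      import numpy as np

      def mlem(A, y, n_iter=50, eps=1e-12):
          x = np.ones(A.shape[1])                        # start from a uniform image
          sens = A.T @ np.ones(A.shape[0])               # sensitivity image (backprojection of ones)
          for _ in range(n_iter):
              proj = A @ x                               # forward projection of current estimate
              ratio = y / np.maximum(proj, eps)          # measured / estimated counts
              x *= (A.T @ ratio) / np.maximum(sens, eps) # multiplicative ML-EM update
          return x

      A = np.array([[1., 1., 0.], [0., 1., 1.]])         # toy 2-ray, 3-pixel system
      y = np.array([5., 7.])
      print(mlem(A, y, n_iter=100))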

  20. Neural Mechanisms for Integrating Prior Knowledge and Likelihood in Value-Based Probabilistic Inference

    PubMed Central

    Ting, Chih-Chung; Yu, Chia-Chen; Maloney, Laurence T.

    2015-01-01

    In Bayesian decision theory, knowledge about the probabilities of possible outcomes is captured by a prior distribution and a likelihood function. The prior reflects past knowledge and the likelihood summarizes current sensory information. The two combined (integrated) form a posterior distribution that allows estimation of the probability of different possible outcomes. In this study, we investigated the neural mechanisms underlying Bayesian integration using a novel lottery decision task in which both prior knowledge and likelihood information about reward probability were systematically manipulated on a trial-by-trial basis. Consistent with Bayesian integration, as sample size increased, subjects tended to weigh likelihood information more compared with prior information. Using fMRI in humans, we found that the medial prefrontal cortex (mPFC) correlated with the mean of the posterior distribution, a statistic that reflects the integration of prior knowledge and likelihood of reward probability. Subsequent analysis revealed that both prior and likelihood information were represented in mPFC and that the neural representations of prior and likelihood in mPFC reflected changes in the behaviorally estimated weights assigned to these different sources of information in response to changes in the environment. Together, these results establish the role of mPFC in prior-likelihood integration and highlight its involvement in representing and integrating these distinct sources of information. PMID:25632152
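    The integration being manipulated can be illustrated with a toy beta-binomial calculation (not the authors' analysis; the prior and sample values are invented): with few observations the posterior mean stays near the prior, and with many it moves toward the observed proportion, mirroring the reported shift in weighting as sample size increases.

      # Toy sketch of prior-likelihood integration for a reward probability.
      from scipy.stats import beta

      def posterior_mean(prior_a, prior_b, successes, n):
          # Beta prior + binomial likelihood -> Beta posterior; return its mean
          return beta(prior_a + successes, prior_b + n - successes).mean()

      print(posterior_mean(8, 2, successes=2, n=5))    # small sample: prior dominates (~0.67)
      print(posterior_mean(8, 2, successes=20, n=50))  # large sample: likelihood dominates (~0.47)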

  1. Handwriting individualization using distance and rarity

    NASA Astrophysics Data System (ADS)

    Tang, Yi; Srihari, Sargur; Srinivasan, Harish

    2012-01-01

    Forensic individualization is the task of associating observed evidence with a specific source. The likelihood ratio (LR) is a quantitative measure that expresses the degree of uncertainty in individualization, where the numerator represents the likelihood that the evidence corresponds to the known and the denominator the likelihood that it does not correspond to the known. Since the number of parameters needed to compute the LR is exponential with the number of feature measurements, a commonly used simplification is the use of likelihoods based on distance (or similarity) given the two alternative hypotheses. This paper proposes an intermediate method which decomposes the LR as the product of two factors, one based on distance and the other on rarity. It was evaluated using a data set of handwriting samples, by determining whether two writing samples were written by the same/different writer(s). The accuracy of the distance and rarity method, as measured by error rates, is significantly better than the distance method.
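    A hedged sketch of the proposed decomposition (the Gaussian distance distributions and the rarity value below are invented placeholders, not quantities estimated from the handwriting corpus): the LR is written as a distance factor, p(d | same writer) / p(d | different writers), multiplied by a rarity factor that up-weights agreement on uncommon features.

      # Sketch: likelihood ratio as (distance factor) x (rarity factor), illustrative values only.
      from scipy.stats import norm

      def likelihood_ratio(d, same_mu, same_sigma, diff_mu, diff_sigma, feature_freq):
          distance_factor = norm.pdf(d, same_mu, same_sigma) / norm.pdf(d, diff_mu, diff_sigma)
          rarity_factor = 1.0 / feature_freq     # rarer shared features give stronger evidence
          return distance_factor * rarity_factor

      # small distance between samples, shared feature seen in 5% of writers
      print(likelihood_ratio(d=0.4, same_mu=0.3, same_sigma=0.2,
                             diff_mu=1.2, diff_sigma=0.5, feature_freq=0.05))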

  2. Penalized maximum likelihood simultaneous longitudinal PET image reconstruction with difference-image priors.

    PubMed

    Ellis, Sam; Reader, Andrew J

    2018-04-26

    Many clinical contexts require the acquisition of multiple positron emission tomography (PET) scans of a single subject, for example, to observe and quantitate changes in functional behaviour in tumors after treatment in oncology. Typically, the datasets from each of these scans are reconstructed individually, without exploiting the similarities between them. We have recently shown that sharing information between longitudinal PET datasets by penalizing voxel-wise differences during image reconstruction can improve reconstructed images by reducing background noise and increasing the contrast-to-noise ratio of high-activity lesions. Here, we present two additional novel longitudinal difference-image priors and evaluate their performance using two-dimensional (2D) simulation studies and a three-dimensional (3D) real dataset case study. We have previously proposed a simultaneous difference-image-based penalized maximum likelihood (PML) longitudinal image reconstruction method that encourages sparse difference images (DS-PML), and in this work we propose two further novel prior terms. The priors are designed to encourage longitudinal images with corresponding differences which have (a) low entropy (DE-PML), and (b) high sparsity in their spatial gradients (DTV-PML). These two new priors and the originally proposed longitudinal prior were applied to 2D-simulated treatment response [18F]fluorodeoxyglucose (FDG) brain tumor datasets and compared to standard maximum likelihood expectation-maximization (MLEM) reconstructions. These 2D simulation studies explored the effects of penalty strengths, tumor behaviour, and interscan coupling on reconstructed images. Finally, a real two-scan longitudinal data series acquired from a head and neck cancer patient was reconstructed with the proposed methods and the results compared to standard reconstruction methods. Using any of the three priors with an appropriate penalty strength produced images with noise levels equivalent to those seen when using standard reconstructions with increased count levels. In tumor regions, each method produces subtly different results in terms of preservation of tumor quantitation and reconstruction root mean-squared error (RMSE). In particular, in the two-scan simulations, the DE-PML method produced tumor means in close agreement with MLEM reconstructions, while the DTV-PML method produced the lowest errors due to noise reduction within the tumor. Across a range of tumor responses and different numbers of scans, similar results were observed, with DTV-PML producing the lowest errors of the three priors and DE-PML producing the lowest bias. Similar improvements were observed in the reconstructions of the real longitudinal datasets, although imperfect alignment of the two PET images resulted in additional changes in the difference image that affected the performance of the proposed methods. Reconstruction of longitudinal datasets by penalizing difference images between pairs of scans from a data series allows for noise reduction in all reconstructed images. An appropriate choice of penalty term and penalty strength allows for this noise reduction to be achieved while maintaining reconstruction performance in regions of change, either in terms of quantitation of mean intensity via DE-PML, or in terms of tumor RMSE via DTV-PML.
Overall, improving the image quality of longitudinal datasets via simultaneous reconstruction has the potential to improve upon currently used methods, allow dose reduction, or reduce scan time while maintaining image quality at current levels. © 2018 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  3. Statistical fusion of continuous labels: identification of cardiac landmarks

    NASA Astrophysics Data System (ADS)

    Xing, Fangxu; Soleimanifard, Sahar; Prince, Jerry L.; Landman, Bennett A.

    2011-03-01

    Image labeling is an essential task for evaluating and analyzing morphometric features in medical imaging data. Labels can be obtained by either human interaction or automated segmentation algorithms. However, both approaches for labeling suffer from inevitable error due to noise and artifact in the acquired data. The Simultaneous Truth And Performance Level Estimation (STAPLE) algorithm was developed to combine multiple rater decisions and simultaneously estimate unobserved true labels as well as each rater's level of performance (i.e., reliability). A generalization of STAPLE for the case of continuous-valued labels has also been proposed. In this paper, we first show that with the proposed Gaussian distribution assumption, this continuous STAPLE formulation yields equivalent likelihoods for the bias parameter, meaning that the bias parameter-one of the key performance indices-is actually indeterminate. We resolve this ambiguity by augmenting the STAPLE expectation maximization formulation to include a priori probabilities on the performance level parameters, which enables simultaneous, meaningful estimation of both the rater bias and variance performance measures. We evaluate and demonstrate the efficacy of this approach in simulations and also through a human rater experiment involving the identification of the intersection points of the right ventricle to the left ventricle in CINE cardiac data.

  4. A Variational Approach to Simultaneous Image Segmentation and Bias Correction.

    PubMed

    Zhang, Kaihua; Liu, Qingshan; Song, Huihui; Li, Xuelong

    2015-08-01

    This paper presents a novel variational approach for simultaneous estimation of bias field and segmentation of images with intensity inhomogeneity. We model intensity of inhomogeneous objects to be Gaussian distributed with different means and variances, and then introduce a sliding window to map the original image intensity onto another domain, where the intensity distribution of each object is still Gaussian but can be better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying the bias field with a piecewise constant signal within the sliding window. A maximum likelihood energy functional is then defined on each local region, which combines the bias field, the membership function of the object region, and the constant approximating the true signal from its corresponding object. The energy functional is then extended to the whole image domain by the Bayesian learning approach. An efficient iterative algorithm is proposed for energy minimization, via which the image segmentation and bias field correction are simultaneously achieved. Furthermore, the smoothness of the obtained optimal bias field is ensured by the normalized convolutions without extra cost. Experiments on real images demonstrated the superiority of the proposed algorithm to other state-of-the-art representative methods.

  5. Statistical Fusion of Continuous Labels: Identification of Cardiac Landmarks.

    PubMed

    Xing, Fangxu; Soleimanifard, Sahar; Prince, Jerry L; Landman, Bennett A

    2011-01-01

    Image labeling is an essential task for evaluating and analyzing morphometric features in medical imaging data. Labels can be obtained by either human interaction or automated segmentation algorithms. However, both approaches for labeling suffer from inevitable error due to noise and artifact in the acquired data. The Simultaneous Truth And Performance Level Estimation (STAPLE) algorithm was developed to combine multiple rater decisions and simultaneously estimate unobserved true labels as well as each rater's level of performance (i.e., reliability). A generalization of STAPLE for the case of continuous-valued labels has also been proposed. In this paper, we first show that with the proposed Gaussian distribution assumption, this continuous STAPLE formulation yields equivalent likelihoods for the bias parameter, meaning that the bias parameter-one of the key performance indices-is actually indeterminate. We resolve this ambiguity by augmenting the STAPLE expectation maximization formulation to include a priori probabilities on the performance level parameters, which enables simultaneous, meaningful estimation of both the rater bias and variance performance measures. We evaluate and demonstrate the efficacy of this approach in simulations and also through a human rater experiment involving the identification of the intersection points of the right ventricle to the left ventricle in CINE cardiac data.

  6. Alcohol and marijuana use patterns associated with unsafe driving among U.S. high school seniors: high use frequency, concurrent use, and simultaneous use.

    PubMed

    Terry-McElrath, Yvonne M; O'Malley, Patrick M; Johnston, Lloyd D

    2014-05-01

    This article examines noncausal associations between high school seniors' alcohol and marijuana use status and rates of self-reported unsafe driving in the past 12 months. Analyses used data from 72,053 students collected through annual surveys of nationally representative cross-sectional samples of U.S. 12th-grade students from 1976 to 2011. Two aspects of past-12-month alcohol and marijuana use were examined: (a) use frequency and (b) status as a nonuser, single substance user, concurrent user, or simultaneous user. Measures of past-12-month unsafe driving included any tickets/warnings or accidents, as well as tickets/warnings or accidents following alcohol or marijuana use. Analyses explored whether an individual's substance use frequency and simultaneous use status had differential associations with their rate of unsafe driving. Higher substance use frequency (primarily alcohol use frequency) was significantly and positively associated with unsafe driving. The rate of engaging in any unsafe driving was also significantly and positively associated with simultaneous use status, with the highest rate associated with simultaneous use, followed by concurrent use, followed by use of alcohol alone. Individuals who reported simultaneous use most or every time they used marijuana had the highest likelihood of reporting unsafe driving following either alcohol or marijuana use. This article expands the knowledge on individual risk factors associated with unsafe driving among teens. Efforts to educate U.S. high school students (especially substance users), parents, and individuals involved in prevention programming and driver's education about the increased risks associated with various forms of drug use status may be useful.

  7. Hurdle models for multilevel zero-inflated data via h-likelihood.

    PubMed

    Molas, Marek; Lesaffre, Emmanuel

    2010-12-30

    Count data often exhibit overdispersion. One type of overdispersion arises when there is an excess of zeros in comparison with the standard Poisson distribution. Zero-inflated Poisson and hurdle models have been proposed to perform a valid likelihood-based analysis to account for the surplus of zeros. Further, data often arise in clustered, longitudinal or multiple-membership settings. The proper analysis needs to reflect the design of a study. Typically random effects are used to account for dependencies in the data. We examine the h-likelihood estimation and inference framework for hurdle models with random effects for complex designs. We extend the h-likelihood procedures to fit hurdle models, thereby extending h-likelihood to truncated distributions. Two applications of the methodology are presented. Copyright © 2010 John Wiley & Sons, Ltd.
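    As a minimal sketch of the hurdle likelihood at the heart of the method (single level only; the h-likelihood treatment of random effects is not reproduced, and the simulated data are crude placeholders), the zero part is a Bernoulli "hurdle" and the positive counts follow a zero-truncated Poisson:

      # Sketch: single-level hurdle Poisson likelihood fitted by direct maximization.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import gammaln, expit

      def hurdle_negloglik(params, y):
          logit_p, log_lam = params
          p = expit(logit_p)                  # probability of crossing the hurdle (y > 0)
          lam = np.exp(log_lam)
          zero = y == 0
          ll_zero = np.log(1 - p)
          # zero-truncated Poisson log-pmf for the positive counts
          ll_pos = (np.log(p) + y[~zero] * np.log(lam) - lam
                    - gammaln(y[~zero] + 1) - np.log(1 - np.exp(-lam)))
          return -(zero.sum() * ll_zero + ll_pos.sum())

      rng = np.random.default_rng(1)
      y = np.where(rng.random(500) < 0.6, rng.poisson(3.0, 500) + 1, 0)   # crude zero-heavy counts
      fit = minimize(hurdle_negloglik, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
      print(expit(fit.x[0]), np.exp(fit.x[1]))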

  8. A multi-valued neutrosophic qualitative flexible approach based on likelihood for multi-criteria decision-making problems

    NASA Astrophysics Data System (ADS)

    Peng, Juan-juan; Wang, Jian-qiang; Yang, Wu-E.

    2017-01-01

    In this paper, multi-criteria decision-making (MCDM) problems based on the qualitative flexible multiple criteria method (QUALIFLEX), in which the criteria values are expressed by multi-valued neutrosophic information, are investigated. First, multi-valued neutrosophic sets (MVNSs), which allow the truth-membership function, indeterminacy-membership function and falsity-membership function to have a set of crisp values between zero and one, are introduced. Then the likelihood of multi-valued neutrosophic number (MVNN) preference relations is defined and the corresponding properties are also discussed. Finally, an extended QUALIFLEX approach based on likelihood is explored to solve MCDM problems where the assessments of alternatives are in the form of MVNNs; furthermore an example is provided to illustrate the application of the proposed method, together with a comparison analysis.

  9. Bayesian experimental design for models with intractable likelihoods.

    PubMed

    Drovandi, Christopher C; Pettitt, Anthony N

    2013-12-01

    In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables. © 2013, The International Biometric Society.
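    The ABC rejection step that the utility function relies on can be sketched as follows (toy Poisson model rather than the epidemic or macroparasite processes; the tolerance and prior are assumptions): prior draws are kept only when their simulated summary statistic lands within a tolerance of the observed one.

      # Sketch of ABC rejection sampling on a toy model (not the paper's models).
      import numpy as np

      def abc_rejection(observed_summary, simulate, prior_sample, n_draws=20000, tol=0.5, seed=0):
          rng = np.random.default_rng(seed)
          accepted = []
          for _ in range(n_draws):
              theta = prior_sample(rng)
              if abs(simulate(theta, rng) - observed_summary) < tol:
                  accepted.append(theta)
          return np.array(accepted)            # approximate posterior sample

      # toy example: infer a Poisson rate from the mean of 20 counts
      observed = 4.1
      post = abc_rejection(observed,
                           simulate=lambda th, rng: rng.poisson(th, 20).mean(),
                           prior_sample=lambda rng: rng.uniform(0, 10))
      print(post.mean(), post.std())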

  10. Prevalence and factors associated with the co-occurrence of health risk behaviors in adolescents

    PubMed Central

    Brito, Anísio Luiz da Silva; Hardman, Carla Meneses; de Barros, Mauro Virgílio Gomes

    2015-01-01

    Objective: To analyze the prevalence and factors associated with the co-occurrence of health risk behaviors in adolescents. Methods: A cross-sectional study was performed with a sample of high school students from state public schools in Pernambuco, Brazil (n=4207, 14-19 years old). Data were obtained using a questionnaire. The co-occurrence of health risk behaviors was established based on the sum of five behavioral risk factors (low physical activity, sedentary behavior, low consumption of fruits/vegetables, alcohol consumption and tobacco use). The independent variables were gender, age group, time of day attending school, school size, maternal education, occupational status, skin color, geographic region and place of residence. Data were analyzed by ordinal logistic regression with a proportional odds model. Results: Approximately 10% of adolescents were not exposed to health risk behaviors, while 58.5% reported being exposed to at least two health risk behaviors simultaneously. There was a higher likelihood of co-occurrence of health risk behaviors among adolescents in the older age group, with intermediate maternal education (9-11 years of schooling), and who reported living in the driest (semi-arid) region of the state of Pernambuco. Adolescents who reported having a job and living in rural areas had a lower likelihood of co-occurrence of risk behaviors. Conclusions: The findings suggest a high prevalence of co-occurrence of health risk behaviors in this group of adolescents, with a higher chance in five subgroups (older age, intermediate maternal education, those who reported not working, those living in urban areas and those living in the driest region of the state). PMID:26298656

  11. Clinical Correlates of Co-Occurring Cannabis and Tobacco Use: A Systematic Review

    PubMed Central

    Peters, Erica N.; Budney, Alan J.; Carroll, Kathleen M.

    2012-01-01

    Aims A growing literature has documented the substantial prevalence of and putative mechanisms underlying co-occurring (i.e., concurrent or simultaneous) cannabis and tobacco use. Greater understanding of the clinical correlates of co-occurring cannabis and tobacco use may suggest how intervention strategies may be refined to improve cessation outcomes and decrease the public health burden associated with cannabis and tobacco use. Methods A systematic review of the literature on clinical diagnoses, psychosocial problems, and outcomes associated with co-occurring cannabis and tobacco use. Twenty-eight studies compared clinical correlates in co-occurring cannabis and tobacco users vs. cannabis or tobacco only users. These included studies of treatment-seekers in clinical trials and non-treatment-seekers in cross-sectional or longitudinal epidemiological or non-population-based surveys. Results Sixteen studies examined clinical diagnoses, four studies examined psychosocial problems, and 11 studies examined cessation outcomes in co-occurring cannabis and tobacco users (several studies examined multiple clinical correlates). Relative to cannabis use only, co-occurring cannabis and tobacco use was associated with a greater likelihood of cannabis use disorders, more psychosocial problems, and poorer cannabis cessation outcomes. Relative to tobacco use only, co-occurring use did not appear to be consistently associated with a greater likelihood of tobacco use disorders, more psychosocial problems, nor poorer tobacco cessation outcomes. Conclusions Cannabis users who also smoke tobacco are more dependent on cannabis, have more psychosocial problems, and have poorer cessation outcomes than those who use cannabis but not tobacco. The converse does not appear to be the case. PMID:22340422

  12. Modified Multiple Model Adaptive Estimation (M3AE) for Simultaneous Parameter and State Estimation

    DTIC Science & Technology

    1998-03-01

    (No abstract is available for this record; its description field contains only a table-of-contents excerpt, covering an introductory chapter with background on the Chi-Square Test, Generalized Likelihood Ratio (GLR) Testing and Multiple Model Adaptive Estimation, followed by M3AE covariance analysis and simulation/performance-analysis test cases.)

  13. Globalization and psychology.

    PubMed

    Chiu, Chi-Yue; Kwan, Letty Yan-Yee

    2016-04-01

    In globalized societies, people often encounter symbols of diverse cultures in the same space at the same time. Simultaneous exposure to diverse cultures draws people's attention to cultural differences and promotes categorical perceptions of culture. Local cultural identification and the presence of cultural threat increase the likelihood of resisting the inflow of foreign cultures (exclusionary reactions). When cultures are seen as intellectual resources, foreign cultural exposure affords intercultural learning and enhances individual creativity (integrative reactions). Psychological studies of globalization attest to the utility of treating cultures as evolving, interacting systems, rather than static, independent entities. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. The comorbidity of increased arterial stiffness and microalbuminuria in a survey of middle-aged adults in China.

    PubMed

    Miao, Rujia; Wu, Liuxin; Ni, Ping; Zeng, Yue; Chen, Zhiheng

    2018-05-04

    Increased arterial stiffness (iAS) and microalbuminuria (MAU), which may occur simultaneously or separately in the general population and share similar risk factors, are markers of macro- and microvascular injuries. Our research investigated the comorbidity of iAS and MAU in the middle-aged population and examined the heterogeneous effects of metabolic risk factors on iAS and MAU. We selected 11,911 individuals aged 45 to 60 years who underwent a health examination at the 3rd Xiangya Hospital between 2010 and 2014. Metabolic syndrome (MetS) was determined according to IDF/NHLBI/AHA-2009 criteria. Multinomial logistic regression was applied to evaluate the influence of MetS, components of MetS and clusters of MetS on the co-occurrence (MAU(+)/iAS(+)) or non-co-occurrence (MAU(+)/iAS(-) and MAU(-)/iAS(+)) of MAU and iAS. The reference group was MAU(-)/iAS(-). MetS had a positive effect on the presence of MAU(+)/iAS(-), MAU(-)/iAS(+), and MAU(+)/iAS(+), listed here in ascending order of odds ratios (ORs = 2.11, 2.41 and 4.61, respectively; P < 0.05). Compared with MAU(+)/iAS(-), elevated blood pressure (BP) (OR = 1.62 vs. 4.83, P < 0.05) and elevated triglycerides (TG) (OR = 1.20 vs. 1.37, P < 0.05) were more strongly associated with MAU(-)/iAS(+), whereas fasting blood glucose (FBG) was less strongly associated (OR = 1.37 vs. 1.31, P < 0.05). Decreased high-density lipoprotein cholesterol (HDL-c) (OR = 1.84, P < 0.01) and elevated waist circumference (WC) (OR = 1.28, P < 0.01) were most strongly associated with MAU(+)/iAS(-). Compared with individuals without MetS, individuals with the cluster of elevated BP, FBG and TG plus decreased HDL-c had the greatest likelihood of presenting MAU(-)/iAS(+) (OR = 5.98, P < 0.01) and MAU(+)/iAS(+) (OR = 13.17, P < 0.01); these likelihoods were even greater than those for the cluster with simultaneous alterations in all five MetS components (OR = 3.89 and 10.77, respectively, P < 0.01), which showed the strongest association with MAU(+)/iAS(+) (OR = 5.22, P < 0.01). Based on the heterogeneous influences of MetS-related risk factors on MAU and iAS, these influences could be selectively targeted to identify different types of vascular injuries.

  15. Meta-analysis: accuracy of rapid tests for malaria in travelers returning from endemic areas.

    PubMed

    Marx, Arthur; Pewsner, Daniel; Egger, Matthias; Nüesch, Reto; Bucher, Heiner C; Genton, Blaise; Hatz, Christoph; Jüni, Peter

    2005-05-17

    Microscopic diagnosis of malaria is unreliable outside specialized centers. Rapid tests have become available in recent years, but their accuracy has not been assessed systematically. To determine the accuracy of rapid diagnostic tests for ruling out malaria in nonimmune travelers returning from malaria-endemic areas. The authors searched MEDLINE, EMBASE, CAB Health, and CINAHL (1988 to September 2004); hand-searched conference proceedings; checked reference lists; and contacted experts and manufacturers. Diagnostic accuracy studies in nonimmune individuals with suspected malaria were included if they compared rapid tests with expert microscopic examination or polymerase chain reaction tests. Data on study and patient characteristics and results were extracted in duplicate. The main outcome was the likelihood ratio for a negative test result (negative likelihood ratio) for Plasmodium falciparum malaria. Likelihood ratios were combined by using random-effects meta-analysis, stratified by the antigen targeted (histidine-rich protein-2 [HRP-2] or parasite lactate dehydrogenase [LDH]) and by test generation. Nomograms of post-test probabilities were constructed. The authors included 21 studies and 5747 individuals. For P. falciparum, HRP-2-based tests were more accurate than parasite LDH-based tests: Negative likelihood ratios were 0.08 and 0.13, respectively (P = 0.019 for difference). Three-band HRP-2 tests had similar negative likelihood ratios but higher positive likelihood ratios compared with 2-band tests (34.7 vs. 98.5; P = 0.003). For P. vivax, negative likelihood ratios tended to be closer to 1.0 for HRP-2-based tests than for parasite LDH-based tests (0.24 vs. 0.13; P = 0.22), but analyses were based on a few heterogeneous studies. Negative likelihood ratios for the diagnosis of P. malariae or P. ovale were close to 1.0 for both types of tests. In febrile travelers returning from sub-Saharan Africa, the typical probability of P. falciparum malaria is estimated at 1.1% (95% CI, 0.6% to 1.9%) after a negative 3-band HRP-2 test result and 97% (CI, 92% to 99%) after a positive test result. Few studies evaluated 3-band HRP-2 tests. The evidence is also limited for species other than P. falciparum because of the few available studies and their more heterogeneous results. Further studies are needed to determine whether the use of rapid diagnostic tests improves outcomes in returning travelers with suspected malaria. Rapid malaria tests may be a useful diagnostic adjunct to microscopy in centers without major expertise in tropical medicine. Initial decisions on treatment initiation and choice of antimalarial drugs can be based on travel history and post-test probabilities after rapid testing. Expert microscopy is still required for species identification and confirmation.
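    The post-test probabilities reported above follow from the standard odds form of Bayes' theorem; the short sketch below reproduces the calculation for an assumed pre-test probability (the 10% figure is illustrative, not taken from the study):

      # Sketch: post-test probability from a pre-test probability and a likelihood ratio.
      def post_test_probability(pretest_prob, likelihood_ratio):
          pretest_odds = pretest_prob / (1 - pretest_prob)
          posttest_odds = pretest_odds * likelihood_ratio   # Bayes' theorem in odds form
          return posttest_odds / (1 + posttest_odds)

      pretest = 0.10   # assumed pre-test probability of P. falciparum in a febrile returning traveler
      print(post_test_probability(pretest, 0.08))   # after a negative HRP-2 result (LR- = 0.08)
      print(post_test_probability(pretest, 98.5))   # after a positive result with LR+ = 98.5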

  16. Modeling forest bird species' likelihood of occurrence in Utah with Forest Inventory and Analysis and Landfire map products and ecologically based pseudo-absence points

    Treesearch

    Phoebe L. Zarnetske; Thomas C., Jr. Edwards; Gretchen G. Moisen

    2007-01-01

    Estimating species likelihood of occurrence across extensive landscapes is a powerful management tool. Unfortunately, available occurrence data for landscape-scale modeling is often lacking and usually only in the form of observed presences. Ecologically based pseudo-absence points were generated from within habitat envelopes to accompany presence-only data in habitat...

  17. Remote sensing of multiple vital signs using a CMOS camera-equipped infrared thermography system and its clinical application in rapidly screening patients with suspected infectious diseases.

    PubMed

    Sun, Guanghao; Nakayama, Yosuke; Dagdanpurev, Sumiyakhand; Abe, Shigeto; Nishimura, Hidekazu; Kirimoto, Tetsuo; Matsui, Takemi

    2017-02-01

    Infrared thermography (IRT) is used to screen febrile passengers at international airports, but it suffers from low sensitivity. This study explored the application of a combined visible and thermal image processing approach that uses a CMOS camera equipped with IRT to remotely sense multiple vital signs and screen patients with suspected infectious diseases. An IRT system that produced visible and thermal images was used for image acquisition. The subjects' respiration rates were measured by monitoring temperature changes around the nasal areas on thermal images; facial skin temperatures were measured simultaneously. Facial blood circulation causes tiny color changes in visible facial images that enable the determination of the heart rate. A logistic regression discriminant function predicted the likelihood of infection within 10s, based on the measured vital signs. Sixteen patients with an influenza-like illness and 22 control subjects participated in a clinical test at a clinic in Fukushima, Japan. The vital-sign-based IRT screening system had a sensitivity of 87.5% and a negative predictive value of 91.7%; these values are higher than those of conventional fever-based screening approaches. Multiple vital-sign-based screening efficiently detected patients with suspected infectious diseases. It offers a promising alternative to conventional fever-based screening. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.

  18. Likelihood testing of seismicity-based rate forecasts of induced earthquakes in Oklahoma and Kansas

    USGS Publications Warehouse

    Moschetti, Morgan P.; Hoover, Susan M.; Mueller, Charles

    2016-01-01

    Likelihood testing of induced earthquakes in Oklahoma and Kansas has identified the parameters that optimize the forecasting ability of smoothed seismicity models and quantified the recent temporal stability of the spatial seismicity patterns. Use of the most recent 1-year period of earthquake data and use of 10–20-km smoothing distances produced the greatest likelihood. The likelihood that the locations of January–June 2015 earthquakes were consistent with optimized forecasts decayed with increasing elapsed time between the catalogs used for model development and testing. Likelihood tests with two additional sets of earthquakes from 2014 exhibit a strong sensitivity of the rate of decay to the smoothing distance. Marked reductions in likelihood are caused by the nonstationarity of the induced earthquake locations. Our results indicate a multiple-fold benefit from smoothed seismicity models in developing short-term earthquake rate forecasts for induced earthquakes in Oklahoma and Kansas, relative to the use of seismic source zones.
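    A common way to score such forecasts is the joint Poisson log-likelihood of the observed counts per cell; the sketch below uses a toy grid rather than the Oklahoma-Kansas catalogs, and the per-cell rates are invented:

      # Sketch: Poisson log-likelihood of observed earthquake counts under a gridded rate forecast.
      import numpy as np
      from scipy.special import gammaln

      def poisson_forecast_loglik(forecast_rates, observed_counts):
          lam = np.asarray(forecast_rates, dtype=float)   # expected events per cell
          n = np.asarray(observed_counts, dtype=float)    # events actually observed
          return np.sum(n * np.log(lam) - lam - gammaln(n + 1))

      rates = np.array([0.2, 1.5, 0.1, 3.0, 0.4])    # toy forecast for the test period
      counts = np.array([0, 2, 0, 4, 1])
      print(poisson_forecast_loglik(rates, counts))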

  19. The role of self-regulatory efficacy, moral disengagement and guilt on doping likelihood: A social cognitive theory perspective.

    PubMed

    Ring, Christopher; Kavussanu, Maria

    2018-03-01

    Given the concern over doping in sport, researchers have begun to explore the role played by self-regulatory processes in the decision whether to use banned performance-enhancing substances. Grounded on Bandura's (1991) theory of moral thought and action, this study examined the role of self-regulatory efficacy, moral disengagement and anticipated guilt on the likelihood to use a banned substance among college athletes. Doping self-regulatory efficacy was associated with doping likelihood both directly (b = -.16, P < .001) and indirectly (b = -.29, P < .001) through doping moral disengagement. Moral disengagement also contributed directly to higher doping likelihood and lower anticipated guilt about doping, which was associated with higher doping likelihood. Overall, the present findings provide evidence to support a model of doping based on Bandura's social cognitive theory of moral thought and action, in which self-regulatory efficacy influences the likelihood to use banned performance-enhancing substances both directly and indirectly via moral disengagement.

  20. Optimal HRF and smoothing parameters for fMRI time series within an autoregressive modeling framework.

    PubMed

    Galka, Andreas; Siniatchkin, Michael; Stephani, Ulrich; Groening, Kristina; Wolff, Stephan; Bosch-Bayard, Jorge; Ozaki, Tohru

    2010-12-01

    The analysis of time series obtained by functional magnetic resonance imaging (fMRI) may be approached by fitting predictive parametric models, such as nearest-neighbor autoregressive models with exogenous input (NNARX). As a part of the modeling procedure, it is possible to apply instantaneous linear transformations to the data. Spatial smoothing, a common preprocessing step, may be interpreted as such a transformation. The autoregressive parameters may be constrained, such that they provide a response behavior that corresponds to the canonical haemodynamic response function (HRF). We present an algorithm for estimating the parameters of the linear transformations and of the HRF within a rigorous maximum-likelihood framework. Using this approach, an optimal amount of spatial smoothing and an optimal HRF can be estimated simultaneously for a given fMRI data set. An example from a motor-task experiment is discussed. It is found that, for this data set, weak, but non-zero, spatial smoothing is optimal. Furthermore, it is demonstrated that activated regions can be estimated within the maximum-likelihood framework.

  1. Validation of the alternating conditional estimation algorithm for estimation of flexible extensions of Cox's proportional hazards model with nonlinear constraints on the parameters.

    PubMed

    Wynant, Willy; Abrahamowicz, Michal

    2016-11-01

    Standard optimization algorithms for maximizing likelihood may not be applicable to the estimation of those flexible multivariable models that are nonlinear in their parameters. For applications where the model's structure permits separating estimation of mutually exclusive subsets of parameters into distinct steps, we propose the alternating conditional estimation (ACE) algorithm. We validate the algorithm, in simulations, for estimation of two flexible extensions of Cox's proportional hazards model where the standard maximum partial likelihood estimation does not apply, with simultaneous modeling of (1) nonlinear and time-dependent effects of continuous covariates on the hazard, and (2) nonlinear interaction and main effects of the same variable. We also apply the algorithm in real-life analyses to estimate nonlinear and time-dependent effects of prognostic factors for mortality in colon cancer. Analyses of both simulated and real-life data illustrate good statistical properties of the ACE algorithm and its ability to yield new potentially useful insights about the data structure. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Box-Cox transformation for QTL mapping.

    PubMed

    Yang, Runqing; Yi, Nengjun; Xu, Shizhong

    2006-01-01

    The maximum likelihood method of QTL mapping assumes that the phenotypic values of a quantitative trait follow a normal distribution. If the assumption is violated, some forms of transformation should be taken to make the assumption approximately true. The Box-Cox transformation is a general transformation method which can be applied to many different types of data. The flexibility of the Box-Cox transformation is due to a variable, called transformation factor, appearing in the Box-Cox formula. We developed a maximum likelihood method that treats the transformation factor as an unknown parameter, which is estimated from the data simultaneously along with the QTL parameters. The method makes an objective choice of data transformation and thus can be applied to QTL analysis for many different types of data. Simulation studies show that (1) Box-Cox transformation can substantially increase the power of QTL detection; (2) Box-Cox transformation can replace some specialized transformation methods that are commonly used in QTL mapping; and (3) applying the Box-Cox transformation to data already normally distributed does not harm the result.
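    The profile-likelihood treatment of the transformation factor can be illustrated with SciPy's built-in Box-Cox log-likelihood (the QTL-mapping model itself is not reproduced here; the skewed data are simulated):

      # Sketch: profile the Box-Cox log-likelihood over the transformation factor.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      y = rng.lognormal(mean=1.0, sigma=0.6, size=300)       # skewed, phenotype-like data

      lambdas = np.linspace(-2, 2, 201)
      loglik = [stats.boxcox_llf(lmb, y) for lmb in lambdas]  # profile log-likelihood
      lambda_hat = lambdas[int(np.argmax(loglik))]

      y_transformed, lambda_mle = stats.boxcox(y)             # SciPy's own MLE of the factor
      print(lambda_hat, lambda_mle)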

  3. Maximum likelihood sequence estimation for optical complex direct modulation.

    PubMed

    Che, Di; Yuan, Feng; Shieh, William

    2017-04-17

    Semiconductor lasers are versatile optical transmitters in nature. Through the direct modulation (DM), the intensity modulation is realized by the linear mapping between the injection current and the light power, while various angle modulations are enabled by the frequency chirp. Limited by the direct detection, DM lasers used to be exploited only as 1-D (intensity or angle) transmitters by suppressing or simply ignoring the other modulation. Nevertheless, through the digital coherent detection, simultaneous intensity and angle modulations (namely, 2-D complex DM, CDM) can be realized by a single laser diode. The crucial technique of CDM is the joint demodulation of intensity and differential phase with the maximum likelihood sequence estimation (MLSE), supported by a closed-form discrete signal approximation of frequency chirp to characterize the MLSE transition probability. This paper proposes a statistical method for the transition probability to significantly enhance the accuracy of the chirp model. Using the statistical estimation, we demonstrate the first single-channel 100-Gb/s PAM-4 transmission over 1600-km fiber with only 10G-class DM lasers.

  4. Compatibility of pedigree-based and marker-based relationship matrices for single-step genetic evaluation.

    PubMed

    Christensen, Ole F

    2012-12-03

    Single-step methods provide a coherent and conceptually simple approach to incorporate genomic information into genetic evaluations. An issue with single-step methods is compatibility between the marker-based relationship matrix for genotyped animals and the pedigree-based relationship matrix. Therefore, it is necessary to adjust the marker-based relationship matrix to the pedigree-based relationship matrix. Moreover, with data from routine evaluations, this adjustment should in principle be based on both observed marker genotypes and observed phenotypes, but until now this has been overlooked. In this paper, I propose a new method to address this issue by 1) adjusting the pedigree-based relationship matrix to be compatible with the marker-based relationship matrix instead of the reverse and 2) extending the single-step genetic evaluation using a joint likelihood of observed phenotypes and observed marker genotypes. The performance of this method is then evaluated using two simulated datasets. The method derived here is a single-step method in which the marker-based relationship matrix is constructed assuming all allele frequencies equal to 0.5 and the pedigree-based relationship matrix is constructed using the unusual assumption that animals in the base population are related and inbred with a relationship coefficient γ and an inbreeding coefficient γ / 2. Taken together, this γ parameter and a parameter that scales the marker-based relationship matrix can handle the issue of compatibility between marker-based and pedigree-based relationship matrices. The full log-likelihood function used for parameter inference contains two terms. The first term is the REML-log-likelihood for the phenotypes conditional on the observed marker genotypes, whereas the second term is the log-likelihood for the observed marker genotypes. Analyses of the two simulated datasets with this new method showed that 1) the parameters involved in adjusting marker-based and pedigree-based relationship matrices can depend on both observed phenotypes and observed marker genotypes and 2) a strong association between these two parameters exists. Finally, this method performed at least as well as a method based on adjusting the marker-based relationship matrix. Using the full log-likelihood and adjusting the pedigree-based relationship matrix to be compatible with the marker-based relationship matrix provides a new and interesting approach to handle the issue of compatibility between the two matrices in single-step genetic evaluation.
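    Under the stated assumption of allele frequencies equal to 0.5, the marker-based relationship matrix takes a particularly simple form; the sketch below uses a standard VanRaden-type construction with 0/1/2 genotype coding, which is an assumption for illustration rather than the paper's exact implementation:

      # Sketch: marker-based relationship matrix with all allele frequencies set to 0.5.
      import numpy as np

      def grm_half_freq(genotypes):
          # genotypes: (n_animals, n_markers) array with entries 0, 1, 2
          Z = genotypes - 1.0                   # center by 2p = 1 when p = 0.5
          m = genotypes.shape[1]
          return Z @ Z.T / (0.5 * m)            # 2 * sum_j p_j (1 - p_j) equals m / 2

      geno = np.random.default_rng(2).integers(0, 3, size=(5, 1000))
      G = grm_half_freq(geno)
      print(np.round(G, 2))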

  5. Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking

    PubMed Central

    Qu, Shiru

    2016-01-01

    Object tracking based on sparse representation has given promising tracking results in recent years. However, the trackers under the framework of sparse representation always overemphasize the sparse representation and ignore the correlation of visual information. In addition, the sparse coding methods only encode the local region independently and ignore the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. Firstly, multiple complementary features are used to describe the object appearance; the appearance model of the tracked target is modeled by instantaneous and stable appearance features simultaneously. A two-stage sparse-coded method which takes the spatial neighborhood information of the image patch and the computation burden into consideration is used to compute the reconstructed object appearance. Then, the reliability of each tracker is measured by the tracking likelihood function of transient and reconstructed appearance models. Finally, the most reliable tracker is obtained by a well established particle filter framework; the training set and the template library are incrementally updated based on the current tracking results. Experiment results on different challenging video sequences show that the proposed algorithm performs well with superior tracking accuracy and robustness. PMID:27630710

  6. Diagnosis of obstructive sleep apnea using pulse oximeter derived photoplethysmographic signals.

    PubMed

    Romem, Ayal; Romem, Anat; Koldobskiy, Dafna; Scharf, Steven M

    2014-03-15

    Increasing awareness of the high prevalence of obstructive sleep apnea (OSA) and its impact on health in conjunction with high cost, inconvenience, and short supply of in-lab polysomnography (PSG) has led to the development of more convenient, affordable, and accessible diagnostic devices. We evaluated the reliability and accuracy of a single-channel (finger pulse-oximetry) photoplethysmography (PPG)-based device for detection of OSA (Morpheus Ox). Among a cohort of 73 patients referred for in-laboratory evaluation of OSA, 65 were simultaneously monitored with the PPG based device while undergoing PSG. Among these, 19 had significant cardiopulmonary comorbidities. Using the PSG as the "gold standard," the sensitivity, specificity, negative predictive value (NPV), positive predictive value (PPV), as well as the positive likelihood ratio (+LR) for an apnea hypopnea index (AHI)PSG > 5/h and AHIPSG > 15/h were calculated for the PPG. Valid results were available for 65 subjects. Mean age: 52.1 ± 14.2, Male: 52%, and BMI: 36.3 ± 9.7 kg/m(2). Positive correlation was found between PPG-derived and PSG-derived AHI (r = 0.81, p < 0.001). For AHIPSG > 5/h, sensitivity was 80%, specificity 86%, PPV 93%, NPV 68%, and +LR was 5.9. For AHIPSG > 15/h, sensitivity was 70%, specificity 91%, PPV 80%, NPV 85%, and +LR was 7.83. The corresponding areas under the receiver operator curves were 0.91 and 0.9. PPG-derived data compare well with simultaneous in-lab PSG in the diagnosis of suspected OSA among patients with and without cardiopulmonary comorbidities. Romem A; Romem A; Koldobskiy D; Scharf SM. Diagnosis of obstructive sleep apnea using pulse oximeter derived photoplethysmographic signals.
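    The reported accuracy measures all derive from a 2x2 table of device results against PSG; the sketch below shows the arithmetic with hypothetical counts (not the study's data):

      # Sketch: diagnostic accuracy measures from a 2x2 table (hypothetical counts).
      def diagnostic_metrics(tp, fp, fn, tn):
          sens = tp / (tp + fn)
          spec = tn / (tn + fp)
          return {
              "sensitivity": sens,
              "specificity": spec,
              "PPV": tp / (tp + fp),
              "NPV": tn / (tn + fn),
              "+LR": sens / (1 - spec),
          }

      print(diagnostic_metrics(tp=28, fp=3, fn=7, tn=27))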

  7. Hippocampal effective synchronization values are not pre-seizure indicator without considering the state of the onset channels

    PubMed Central

    Shayegh, Farzaneh; Sadri, Saeed; Amirfattahi, Rassoul; Ansari-Asl, Karim; Bellanger, Jean-Jacques; Senhadji, Lotfi

    2014-01-01

    In this paper, a model-based approach is presented to quantify the effective synchrony between hippocampal areas from depth-EEG signals. This approach is based on the parameter identification procedure of a realistic Multi-Source/Multi-Channel (MSMC) hippocampal model that simulates the function of different areas of the hippocampus. In the model it is supposed that the observed signals recorded using intracranial electrodes are generated by some hidden neuronal sources, according to some parameters. An algorithm is proposed to extract the intrinsic (solely relative to one hippocampal area) and extrinsic (coupling coefficients between two areas) model parameters simultaneously by a Maximum Likelihood (ML) method. Coupling coefficients are considered as the measure of effective synchronization. This work can be considered as an application of Dynamic Causal Modeling (DCM) that enables us to understand effective synchronization changes during the transition from the inter-ictal to the pre-ictal state. The algorithm is first validated using synthetic datasets. Then, by extracting the coupling coefficients of real depth-EEG signals with the proposed approach, it is observed that the coupling values show no significant difference between ictal, pre-ictal and inter-ictal states, i.e., both increases and decreases of coupling coefficients are observed in all states. However, taking the values of the intrinsic parameters into account, the pre-seizure state can be distinguished from the inter-ictal state. It is claimed that seizures start to appear when there are seizure-related physiological parameters on the onset channel and its coupling coefficients toward other channels increase simultaneously. As a result of considering both intrinsic and extrinsic parameters as the feature vector, inter-ictal, pre-ictal and ictal activities are discriminated from each other with an accuracy of 91.33%. PMID:25061815

  8. Using Bayesian Inference Framework towards Identifying Gas Species and Concentration from High Temperature Resistive Sensor Array Data

    DOE PAGES

    Liu, Yixin; Zhou, Kai; Lei, Yu

    2015-01-01

    High-temperature gas sensors are in high demand for combustion process optimization and toxic emissions control, but they usually suffer from poor selectivity. In order to solve this selectivity issue and identify unknown reducing gas species (CO, CH4, and C3H8) and concentrations, a high-temperature resistive sensor array data set was built in this study based on 5 reported sensors. Each sensor showed specific responses towards different types of reducing gas at given concentrations, from which calibration curves were fitted to provide a benchmark sensor array response database. A Bayesian inference framework was then utilized to process the sensor array data and build a sample selection program to simultaneously identify gas species and concentration, by formulating a proper likelihood between the measured sensor array response pattern of an unknown gas and each sampled sensor array response pattern in the benchmark database. This algorithm shows good robustness and can accurately identify gas species and predict gas concentration with a small error of less than 10% based on a limited amount of experimental data. These features indicate that the Bayesian probabilistic approach is a simple and efficient way to process sensor array data, and can significantly reduce the required computational overhead and training data.
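    A hedged sketch of the inference step (the candidate response vectors, noise level, and flat prior below are assumptions, not the paper's calibration data): each candidate gas/concentration pair is scored by a Gaussian likelihood around its calibrated response, and the scores are normalized into posterior probabilities.

      # Sketch: Bayesian identification of gas species/concentration from a 5-sensor response.
      import numpy as np

      def identify_gas(measured, candidates, sigma=0.05):
          # candidates: dict mapping (gas, concentration) -> expected response vector
          keys = list(candidates)
          loglik = np.array([
              -0.5 * np.sum(((measured - candidates[k]) / sigma) ** 2) for k in keys
          ])
          post = np.exp(loglik - loglik.max())   # flat prior; normalize the likelihoods
          post /= post.sum()
          return dict(zip(keys, post))

      candidates = {
          ("CO", 100): np.array([0.12, 0.40, 0.05, 0.22, 0.31]),
          ("CO", 200): np.array([0.20, 0.55, 0.09, 0.30, 0.45]),
          ("CH4", 100): np.array([0.05, 0.10, 0.35, 0.08, 0.12]),
      }
      measurement = np.array([0.19, 0.52, 0.10, 0.31, 0.44])
      print(identify_gas(measurement, candidates))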

  9. Patch-based image reconstruction for PET using prior-image derived dictionaries

    NASA Astrophysics Data System (ADS)

    Tahaei, Marzieh S.; Reader, Andrew J.

    2016-09-01

    In PET image reconstruction, regularization is often needed to reduce the noise in the resulting images. Patch-based image processing techniques have recently been successfully used for regularization in medical image reconstruction through a penalized likelihood framework. Re-parameterization within reconstruction is another powerful regularization technique in which the object in the scanner is re-parameterized using coefficients for spatially-extensive basis vectors. In this work, a method for extracting patch-based basis vectors from the subject’s MR image is proposed. The coefficients for these basis vectors are then estimated using the conventional MLEM algorithm. Furthermore, using the alternating direction method of multipliers, an algorithm for optimizing the Poisson log-likelihood while imposing sparsity on the parameters is also proposed. This novel method is then utilized to find sparse coefficients for the patch-based basis vectors extracted from the MR image. The results indicate the superiority of the proposed methods to patch-based regularization using the penalized likelihood framework.

  10. Fuzzy multinomial logistic regression analysis: A multi-objective programming approach

    NASA Astrophysics Data System (ADS)

    Abdalla, Hesham A.; El-Sayed, Amany A.; Hamed, Ramadan

    2017-05-01

    Parameter estimation for multinomial logistic regression is usually based on maximizing the likelihood function. For large, well-balanced datasets, Maximum Likelihood (ML) estimation is a satisfactory approach. Unfortunately, ML can fail completely, or at least produce poor results in terms of estimated probabilities and confidence intervals of parameters, especially for small datasets. In this study, a new approach based on fuzzy concepts is proposed to estimate the parameters of multinomial logistic regression. The study assumes that the parameters of multinomial logistic regression are fuzzy. Based on the extension principle stated by Zadeh and Bárdossy's proposition, a multi-objective programming approach is suggested to estimate these fuzzy parameters. A simulation study is used to evaluate the performance of the new approach versus the Maximum Likelihood (ML) approach. Results show that the new proposed model outperforms ML for small datasets.

  11. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  12. An exclusive human milk-based diet in extremely premature infants reduces the probability of remaining on total parenteral nutrition: a reanalysis of the data.

    PubMed

    Ghandehari, Heli; Lee, Martin L; Rechtman, David J

    2012-04-25

    We have previously shown that an exclusively human milk-based diet is beneficial for extremely premature infants who are at risk for necrotizing enterocolitis (NEC). However, no significant difference in the other primary study endpoint, the length of time on total parenteral nutrition (TPN), was found. The current analysis re-evaluates these data from a different statistical perspective considering the probability or likelihood of needing TPN on any given day rather than the number of days on TPN. This study consisted of 207 premature infants randomized into three groups: one group receiving a control diet of human milk, formula and bovine-based fortifier ("control diet"), and the other two groups receiving only human milk and human milk-based fortifier starting at different times in the enteral feeding process (at feeding volumes of 40 or 100 mL/kg/day; "HM40" and "HM100", respectively). The counting process Cox proportional hazards survival model was used to determine the likelihood of needing TPN in each group. The two groups on the completely human-based diet had an 11-14% reduction in the likelihood of needing nutrition via TPN when compared to infants on the control diet (p = 0.0001 and p = 0.001 for the HM40 and HM100 groups, respectively). This was even more pronounced if the initial period of TPN was excluded (p < 0.0001 for both the HM40 and HM100 groups). A completely human milk-based diet significantly reduces the likelihood of TPN use for extremely premature infants when compared to a diet including cow-based products. This likelihood may be reduced even further when the human milk fortifier is initiated earlier in the feeding process. This study was registered at http://www.clinicaltrials.gov reg. # NCT00506584.

  13. Urinary bladder segmentation in CT urography using deep-learning convolutional neural network and level sets

    PubMed Central

    Cha, Kenny H.; Hadjiiski, Lubomir; Samala, Ravi K.; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.

    2016-01-01

    Purpose: The authors are developing a computerized system for bladder segmentation in CT urography (CTU) as a critical component for computer-aided detection of bladder cancer. Methods: A deep-learning convolutional neural network (DL-CNN) was trained to distinguish between the inside and the outside of the bladder using 160 000 regions of interest (ROI) from CTU images. The trained DL-CNN was used to estimate the likelihood of an ROI being inside the bladder for ROIs centered at each voxel in a CTU case, resulting in a likelihood map. Thresholding and hole-filling were applied to the map to generate the initial contour for the bladder, which was then refined by 3D and 2D level sets. The segmentation performance was evaluated using 173 cases: 81 cases in the training set (42 lesions, 21 wall thickenings, and 18 normal bladders) and 92 cases in the test set (43 lesions, 36 wall thickenings, and 13 normal bladders). The computerized segmentation accuracy using the DL likelihood map was compared to that using a likelihood map generated by Haar features and a random forest classifier, and that using our previous conjoint level set analysis and segmentation system (CLASS) without using a likelihood map. All methods were evaluated relative to the 3D hand-segmented reference contours. Results: With DL-CNN-based likelihood map and level sets, the average volume intersection ratio, average percent volume error, average absolute volume error, average minimum distance, and the Jaccard index for the test set were 81.9% ± 12.1%, 10.2% ± 16.2%, 14.0% ± 13.0%, 3.6 ± 2.0 mm, and 76.2% ± 11.8%, respectively. With the Haar-feature-based likelihood map and level sets, the corresponding values were 74.3% ± 12.7%, 13.0% ± 22.3%, 20.5% ± 15.7%, 5.7 ± 2.6 mm, and 66.7% ± 12.6%, respectively. With our previous CLASS with local contour refinement (LCR) method, the corresponding values were 78.0% ± 14.7%, 16.5% ± 16.8%, 18.2% ± 15.0%, 3.8 ± 2.3 mm, and 73.9% ± 13.5%, respectively. Conclusions: The authors demonstrated that the DL-CNN can overcome the strong boundary between two regions that have large difference in gray levels and provides a seamless mask to guide level set segmentation, which has been a problem for many gradient-based segmentation methods. Compared to our previous CLASS with LCR method, which required two user inputs to initialize the segmentation, DL-CNN with level sets achieved better segmentation performance while using a single user input. Compared to the Haar-feature-based likelihood map, the DL-CNN-based likelihood map could guide the level sets to achieve better segmentation. The results demonstrate the feasibility of our new approach of using DL-CNN in combination with level sets for segmentation of the bladder. PMID:27036584
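
    The initialization stage described above (thresholding the likelihood map, hole-filling, and taking a single region before level-set refinement) can be sketched with standard morphology tools; the threshold value and the largest-connected-component heuristic below are assumptions for illustration only, not the authors' exact procedure.

```python
import numpy as np
from scipy import ndimage

def initial_bladder_mask(likelihood_map, threshold=0.5):
    """Turn a voxel-wise 'inside bladder' likelihood map into an initial
    binary contour: threshold, fill holes, and keep the largest connected
    component. This mimics only the initialization step; level-set
    refinement would follow."""
    mask = likelihood_map > threshold
    mask = ndimage.binary_fill_holes(mask)
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)
```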

  14. On the Performance of Maximum Likelihood versus Means and Variance Adjusted Weighted Least Squares Estimation in CFA

    ERIC Educational Resources Information Center

    Beauducel, Andre; Herzberg, Philipp Yorck

    2006-01-01

    This simulation study compared maximum likelihood (ML) estimation with weighted least squares means and variance adjusted (WLSMV) estimation. The study was based on confirmatory factor analyses with 1, 2, 4, and 8 factors, based on 250, 500, 750, and 1,000 cases, and on 5, 10, 20, and 40 variables with 2, 3, 4, 5, and 6 categories. There was no…

  15. The Equivalence of Information-Theoretic and Likelihood-Based Methods for Neural Dimensionality Reduction

    PubMed Central

    Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.

    2015-01-01

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
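
    A minimal version of the LNP log-likelihood that the paper identifies with single-spike information is sketched below; the stimulus projection, the exponential nonlinearity applied to the summed features, and the bin width are illustrative simplifications rather than the authors' exact parameterization.

```python
import numpy as np

def lnp_log_likelihood(W, stimuli, spikes, nonlinearity=np.exp, dt=0.01):
    """Poisson log-likelihood of a linear-nonlinear-Poisson (LNP) model.
    W (d, k) projects each stimulus onto k feature dimensions; the firing
    rate is a nonlinearity of the projected stimulus. Maximizing this over
    W is the model-based counterpart of maximizing single-spike information."""
    proj = stimuli @ W                       # (n_bins, k) low-dimensional features
    rate = nonlinearity(proj.sum(axis=1))    # toy nonlinearity on summed features
    rate = np.clip(rate, 1e-12, None)
    # Poisson log-likelihood, up to a constant in the spike counts
    return np.sum(spikes * np.log(rate * dt) - rate * dt)
```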

  16. Maximum likelihood estimation for Cox's regression model under nested case-control sampling.

    PubMed

    Scheike, Thomas H; Juul, Anders

    2004-04-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used to obtain information additional to the relative risk estimates of covariates.

  17. Likelihood ratios for glaucoma diagnosis using spectral-domain optical coherence tomography.

    PubMed

    Lisboa, Renato; Mansouri, Kaweh; Zangwill, Linda M; Weinreb, Robert N; Medeiros, Felipe A

    2013-11-01

    To present a methodology for calculating likelihood ratios for glaucoma diagnosis for continuous retinal nerve fiber layer (RNFL) thickness measurements from spectral-domain optical coherence tomography (spectral-domain OCT). Observational cohort study. A total of 262 eyes of 187 patients with glaucoma and 190 eyes of 100 control subjects were included in the study. Subjects were recruited from the Diagnostic Innovations Glaucoma Study. Eyes with preperimetric and perimetric glaucomatous damage were included in the glaucoma group. The control group was composed of healthy eyes with normal visual fields from subjects recruited from the general population. All eyes underwent RNFL imaging with Spectralis spectral-domain OCT. Likelihood ratios for glaucoma diagnosis were estimated for specific global RNFL thickness measurements using a methodology based on estimating the tangents to the receiver operating characteristic (ROC) curve. Likelihood ratios could be determined for continuous values of average RNFL thickness. Average RNFL thickness values lower than 86 μm were associated with positive likelihood ratios (ie, likelihood ratios greater than 1), whereas RNFL thickness values higher than 86 μm were associated with negative likelihood ratios (ie, likelihood ratios smaller than 1). A modified Fagan nomogram was provided to assist calculation of posttest probability of disease from the calculated likelihood ratios and pretest probability of disease. The methodology allowed calculation of likelihood ratios for specific RNFL thickness values. By avoiding arbitrary categorization of test results, it potentially allows for an improved integration of test results into diagnostic clinical decision making. Copyright © 2013. Published by Elsevier Inc.
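
    The arithmetic for turning a likelihood ratio and a pretest probability into a posttest probability (the calculation the modified Fagan nomogram supports) is shown below. The first helper gives the familiar dichotomous positive likelihood ratio; the paper's ROC-tangent construction for continuous RNFL values is not reproduced here.

```python
def likelihood_ratio_at_threshold(sens, spec):
    """Positive likelihood ratio for a dichotomized result at one cutoff.
    The paper instead uses the tangent (local slope) of the ROC curve to get
    an LR for each continuous RNFL value; this is the simpler dichotomous analogue."""
    return sens / (1.0 - spec)

def posttest_probability(pretest_prob, lr):
    """Fagan-nomogram arithmetic: convert pretest probability to odds,
    multiply by the likelihood ratio, convert back to probability."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1.0 + posttest_odds)

# Example: pretest probability 30%, RNFL value with a likelihood ratio of 4
print(posttest_probability(0.30, 4.0))   # approximately 0.63
```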

  18. Self-adaptive MOEA feature selection for classification of bankruptcy prediction data.

    PubMed

    Gaspar-Cunha, A; Recio, G; Costa, L; Estébanez, C

    2014-01-01

    Bankruptcy prediction is a vast area of finance and accounting whose importance lies in the relevance for creditors and investors in evaluating the likelihood of a company going bankrupt. As companies become complex, they develop sophisticated schemes to hide their real situation. In turn, making an estimation of the credit risks associated with counterparts or predicting bankruptcy becomes harder. Evolutionary algorithms have been shown to be an excellent tool to deal with complex problems in finances and economics where a large number of irrelevant features are involved. This paper provides a methodology for feature selection in classification of bankruptcy data sets using an evolutionary multiobjective approach that simultaneously minimises the number of features and maximises the classifier quality measure (e.g., accuracy). The proposed methodology makes use of self-adaptation by applying the feature selection algorithm while simultaneously optimising the parameters of the classifier used. The methodology was applied to four different sets of data. The obtained results showed the utility of using the self-adaptation of the classifier.

  19. High-Resolution Measurement of the Turbulent Frequency-Wavenumber Power Spectrum in a Laboratory Magnetosphere

    NASA Astrophysics Data System (ADS)

    Qian, T. M.; Mauel, M. E.

    2017-10-01

    In a laboratory magnetosphere, plasma is confined by a strong dipole magnet, where interchange and entropy mode turbulence can be studied and controlled in near steady-state conditions. Whole-plasma imaging shows turbulence dominated by long wavelength modes having chaotic amplitudes and phases. Here we report, for the first time, a high-resolution measurement of the frequency-wavenumber power spectrum by applying the method of Capon to simultaneous multi-point measurements of electrostatic entropy modes using an array of floating potential probes. Unlike previously reported measurements in which ensemble correlation between two probes detected only the dominant wavenumber, Capon's "maximum likelihood method" uses all available probes to produce a frequency-wavenumber spectrum, showing the existence of modes propagating in both electron and ion magnetic drift directions. We also discuss the wider application of this technique to laboratory and magnetospheric plasmas with simultaneous multi-point measurements. Supported by NSF-DOE Partnership in Plasma Science Grant DE-FG02-00ER54585.
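
    For reference, Capon's estimator at a single frequency can be written as the inverse of a steering-vector quadratic form with the inverse cross-spectral matrix between probes. The sketch below assumes a one-dimensional probe array and a precomputed cross-spectral matrix; it is a generic illustration, not the authors' implementation.

```python
import numpy as np

def capon_wavenumber_spectrum(R, positions, wavenumbers):
    """Capon ('maximum likelihood method') estimate of the wavenumber power
    spectrum at one frequency. R is the cross-spectral matrix between probes
    at that frequency (n_probes x n_probes, Hermitian); positions are the
    probe coordinates along the array."""
    Rinv = np.linalg.pinv(R)
    power = np.empty(len(wavenumbers))
    for i, k in enumerate(wavenumbers):
        e = np.exp(1j * k * np.asarray(positions))    # steering vector
        power[i] = 1.0 / np.real(e.conj() @ Rinv @ e)
    return power
```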

  20. Self-Adaptive MOEA Feature Selection for Classification of Bankruptcy Prediction Data

    PubMed Central

    Gaspar-Cunha, A.; Recio, G.; Costa, L.; Estébanez, C.

    2014-01-01

    Bankruptcy prediction is a vast area of finance and accounting whose importance lies in the relevance for creditors and investors in evaluating the likelihood of a company going bankrupt. As companies become complex, they develop sophisticated schemes to hide their real situation. In turn, making an estimation of the credit risks associated with counterparts or predicting bankruptcy becomes harder. Evolutionary algorithms have been shown to be an excellent tool to deal with complex problems in finances and economics where a large number of irrelevant features are involved. This paper provides a methodology for feature selection in classification of bankruptcy data sets using an evolutionary multiobjective approach that simultaneously minimises the number of features and maximises the classifier quality measure (e.g., accuracy). The proposed methodology makes use of self-adaptation by applying the feature selection algorithm while simultaneously optimising the parameters of the classifier used. The methodology was applied to four different sets of data. The obtained results showed the utility of using the self-adaptation of the classifier. PMID:24707201

  1. Simultaneous skull-stripping and lateral ventricle segmentation via fast multi-atlas likelihood fusion

    NASA Astrophysics Data System (ADS)

    Tang, Xiaoying; Kutten, Kwame; Ceritoglu, Can; Mori, Susumu; Miller, Michael I.

    2015-03-01

    In this paper, we propose and validate a fully automated pipeline for simultaneous skull-stripping and lateral ventricle segmentation using T1-weighted images. The pipeline is built upon a segmentation algorithm entitled fast multi-atlas likelihood-fusion (MALF) which utilizes multiple T1 atlases that have been pre-segmented into six whole-brain labels - the gray matter, the white matter, the cerebrospinal fluid, the lateral ventricles, the skull, and the background of the entire image. This algorithm, MALF, was designed for estimating brain anatomical structures in the framework of coordinate changes via large diffeomorphisms. In the proposed pipeline, we use a variant of MALF to estimate those six whole-brain labels in the test T1-weighted image. The three tissue labels (gray matter, white matter, and cerebrospinal fluid) and the lateral ventricles are then grouped together to form a binary brain mask to which we apply morphological smoothing so as to create the final mask for brain extraction. For computational purposes, all input images to MALF are down-sampled by a factor of two. In addition, small deformations are used for the changes of coordinates. This substantially reduces the computational complexity, hence we use the term "fast MALF". The skull-stripping performance is qualitatively evaluated on a total of 486 brain scans from a longitudinal study on Alzheimer dementia. Quantitative error analysis is carried out on 36 scans for evaluating the accuracy of the pipeline in segmenting the lateral ventricle. The volumes of the automated lateral ventricle segmentations, obtained from the proposed pipeline, are compared across three different clinical groups. The ventricle volumes from our pipeline are found to be sensitive to the diagnosis.
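
    A minimal sketch of the mask-formation step (grouping the tissue labels into a binary brain mask and applying morphological smoothing before brain extraction) is given below; the label values and the closing/opening choice of smoothing are assumptions for illustration rather than the pipeline's exact settings.

```python
import numpy as np
from scipy import ndimage

def brain_mask_from_labels(label_volume, brain_labels=(1, 2, 3, 4), smooth_iters=2):
    """Form a brain-extraction mask from a MALF-style whole-brain labelling:
    group the tissue labels (e.g. GM, WM, CSF, lateral ventricles; the label
    values here are hypothetical) into a binary mask, then apply simple
    morphological smoothing (closing followed by opening) and hole filling."""
    mask = np.isin(label_volume, brain_labels)
    structure = ndimage.generate_binary_structure(3, 1)
    mask = ndimage.binary_closing(mask, structure, iterations=smooth_iters)
    mask = ndimage.binary_opening(mask, structure, iterations=smooth_iters)
    return ndimage.binary_fill_holes(mask)
```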

  2. Serum DHEA and Its Sulfate Are Associated With Incident Fall Risk in Older Men: The MrOS Sweden Study.

    PubMed

    Ohlsson, Claes; Nethander, Maria; Karlsson, Magnus K; Rosengren, Björn E; Ribom, Eva; Mellström, Dan; Vandenput, Liesbeth

    2018-03-12

    The adrenal-derived hormones dehydroepiandrosterone (DHEA) and its sulfate (DHEAS) are the most abundant circulating hormones and their levels decline substantially with age. Many of the actions of DHEAS are considered to be mediated through metabolism into androgens and estrogens in peripheral target tissues. The predictive value of serum DHEA and DHEAS for the likelihood of falling is unknown. The aim of this study was, therefore, to assess the associations between baseline DHEA and DHEAS levels and incident fall risk in a large cohort of older men. Serum DHEA and DHEAS levels were analyzed with mass spectrometry in the population-based Osteoporotic Fractures in Men study in Sweden (n = 2516, age 69 to 81 years). Falls were ascertained every 4 months by mailed questionnaires. Associations between steroid hormones and falls were estimated by generalized estimating equations. During a mean follow-up of 2.7 years, 968 (38.5%) participants experienced a fall. High serum levels of both DHEA (odds ratio [OR] per SD increase 0.85; 95% CI, 0.78 to 0.92) and DHEAS (OR 0.88, 95% CI, 0.81 to 0.95) were associated with a lower incident fall risk in models adjusted for age, BMI, and prevalent falls. Further adjustment for serum sex steroids or age-related comorbidities only marginally attenuated the associations between DHEA or DHEAS and the likelihood of falling. Moreover, the point estimates for DHEA and DHEAS were only slightly reduced after adjustment for lean mass and/or grip strength. Also, the addition of the narrow walk test did not substantially alter the associations between serum DHEA or DHEAS and fall risk. Finally, the association with incident fall risk remained significant for DHEA but not for DHEAS after simultaneous adjustment for lean mass, grip strength, and the narrow walk test. This suggests that the associations between DHEA and DHEAS and falls are only partially mediated via muscle mass, muscle strength, and/or balance. In conclusion, older men with high DHEA or DHEAS levels have a lesser likelihood of a fall. © 2018 American Society for Bone and Mineral Research.

  3. Measuring and partitioning the high-order linkage disequilibrium by multiple order Markov chains.

    PubMed

    Kim, Yunjung; Feng, Sheng; Zeng, Zhao-Bang

    2008-05-01

    A map of the background levels of disequilibrium between nearby markers can be useful for association mapping studies. In order to assess the background levels of linkage disequilibrium (LD), multilocus LD measures are more advantageous than pairwise LD measures because the combined analysis of pairwise LD measures is not adequate to detect simultaneous allele associations among multiple markers. Various multilocus LD measures based on haplotypes have been proposed. However, most of these measures provide a single index of association among multiple markers and do not reveal the complex patterns and different levels of LD structure. In this paper, we employ non-homogeneous, multiple order Markov Chain models as a statistical framework to measure and partition the LD among multiple markers into components due to different orders of marker associations. Using a sliding window of multiple markers on phased haplotype data, we compute corresponding likelihoods for different Markov Chain (MC) orders in each window. The log-likelihood difference between the lowest MC order model (MC0) and the highest MC order model in each window is used as a measure of the total LD or the overall deviation from the gametic equilibrium for the window. Then, we partition the total LD into lower order disequilibria and estimate the effects from two-, three-, and higher order disequilibria. The relationship between different orders of LD and the log-likelihood difference involving two different orders of MC models are explored. By applying our method to the phased haplotype data in the ENCODE regions of the HapMap project, we are able to identify high/low multilocus LD regions. Our results reveal that most of the LD in the HapMap data is attributed to the LD between adjacent pairs of markers across the whole region. LD between adjacent pairs of markers appears to be more significant in high multilocus LD regions than in low multilocus LD regions. We also find that as the multilocus total LD increases, the effects of high-order LD tend to get weaker due to the lack of observed multilocus haplotypes. The overall estimates of first, second, third, and fourth order LD across the ENCODE regions are 64, 23, 9, and 3%.
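
    The core quantity, the log-likelihood difference between a high-order and a zeroth-order Markov chain fitted to a window of phased haplotypes, can be computed directly from haplotype counts. The toy example below uses maximum-likelihood transition probabilities from a handful of three-marker haplotypes and is only meant to show the bookkeeping, not the ENCODE-scale analysis.

```python
import numpy as np
from collections import Counter

def markov_loglik(haplotypes, order):
    """Log-likelihood of phased haplotypes in a marker window under a
    Markov chain of the given order (0 = independent markers). Haplotypes
    are tuples of alleles; observed counts give the maximum-likelihood
    transition probabilities."""
    loglik = 0.0
    n_markers = len(haplotypes[0])
    for pos in range(n_markers):
        lo = max(0, pos - order)
        context = Counter(h[lo:pos] for h in haplotypes)
        joint = Counter(h[lo:pos + 1] for h in haplotypes)
        for h in haplotypes:
            loglik += np.log(joint[h[lo:pos + 1]] / context[h[lo:pos]])
    return loglik

# Total LD for the window = difference between the highest-order model
# and the zeroth-order (gametic equilibrium) model
haps = [(0, 1, 1), (0, 1, 1), (1, 0, 0), (1, 0, 1), (0, 1, 0)]
total_ld = markov_loglik(haps, order=2) - markov_loglik(haps, order=0)
print(total_ld)
```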

  4. SU-C-207A-01: A Novel Maximum Likelihood Method for High-Resolution Proton Radiography/proton CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins-Fekete, C; Centre Hospitalier University de Quebec, Quebec, QC; Mass General Hospital

    2016-06-15

    Purpose: Multiple Coulomb scattering is the largest contributor to blurring in proton imaging. Here we tested a maximum likelihood least squares estimator (MLLSE) to improve the spatial resolution of proton radiography (pRad) and proton computed tomography (pCT). Methods: The object is discretized into voxels and the average relative stopping power through voxel columns defined from the source to the detector pixels is optimized such that it maximizes the likelihood of the proton energy loss. The path length of individual protons in each column is calculated through an optimized cubic spline estimate. pRad images were first produced using Geant4 simulations. An anthropomorphic head phantom and the Catphan line-pair module for 3-D spatial resolution were studied and resulting images were analyzed. Both parallel and conical beams were investigated for simulated pRad acquisition. Then, experimental data of a pediatric head phantom (CIRS) were acquired using a recently completed experimental pCT scanner. Specific filters were applied on proton angle and energy loss data to remove proton histories that underwent nuclear interactions. The MTF10% (lp/mm) was used to evaluate and compare spatial resolution. Results: Numerical simulations showed improvement in the pRad spatial resolution for the parallel (2.75 to 6.71 lp/cm) and conical beam (3.08 to 5.83 lp/cm) reconstructed with the MLLSE compared to averaging detector pixel signals. For full tomographic reconstruction, the improved pRad were used as input into a simultaneous algebraic reconstruction algorithm. The Catphan pCT reconstruction based on the MLLSE-enhanced projection showed spatial resolution improvement for the parallel (2.83 to 5.86 lp/cm) and conical beam (3.03 to 5.15 lp/cm). The anthropomorphic head pCT displayed important contrast gains in high-gradient regions. Experimental results also demonstrated significant improvement in spatial resolution of the pediatric head radiography. Conclusion: The proposed MLLSE shows promising potential to increase the spatial resolution (up to 244%) in proton imaging.

  5. Pepsin in saliva for the diagnosis of gastro-oesophageal reflux disease.

    PubMed

    Hayat, Jamal O; Gabieta-Somnez, Shirley; Yazaki, Etsuro; Kang, Jin-Yong; Woodcock, Andrew; Dettmar, Peter; Mabary, Jerry; Knowles, Charles H; Sifrim, Daniel

    2015-03-01

    Current diagnostic methods for gastro-oesophageal reflux disease (GORD) have moderate sensitivity/specificity and can be invasive and expensive. Pepsin detection in saliva has been proposed as an 'office-based' method for GORD diagnosis. The aims of this study were to establish normal values of salivary pepsin in healthy asymptomatic subjects and to determine its value to discriminate patients with reflux-related symptoms (GORD, hypersensitive oesophagus (HO)) from functional heartburn (FH). 100 asymptomatic controls and 111 patients with heartburn underwent MII-pH monitoring and simultaneous salivary pepsin determination on waking, after lunch and dinner. The cut-off value for pepsin positivity was 16 ng/mL. Patients were divided into GORD (increased acid exposure time (AET), n=58); HO (normal AET and positive Symptom Association Probability (SAP), n=26) and FH (normal AET and negative SAP, n=27). One-third of asymptomatic subjects had pepsin in saliva at low concentration (0 (0-59) ng/mL). Patients with GORD and HO had higher prevalence and pepsin concentration than controls (HO: 237 (52-311) ng/mL; GORD: 121 (29-252) ng/mL; p<0.05). Patients with FH had low prevalence and concentration of pepsin in saliva (0 (0-40) ng/mL). A positive test had 78.6% sensitivity and 64.9% specificity for diagnosis of GORD+HO (likelihood ratio: 2.23). However, one positive sample with >210 ng/mL pepsin suggested presence of GORD+HO with 98.2% specificity (likelihood ratio: 25.1). Only 18/84 (21.4%) of GORD+HO patients had 3 negative samples. In patients with symptoms suggestive of GORD, salivary pepsin testing may complement questionnaires to assist office-based diagnosis. This may lessen the use of unnecessary antireflux therapy and the need for further invasive and expensive diagnostic methods. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  6. Large-area measurements of CIB power spectra with Planck HFI maps

    NASA Astrophysics Data System (ADS)

    Mak, D. S. Y.; Challinor, A.; Efstathiou, G.; Lagache, G.

    We present new measurements of the power spectra of the cosmic infrared background (CIB) anisotropies using the Planck 2015 full-mission HFI data at 353, 545, and 857 GHz over 20 000 square degrees. Unlike previous Planck measurements of the CIB power spectra, we do not rely on external HI data to remove Galactic dust emission from the Planck maps. Instead, we model the Galactic emission at the level of the power spectra, using templates constructed directly from the Planck data by exploiting the statistical isotropy of all extragalactic emission components. This allows us to work at the full resolution of Planck over large sky areas. We construct a likelihood based on the measured spectra (for multipoles 50 <= l <= 2500) using analytic covariance matrices that account for masking and the realistic instrumental noise properties. The results of an MCMC exploration of this likelihood are presented, based on simple parameterised models of the CIB power that arises from clustering of infrared galaxies. We explore simultaneously the parameters describing the clustered power, the Poisson power levels, and the amplitudes of the Galactic power spectrum templates across the six frequency (cross-)spectra. The best-fit model provides a good fit to all spectra. As an example, Fig. 1 compares the measured auto spectra at 353, 545, and 857 GHz over 40% of the sky to the power in the best-fit model. We find that the power in the CIB anisotropies from galaxy clustering is roughly equal to the Poisson power at multipoles l =2000 (the clustered power dominates on larger scales), and that our dust-cleaned CIB spectra are in good agreement with previous Planck and Herschel measurements. A key feature of our analysis is that it allows one to make many internal consistency tests. We show that our results are stable to data selection and choice of survey area, demonstrating both our ability to remove Galactic dust power to high accuracy and the statistical isotropy of the CIB signal.

  7. Which Statistic Should Be Used to Detect Item Preknowledge When the Set of Compromised Items Is Known?

    PubMed

    Sinharay, Sandip

    2017-09-01

    Benefiting from item preknowledge is a major type of fraudulent behavior during educational assessments. Belov suggested the posterior shift statistic for detection of item preknowledge and showed its performance to be better on average than that of seven other statistics for detection of item preknowledge for a known set of compromised items. Sinharay suggested a statistic based on the likelihood ratio test for detection of item preknowledge; the advantage of the statistic is that its null distribution is known. Results from simulated and real data and adaptive and nonadaptive tests are used to demonstrate that the Type I error rate and power of the statistic based on the likelihood ratio test are very similar to those of the posterior shift statistic. Thus, the statistic based on the likelihood ratio test appears promising in detecting item preknowledge when the set of compromised items is known.

  8. Likelihood-Ratio DIF Testing: Effects of Nonnormality

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2008-01-01

    Differential item functioning (DIF) occurs when an item has different measurement properties for members of one group versus another. Likelihood-ratio (LR) tests for DIF based on item response theory (IRT) involve statistically comparing IRT models that vary with respect to their constraints. A simulation study evaluated how violation of the…

  9. Robust Multipoint Water-Fat Separation Using Fat Likelihood Analysis

    PubMed Central

    Yu, Huanzhou; Reeder, Scott B.; Shimakawa, Ann; McKenzie, Charles A.; Brittain, Jean H.

    2016-01-01

    Fat suppression is an essential part of routine MRI scanning. Multiecho chemical-shift based water-fat separation methods estimate and correct for B0 field inhomogeneity. However, they must contend with the intrinsic challenge of water-fat ambiguity that can result in water-fat swapping. This problem arises because the signals from two chemical species, when both are modeled as a single discrete spectral peak, may appear indistinguishable in the presence of B0 off-resonance. In conventional methods, the water-fat ambiguity is typically removed by enforcing field map smoothness using region growing based algorithms. In reality, the fat spectrum has multiple spectral peaks. Using this spectral complexity, we introduce a novel concept that identifies water and fat for multiecho acquisitions by exploiting the spectral differences between water and fat. A fat likelihood map is produced to indicate if a pixel is likely to be water-dominant or fat-dominant by comparing the fitting residuals of two different signal models. The fat likelihood analysis and field map smoothness provide complementary information, and we designed an algorithm (Fat Likelihood Analysis for Multiecho Signals) to exploit both mechanisms. It is demonstrated in a wide variety of data that the Fat Likelihood Analysis for Multiecho Signals algorithm offers highly robust water-fat separation for 6-echo acquisitions, particularly in some previously challenging applications. PMID:21842498
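
    One simple way to turn the residuals of two competing signal fits into a per-pixel fat-likelihood value is a softmax of the negative residuals, as sketched below. The actual multi-peak fitting and the way the algorithm above combines this map with field-map smoothness are not reproduced, and the softmax form is an assumption for illustration.

```python
import numpy as np

def fat_likelihood_map(residual_water_model, residual_fat_model):
    """Pixel-wise score that a voxel is fat-dominant, formed from the fitting
    residuals of two competing signal models (water-dominant vs fat-dominant
    multi-peak fits). A smaller residual means a better fit; a softmax of the
    negative residuals is one simple way to map them to [0, 1]."""
    r_w = np.asarray(residual_water_model, dtype=float)
    r_f = np.asarray(residual_fat_model, dtype=float)
    return np.exp(-r_f) / (np.exp(-r_f) + np.exp(-r_w))
```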

  10. Three methods to construct predictive models using logistic regression and likelihood ratios to facilitate adjustment for pretest probability give similar results.

    PubMed

    Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les

    2008-01-01

    To compare three predictive models based on logistic regression to estimate adjusted likelihood ratios allowing for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models, a statistical extension of the methods, and an application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. The Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependency of test errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.
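
    As an illustration of the shared idea behind these regression-based methods (without reproducing any one of them exactly), the sketch below fits a logistic regression to correlated binary test results and reports an adjusted likelihood ratio as the ratio of posttest to pretest odds; the data and prevalence are simulated for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: three correlated binary test results and disease status
rng = np.random.default_rng(0)
disease = rng.integers(0, 2, 500)
tests = np.column_stack([
    (rng.random(500) < np.where(disease, 0.8, 0.3)).astype(int) for _ in range(3)
])

model = LogisticRegression().fit(tests, disease)

def adjusted_lr(test_pattern, pretest_prob):
    """Adjusted likelihood ratio for a combination of (possibly dependent)
    test results: the ratio of posttest odds from the fitted regression to
    the pretest odds. This mirrors the general idea compared in the paper,
    not any one of the three methods exactly."""
    p = model.predict_proba(np.atleast_2d(test_pattern))[0, 1]
    posttest_odds = p / (1 - p)
    pretest_odds = pretest_prob / (1 - pretest_prob)
    return posttest_odds / pretest_odds

print(adjusted_lr([1, 1, 0], pretest_prob=disease.mean()))
```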

  11. pplacer: linear time maximum-likelihood and Bayesian phylogenetic placement of sequences onto a fixed reference tree

    PubMed Central

    2010-01-01

    Background Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It also can inform the user of the positional uncertainty for query sequences by calculating expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service. PMID:21034504

  12. Evaluating Fast Maximum Likelihood-Based Phylogenetic Programs Using Empirical Phylogenomic Data Sets

    PubMed Central

    Zhou, Xiaofan; Shen, Xing-Xing; Hittinger, Chris Todd

    2018-01-01

    Abstract The sizes of the data matrices assembled to resolve branches of the tree of life have increased dramatically, motivating the development of programs for fast, yet accurate, inference. For example, several different fast programs have been developed in the very popular maximum likelihood framework, including RAxML/ExaML, PhyML, IQ-TREE, and FastTree. Although these programs are widely used, a systematic evaluation and comparison of their performance using empirical genome-scale data matrices has so far been lacking. To address this question, we evaluated these four programs on 19 empirical phylogenomic data sets with hundreds to thousands of genes and up to 200 taxa with respect to likelihood maximization, tree topology, and computational speed. For single-gene tree inference, we found that the more exhaustive and slower strategies (ten searches per alignment) outperformed faster strategies (one tree search per alignment) using RAxML, PhyML, or IQ-TREE. Interestingly, single-gene trees inferred by the three programs yielded comparable coalescent-based species tree estimations. For concatenation-based species tree inference, IQ-TREE consistently achieved the best-observed likelihoods for all data sets, and RAxML/ExaML was a close second. In contrast, PhyML often failed to complete concatenation-based analyses, whereas FastTree was the fastest but generated lower likelihood values and more dissimilar tree topologies in both types of analyses. Finally, data matrix properties, such as the number of taxa and the strength of phylogenetic signal, sometimes substantially influenced the programs’ relative performance. Our results provide real-world gene and species tree phylogenetic inference benchmarks to inform the design and execution of large-scale phylogenomic data analyses. PMID:29177474

  13. Empirical likelihood-based confidence intervals for mean medical cost with censored data.

    PubMed

    Jeyarajah, Jenny; Qin, Gengsheng

    2017-11-10

    In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with that of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performances than existing methods. Finally, we illustrate our proposed methods with a relevant example. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Approximated maximum likelihood estimation in multifractal random walks

    NASA Astrophysics Data System (ADS)

    Løvsletten, O.; Rypdal, M.

    2012-04-01

    We present an approximated maximum likelihood method for the multifractal random walk processes of [E. Bacry et al., Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the R computer language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.

  15. Organic rankine cycle system for use with a reciprocating engine

    DOEpatents

    Radcliff, Thomas D.; McCormick, Duane; Brasz, Joost J.

    2006-01-17

    In a waste heat recovery system wherein an organic rankine cycle system uses waste heat from the fluids of a reciprocating engine, provision is made to continue operation of the engine even during periods when the organic rankine cycle system is inoperative, by providing an auxiliary pump and a bypass for the refrigerant flow around the turbine. Provision is also made to divert the engine exhaust gases from the evaporator during such periods of operation. In one embodiment, the auxiliary pump is made to operate simultaneously with the primary pump during normal operations, thereby allowing the primary pump to operate at lower speeds with less likelihood of cavitation.

  16. Precision cosmology from X-ray AGN clustering

    NASA Astrophysics Data System (ADS)

    Basilakos, Spyros; Plionis, Manolis

    2009-11-01

    We place tight constraints on the main cosmological parameters of spatially flat cosmological models by using the recent angular clustering results of XMM-Newton soft (0.5-2 keV) X-ray sources, which have a redshift distribution with a median of z ~ 1. Performing a standard likelihood procedure, assuming constant (in comoving coordinates) active galactic nuclei (AGN) clustering evolution, the AGN bias evolution model of Basilakos, Plionis & Ragone-Figueroa, and the Wilkinson Microwave Anisotropy Probe 5-year (WMAP5) value of σ8, we find stringent simultaneous constraints in the (Ωm, w) plane, with Ωm = 0.26 ± 0.05 and w = -0.93 (+0.11, -0.19).

  17. Pediatric Status Epilepticus Management

    PubMed Central

    Abend, Nicholas S; Loddenkemper, Tobias

    2014-01-01

    Purpose of Review This review discusses management of status epilepticus in children including both anticonvulsant medications and overall management approaches. Recent Findings Rapid management of status epilepticus is associated with a greater likelihood of seizure termination and better outcomes, yet data indicate there are often management delays. This review discusses an overall management approach aiming to simultaneously identify and manage underlying precipitant etiologies, administer anticonvulsants in rapid succession until seizures have terminated, and identify and manage systemic complications. An example management pathway is provided. Summary Status epilepticus is a common neurologic emergency in children and requires rapid intervention. Having a predetermined status epilepticus management pathway can expedite management. PMID:25304961

  18. Low Yield of Paired Head and Cervical Spine Computed Tomography in Blunt Trauma Evaluation.

    PubMed

    Graterol, Joseph; Beylin, Maria; Whetstone, William D; Matzoll, Ashleigh; Burke, Rennie; Talbott, Jason; Rodriguez, Robert M

    2018-06-01

    With increased computed tomography (CT) utilization, clinicians may simultaneously order head and neck CT scans, even when injury is suspected only in one region. We sought to determine: 1) the frequency of simultaneous ordering of a head CT scan when a neck CT scan is ordered; 2) the yields of simultaneously ordered head and neck CT scans for clinically significant injury (CSI); and 3) whether injury in one region is associated with a higher rate of injury in the other. This was a retrospective study of all adult patients who received neck CT scans (and simultaneously ordered head CT scans) as part of their blunt trauma evaluation at an urban level 1 trauma center in 2013. An expert panel determined CSI of head and neck injuries. We defined yield as number of patients with injury/number of patients who had a CT scan. Of 3223 patients who met inclusion criteria, 2888 (89.6%) had simultaneously ordered head and neck CT scans. CT yield for CSI in both the head and neck was 0.5% (95% confidence interval [CI] 0.3-0.8%), and the yield for any injury in both the head and neck was 1.4% (95% CI 1.0-1.8%). The yield for CSI in one region was higher when CSI was seen in the other region. The yield of CT for CSI in both the head and neck concomitantly is very low. When injury is seen in one region, there is higher likelihood of injury in the other. These findings argue against paired ordering of head and neck CT scans and suggest that CT scans should be ordered individually or when injury is detected in one region. Copyright © 2018 Elsevier Inc. All rights reserved.

  19. Model averaging techniques for quantifying conceptual model uncertainty.

    PubMed

    Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg

    2010-01-01

    In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
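
    For the criterion-based techniques, the model weights follow the standard Akaike-weight arithmetic: relative likelihoods exp(-ΔIC/2) of the information-criterion differences, normalized over the candidate models. A minimal sketch with made-up AIC values is shown below; BIC- or KIC-based weights for the MLBMA variants follow the same pattern.

```python
import numpy as np

def aic_model_weights(aic_values):
    """Akaike weights for a set of alternative conceptual models: relative
    likelihoods exp(-0.5 * delta AIC), normalized to sum to one."""
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()
    rel_lik = np.exp(-0.5 * delta)
    return rel_lik / rel_lik.sum()

# Example: three alternative groundwater model conceptualizations
print(aic_model_weights([210.3, 212.1, 219.8]))
```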

  20. Empirical likelihood-based tests for stochastic ordering

    PubMed Central

    BARMI, HAMMOU EL; MCKEAGUE, IAN W.

    2013-01-01

    This paper develops an empirical likelihood approach to testing for the presence of stochastic ordering among univariate distributions based on independent random samples from each distribution. The proposed test statistic is formed by integrating a localized empirical likelihood statistic with respect to the empirical distribution of the pooled sample. The asymptotic null distribution of this test statistic is found to have a simple distribution-free representation in terms of standard Brownian bridge processes. The approach is used to compare the lengths of rule of Roman Emperors over various historical periods, including the “decline and fall” phase of the empire. In a simulation study, the power of the proposed test is found to improve substantially upon that of a competing test due to El Barmi and Mukerjee. PMID:23874142

  1. Accurate recapture identification for genetic mark–recapture studies with error-tolerant likelihood-based match calling and sample clustering

    USGS Publications Warehouse

    Sethi, Suresh; Linden, Daniel; Wenburg, John; Lewis, Cara; Lemons, Patrick R.; Fuller, Angela K.; Hare, Matthew P.

    2016-01-01

    Error-tolerant likelihood-based match calling presents a promising technique to accurately identify recapture events in genetic mark–recapture studies by combining probabilities of latent genotypes and probabilities of observed genotypes, which may contain genotyping errors. Combined with clustering algorithms to group samples into sets of recaptures based upon pairwise match calls, these tools can be used to reconstruct accurate capture histories for mark–recapture modelling. Here, we assess the performance of a recently introduced error-tolerant likelihood-based match-calling model and sample clustering algorithm for genetic mark–recapture studies. We assessed both biallelic (i.e. single nucleotide polymorphisms; SNP) and multiallelic (i.e. microsatellite; MSAT) markers using a combination of simulation analyses and case study data on Pacific walrus (Odobenus rosmarus divergens) and fishers (Pekania pennanti). A novel two-stage clustering approach is demonstrated for genetic mark–recapture applications. First, repeat captures within a sampling occasion are identified. Subsequently, recaptures across sampling occasions are identified. The likelihood-based matching protocol performed well in simulation trials, demonstrating utility for use in a wide range of genetic mark–recapture studies. Moderately sized SNP (64+) and MSAT (10–15) panels produced accurate match calls for recaptures and accurate non-match calls for samples from closely related individuals in the face of low to moderate genotyping error. Furthermore, matching performance remained stable or increased as the number of genetic markers increased, genotyping error notwithstanding.

  2. Statistical modelling of growth using a mixed model with orthogonal polynomials.

    PubMed

    Suchocki, T; Szyda, J

    2011-02-01

    In statistical modelling, the effects of single-nucleotide polymorphisms (SNPs) are often regarded as time-independent. However, for traits recorded repeatedly, it is very interesting to investigate the behaviour of gene effects over time. In the analysis, simulated data from the 13th QTL-MAS Workshop (Wageningen, The Netherlands, April 2009) were used, and the major goal was to model genetic effects as time-dependent. For this purpose, a mixed model is fitted that describes each effect using third-order Legendre orthogonal polynomials in order to account for the correlation between consecutive measurements. In this model, SNPs are modelled as fixed effects, while the environment is modelled as a random effect. The maximum likelihood estimates of model parameters are obtained by the expectation-maximisation (EM) algorithm and the significance of the additive SNP effects is based on the likelihood ratio test, with p-values corrected for multiple testing. For each significant SNP, the percentage of the total variance contributed by this SNP is calculated. Moreover, by using a model which simultaneously incorporates effects of all of the SNPs, the prediction of future yields is conducted. As a result, 179 of the total of 453 SNPs covering 16 out of 18 true quantitative trait loci (QTL) were selected. The correlation between predicted and true breeding values was 0.73 for the data set with all SNPs and 0.84 for the data set with selected SNPs. In conclusion, we showed that a longitudinal approach allows for estimating changes of the variance contributed by each SNP over time and demonstrated that, for prediction, the pre-selection of SNPs plays an important role.
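
    The time-dependence of each SNP effect comes from evaluating Legendre polynomials at rescaled measurement times; a small sketch of that design matrix, and of recovering one SNP's effect trajectory from hypothetical fitted coefficients, is given below (the coefficient values are invented for illustration).

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_design(times, order=3):
    """Design matrix of Legendre polynomials up to the given order,
    evaluated at measurement times rescaled to [-1, 1]. Multiplying fitted
    coefficients for one SNP by this matrix gives its time-dependent effect."""
    t = np.asarray(times, dtype=float)
    x = 2.0 * (t - t.min()) / (t.max() - t.min()) - 1.0    # rescale to [-1, 1]
    return np.column_stack(
        [legendre.legval(x, [0] * k + [1]) for k in range(order + 1)]
    )

# Example: effect trajectory of one SNP over 5 recording times
Phi = legendre_design([1, 2, 3, 4, 5], order=3)
coeffs = np.array([0.4, -0.1, 0.05, 0.02])   # hypothetical fitted SNP coefficients
print(Phi @ coeffs)
```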

  3. Clinical correlates of co-occurring cannabis and tobacco use: a systematic review.

    PubMed

    Peters, Erica N; Budney, Alan J; Carroll, Kathleen M

    2012-08-01

      A growing literature has documented the substantial prevalence of and putative mechanisms underlying co-occurring (i.e. concurrent or simultaneous) cannabis and tobacco use. Greater understanding of the clinical correlates of co-occurring cannabis and tobacco use may suggest how intervention strategies may be refined to improve cessation outcomes and decrease the public health burden associated with cannabis and tobacco use.   A systematic review of the literature on clinical diagnoses, psychosocial problems and outcomes associated with co-occurring cannabis and tobacco use. Twenty-eight studies compared clinical correlates in co-occurring cannabis and tobacco users versus cannabis- or tobacco-only users. These included studies of treatment-seekers in clinical trials and non-treatment-seekers in cross-sectional or longitudinal epidemiological or non-population-based surveys.   Sixteen studies examined clinical diagnoses, four studies examined psychosocial problems and 11 studies examined cessation outcomes in co-occurring cannabis and tobacco users (several studies examined multiple clinical correlates). Relative to cannabis use only, co-occurring cannabis and tobacco use was associated with a greater likelihood of cannabis use disorders, more psychosocial problems and poorer cannabis cessation outcomes. Relative to tobacco use only, co-occurring use did not appear to be associated consistently with a greater likelihood of tobacco use disorders, more psychosocial problems or poorer tobacco cessation outcomes.   Cannabis users who also smoke tobacco are more dependent on cannabis, have more psychosocial problems and have poorer cessation outcomes than those who use cannabis but not tobacco. The converse does not appear to be the case. © 2012 The Authors, Addiction © 2012 Society for the Study of Addiction.

  4. Fast maximum likelihood estimation using continuous-time neural point process models.

    PubMed

    Lepage, Kyle Q; MacDonald, Christopher J

    2015-06-01

    A recent report estimates that the number of simultaneously recorded neurons is growing exponentially. A commonly employed statistical paradigm using discrete-time point process models of neural activity involves the computation of a maximum-likelihood estimate. The time to compute this estimate, per neuron, is proportional to the number of bins in a finely spaced discretization of time. By using continuous-time models of neural activity and the optimally efficient Gaussian quadrature, memory requirements and computation times are dramatically decreased in the commonly encountered situation where the number of parameters p is much less than the number of time-bins n. In this regime, with q equal to the quadrature order, memory requirements are decreased from O(np) to O(qp), and the number of floating-point operations is decreased from O(np(2)) to O(qp(2)). Accuracy of the proposed estimates is assessed based upon physiological considerations, error bounds, and mathematical results describing the relation between numerical integration error and numerical error affecting both parameter estimates and the observed Fisher information. A check is provided which is used to adapt the order of numerical integration. The procedure is verified in simulation and for hippocampal recordings. It is found that in 95 % of hippocampal recordings a q of 60 yields numerical error negligible with respect to parameter estimate standard error. Statistical inference using the proposed methodology is a fast and convenient alternative to statistical inference performed using a discrete-time point process model of neural activity. It enables the employment of the statistical methodology available with discrete-time inference, but is faster, uses less memory, and avoids any error due to discretization.
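
    The saving comes from replacing the fine discrete-time sum over bins in the point-process log-likelihood with a low-order quadrature of the rate integral. The sketch below uses Gauss-Legendre quadrature for a generic intensity function; the rate model, quadrature order, and example spike times are placeholders rather than the authors' hippocampal model.

```python
import numpy as np

def point_process_loglik(spike_times, rate_fn, T, quad_order=60):
    """Continuous-time point-process log-likelihood
        sum_i log(lambda(t_i)) - integral_0^T lambda(t) dt,
    with the integral evaluated by Gauss-Legendre quadrature of order q
    instead of a fine time discretization (cost O(q) rather than O(n_bins))."""
    nodes, weights = np.polynomial.legendre.leggauss(quad_order)
    # map quadrature nodes from [-1, 1] to [0, T]
    t = 0.5 * T * (nodes + 1.0)
    integral = 0.5 * T * np.sum(weights * rate_fn(t))
    return np.sum(np.log(rate_fn(np.asarray(spike_times)))) - integral

# Example: inhomogeneous rate lambda(t) = 5 + 3 sin(2*pi*t) over a 10 s recording
rate = lambda t: 5.0 + 3.0 * np.sin(2 * np.pi * t)
print(point_process_loglik([0.2, 1.3, 4.7, 9.9], rate, T=10.0))
```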

  5. Are hospitals "keeping up with the Joneses"?: Assessing the spatial and temporal diffusion of the surgical robot.

    PubMed

    Li, Huilin; Gail, Mitchell H; Braithwaite, R Scott; Gold, Heather T; Walter, Dawn; Liu, Mengling; Gross, Cary P; Makarov, Danil V

    2014-07-01

    The surgical robot has been widely adopted in the United States in spite of its high cost and controversy surrounding its benefit. Some have suggested that a "medical arms race" influences technology adoption. We wanted to determine whether a hospital would acquire a surgical robot if its nearest neighboring hospital already owned one. We identified 554 hospitals performing radical prostatectomy from the Healthcare Cost and Utilization Project Statewide Inpatient Databases for seven states. We used publicly available data from the website of the surgical robot's sole manufacturer (Intuitive Surgical, Sunnyvale, CA) combined with data collected from the hospitals to ascertain the timing of robot acquisition during the years 2001 to 2008. One hundred thirty-four hospitals (24%) had acquired a surgical robot by the end of 2008. We geocoded the address of each hospital and determined a hospital's likelihood of acquiring a surgical robot based on whether its nearest neighbor owned one. We developed a Markov chain method to model the acquisition process spatially and temporally and quantified the "neighborhood effect" on the acquisition of the surgical robot while adjusting simultaneously for known confounders. After adjusting for hospital teaching status, surgical volume, urban status and number of hospital beds, the Markov chain analysis demonstrated that a hospital whose nearest neighbor had acquired a surgical robot had a higher likelihood of itself acquiring a surgical robot (OR = 1.71; 95% CI: 1.07-2.72; p = 0.02). There is a significant spatial and temporal association for hospitals acquiring surgical robots during the study period. Hospitals were more likely to acquire a surgical robot during the robot's early adoption phase if their nearest neighbor had already done so.

  6. Firearm Ownership and Acquisition Among Parents With Risk Factors for Self-Harm or Other Violence.

    PubMed

    Ladapo, Joseph A; Elliott, Marc N; Kanouse, David E; Schwebel, David C; Toomey, Sara L; Mrug, Sylvie; Cuccaro, Paula M; Tortolero, Susan R; Schuster, Mark A

    Recent policy initiatives aiming to reduce firearm morbidity focus on mental health and illness. However, few studies have simultaneously examined mental health and behavioral predictors within families, or their longitudinal association with newly acquiring a firearm. Population-based, longitudinal survey of 4251 parents of fifth-grade students in 3 US metropolitan areas; 2004 to 2011. Multivariate logistic models were used to assess associations between owning or acquiring a firearm and parent mental illness and substance use. Ninety-three percent of parents interviewed were women. Overall, 19.6% of families reported keeping a firearm in the home. After adjustment for confounders, history of depression (adjusted odds ratio [aOR], 1.36; 95% confidence interval [CI], 1.04-1.77), binge drinking (aOR 1.75; 95% CI, 1.14-2.68), and illicit drug use (aOR 1.75; 95% CI, 1.12-2.76) were associated with a higher likelihood of keeping a firearm in the home. After a mean of 3.1 years, 6.1% of parents who did not keep a firearm in the home at baseline acquired one by follow-up and kept it in the home (average annual likelihood = 2.1%). No risk factors for self-harm or other violence were associated with newly acquiring a gun in the home. Families with risk factors for self-harm or other violence have a modestly greater probability of having a firearm in the home compared with families without risk factors, and similar probability of newly acquiring a firearm. Treatment interventions for many of these risk factors might reduce firearm-related morbidity. Copyright © 2016 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.

  7. Decoding fMRI events in sensorimotor motor network using sparse paradigm free mapping and activation likelihood estimates.

    PubMed

    Tan, Francisca M; Caballero-Gaudes, César; Mullinger, Karen J; Cho, Siu-Yeung; Zhang, Yaping; Dryden, Ian L; Francis, Susan T; Gowland, Penny A

    2017-11-01

    Most functional MRI (fMRI) studies map task-driven brain activity using a block or event-related paradigm. Sparse paradigm free mapping (SPFM) can detect the onset and spatial distribution of BOLD events in the brain without prior timing information, but relating the detected events to brain function remains a challenge. In this study, we developed a decoding method for SPFM using a coordinate-based meta-analysis method of activation likelihood estimation (ALE). We defined meta-maps of statistically significant ALE values that correspond to types of events and calculated a summation overlap between the normalized meta-maps and SPFM maps. As a proof of concept, this framework was applied to relate SPFM-detected events in the sensorimotor network (SMN) to six motor functions (left/right fingers, left/right toes, swallowing, and eye blinks). We validated the framework using simultaneous electromyography (EMG)-fMRI experiments and motor tasks with short and long duration, and random interstimulus interval. The decoding scores were considerably lower for eye movements relative to other movement types tested. The average success rates for short and long motor events were 77 ± 13% and 74 ± 16%, respectively, excluding eye movements. We found good agreement between the decoding results and EMG for most events and subjects, with a range in sensitivity between 55% and 100%, excluding eye movements. The proposed method was then used to classify the movement types of spontaneous single-trial events in the SMN during resting state, which produced an average success rate of 22 ± 12%. Finally, this article discusses methodological implications and improvements to increase the decoding performance. Hum Brain Mapp 38:5778-5794, 2017. © 2017 Wiley Periodicals, Inc.
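
    A rough sketch of the summation-overlap decoding step as described: each ALE meta-map and the SPFM event map are normalized, multiplied voxel-wise, and summed, and the event is assigned to the movement type with the largest overlap. The array shapes, normalization choice, and toy data are assumptions.

```python
import numpy as np

def decode_event(spfm_map, meta_maps):
    """Assign an SPFM-detected event to the movement type whose ALE
    meta-map has the largest summation overlap with the event map.

    spfm_map : 1D array of voxel values for one detected event
    meta_maps: dict mapping movement label -> 1D array (same voxel grid)
    """
    def normalize(m):
        m = np.clip(m, 0, None)          # keep positive activation only
        s = m.sum()
        return m / s if s > 0 else m

    event = normalize(spfm_map)
    scores = {label: float(np.sum(event * normalize(m)))
              for label, m in meta_maps.items()}
    best = max(scores, key=scores.get)
    return best, scores

# toy usage on a 1000-voxel grid with two illustrative meta-maps
rng = np.random.default_rng(2)
maps = {"right_fingers": rng.random(1000), "swallowing": rng.random(1000)}
label, scores = decode_event(rng.random(1000), maps)
print(label, scores)
```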

  8. Decoding fMRI events in Sensorimotor Motor Network using Sparse Paradigm Free Mapping and Activation Likelihood Estimates

    PubMed Central

    Tan, Francisca M.; Caballero-Gaudes, César; Mullinger, Karen J.; Cho, Siu-Yeung; Zhang, Yaping; Dryden, Ian L.; Francis, Susan T.; Gowland, Penny A.

    2017-01-01

    Most fMRI studies map task-driven brain activity using a block or event-related paradigm. Sparse Paradigm Free Mapping (SPFM) can detect the onset and spatial distribution of BOLD events in the brain without prior timing information; but relating the detected events to brain function remains a challenge. In this study, we developed a decoding method for SPFM using a coordinate-based meta-analysis method of Activation Likelihood Estimation (ALE). We defined meta-maps of statistically significant ALE values that correspond to types of events and calculated a summation overlap between the normalized meta-maps and SPFM maps. As a proof of concept, this framework was applied to relate SPFM-detected events in the Sensorimotor Network (SMN) to six motor functions (left/right fingers, left/right toes, swallowing, and eye blinks). We validated the framework using simultaneous Electromyography (EMG)-fMRI experiments and motor tasks with short and long duration, and random inter-stimulus interval. The decoding scores were considerably lower for eye movements relative to other movement types tested. The average success rates for short and long motor events were 77 ± 13% and 74 ± 16%, respectively, excluding eye movements. We found good agreement between the decoding results and EMG for most events and subjects, with a range in sensitivity between 55 and 100%, excluding eye movements. The proposed method was then used to classify the movement types of spontaneous single-trial events in the SMN during resting state, which produced an average success rate of 22 ± 12%. Finally, this paper discusses methodological implications and improvements to increase the decoding performance. PMID:28815863

  9. Modifiable predictors of insufficient sleep durations: A longitudinal analysis of youth in the COMPASS study.

    PubMed

    Patte, Karen A; Qian, Wei; Leatherdale, Scott T

    2018-01-01

    The purpose of the current study was to simultaneously examine commonly proposed risk and protective factors for sleep deprivation over time among a large cohort of Ontario and Alberta secondary school students. Using 4-year linked longitudinal data from youth in years 1 through 4 (Y1 [2012/2013], Y2 [2013/2014], Y3 [2014/2015], Y4 [2015/2016]) of the COMPASS study (n=26,205), the likelihood of students meeting contemporary sleep recommendations was tested based on their self-reported substance use, bullying victimization, physical activity, and homework and screen time. Models controlled for the effect of student-reported gender, race/ethnicity, grade, school clustering, and all other predictor variables. Relative to baseline, students became less likely to meet the sleep recommendations if at follow-up they had initiated binge drinking, experienced cyber bullying victimization, or were spending more time doing homework, with other factors held constant. The likelihood of reporting sufficient sleep increased if students had begun engaging in resistance training at least three times a week. No longitudinal effect was observed when students increased their caffeine consumption (energy drinks, coffee/tea), initiated cannabis or tobacco use, experienced other forms of bullying victimization (physical, verbal, or belongings), engaged in more moderate-vigorous physical activity, or increased their screen use of any type. Few of the commonly purported modifiable risk and protective factors for youth sleep deprivation held in multinomial longitudinal analyses. Causal conclusions appear premature, with further research required to confirm the targets likely to be most effective in assisting more youth in meeting the sleep recommendations. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    ERIC Educational Resources Information Center

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…

  11. Expected versus Observed Information in SEM with Incomplete Normal and Nonnormal Data

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2010-01-01

    Maximum likelihood is the most common estimation method in structural equation modeling. Standard errors for maximum likelihood estimates are obtained from the associated information matrix, which can be estimated from the sample using either expected or observed information. It is known that, with complete data, estimates based on observed or…

  12. Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods

    ERIC Educational Resources Information Center

    Zhong, Xiaoling; Yuan, Ke-Hai

    2011-01-01

    In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…

  13. Five Methods for Estimating Angoff Cut Scores with IRT

    ERIC Educational Resources Information Center

    Wyse, Adam E.

    2017-01-01

    This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a priori (EAP), modal a priori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…

  14. The Neural Bases of Difficult Speech Comprehension and Speech Production: Two Activation Likelihood Estimation (ALE) Meta-Analyses

    ERIC Educational Resources Information Center

    Adank, Patti

    2012-01-01

    The role of speech production mechanisms in difficult speech comprehension is the subject of on-going debate in speech science. Two Activation Likelihood Estimation (ALE) analyses were conducted on neuroimaging studies investigating difficult speech comprehension or speech production. Meta-analysis 1 included 10 studies contrasting comprehension…

  15. A likelihood-based time series modeling approach for application in dendrochronology to examine the growth-climate relations and forest disturbance history

    EPA Science Inventory

    A time series intervention analysis (TSIA) of dendrochronological data to infer the tree growth-climate-disturbance relations and forest disturbance history is described. Maximum likelihood is used to estimate the parameters of a structural time series model with components for ...

  16. Workplace air measurements and likelihood of exposure to manufactured nano-objects, agglomerates, and aggregates

    NASA Astrophysics Data System (ADS)

    Brouwer, Derk H.; van Duuren-Stuurman, Birgit; Berges, Markus; Bard, Delphine; Jankowska, Elzbieta; Moehlmann, Carsten; Pelzer, Johannes; Mark, Dave

    2013-11-01

    Manufactured nano-objects, agglomerates, and aggregates (NOAA) may have adverse effects on human health, but little is known about occupational risks since actual estimates of exposure are lacking. In a large-scale workplace air-monitoring campaign, 19 enterprises were visited and 120 potential exposure scenarios were measured. A multi-metric exposure assessment approach was followed and a decision logic was developed to afford analysis of all results in concert. The overall evaluation was classified by categories of likelihood of exposure. At the task level, about 53% of scenarios showed increased particle number or surface area concentration compared to the "background" level, whereas 72% of the TEM samples revealed an indication that NOAA were present in the workplace. For 54 out of the 120 task-based exposure scenarios, an overall evaluation could be made based on all parameters of the decision logic. For only 1 exposure scenario (approximately 2%) was the highest level of potential likelihood assigned, whereas in 56% of the exposure scenarios the overall evaluation revealed the lowest level of likelihood. However, for the remaining 42%, exposure to NOAA could not be excluded.

  17. A model for evidence accumulation in the lexical decision task.

    PubMed

    Wagenmakers, Eric-Jan; Steyvers, Mark; Raaijmakers, Jeroen G W; Shiffrin, Richard M; van Rijn, Hedderik; Zeelenberg, René

    2004-05-01

    We present a new model for lexical decision, REM-LD, that is based on REM theory (e.g., ). REM-LD uses a principled (i.e., Bayes' rule) decision process that simultaneously considers the diagnosticity of the evidence for the 'WORD' response and the 'NONWORD' response. The model calculates the odds ratio that the presented stimulus is a word or a nonword by averaging likelihood ratios for lexical entries from a small neighborhood of similar words. We report two experiments that used a signal-to-respond paradigm to obtain information about the time course of lexical processing. Experiment 1 verified the prediction of the model that the frequency of the word stimuli affects performance for nonword stimuli. Experiment 2 was done to study the effects of nonword lexicality, word frequency, and repetition priming and to demonstrate how REM-LD can account for the observed results. We discuss how REM-LD could be extended to account for effects of phonology such as the pseudohomophone effect, and how REM-LD can predict response times in the traditional 'respond-when-ready' paradigm.
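
    A minimal sketch of the decision rule described above, with placeholder numbers rather than REM-LD's actual likelihood computations: likelihood ratios from a neighborhood of similar lexical entries are averaged to give the WORD/NONWORD odds, which are then compared with a criterion.

```python
import numpy as np

def lexical_decision(likelihood_ratios, criterion=1.0):
    """Bayes-style WORD/NONWORD decision of the kind used in REM-LD:
    the odds that the stimulus is a word are taken as the average of the
    likelihood ratios computed for similar lexical entries."""
    odds = float(np.mean(likelihood_ratios))
    return ("WORD" if odds > criterion else "NONWORD"), odds

# toy usage: a neighborhood of 5 similar entries, one matching strongly
print(lexical_decision([4.2, 0.6, 0.8, 0.9, 1.1]))   # -> ('WORD', 1.52)
print(lexical_decision([0.4, 0.7, 0.9, 0.8, 1.0]))   # -> ('NONWORD', 0.76)
```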

  18. Constraints on the Higgs boson width from off-shell production and decay to Z-boson pairs

    DOE PAGES

    Khachatryan, Vardan

    2014-07-03

    Constraints are presented on the total width of the recently discovered Higgs boson, ΓH, using its relative on-shell and off-shell production and decay rates to a pair of Z bosons, where one Z boson decays to an electron or muon pair, and the other to an electron, muon, or neutrino pair. Our analysis is based on the data collected by the CMS experiment at the LHC in 2011 and 2012, corresponding to integrated luminosities of 5.1 fb⁻¹ at a center-of-mass energy √s = 7 TeV and 19.7 fb⁻¹ at √s = 8 TeV. A simultaneous maximum likelihood fit to the measured kinematic distributions near the resonance peak and above the Z-boson pair production threshold leads to an upper limit on the Higgs boson width of ΓH < 22 MeV at a 95% confidence level, which is 5.4 times the expected value in the standard model at the measured mass of mH = 125.6 GeV.

  19. Identifying influential neighbors in animal flocking

    PubMed Central

    Jiang, Li; Giuggioli, Luca; Escobedo, Ramón; Sire, Clément; Han, Zhangang

    2017-01-01

    Schools of fish and flocks of birds can move together in synchrony and decide on new directions of movement in a seamless way. This is possible because group members constantly share directional information with their neighbors. Although detecting the directionality of other group members is known to be important to maintain cohesion, it is not clear how many neighbors each individual can simultaneously track and pay attention to, and what the spatial distribution of these influential neighbors is. Here, we address these questions on shoals of Hemigrammus rhodostomus, a species of fish exhibiting strong schooling behavior. We adopt a data-driven analysis technique based on the study of short-term directional correlations to identify which neighbors have the strongest influence over the participation of an individual in a collective U-turn event. We find that fish mainly react to one or two neighbors at a time. Moreover, we find no correlation between the distance rank of a neighbor and its likelihood to be influential. We interpret our results in terms of fish allocating sequential and selective attention to their neighbors. PMID:29161269
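
    A generic sketch of a short-term directional-correlation measure of the kind used in such analyses (the window length, lag grid, and simulated trajectories are assumptions, not the authors' exact estimator): the mean dot product between the unit heading of fish i at time t and that of fish j at time t + lag, whose peak lag indicates who tends to follow whom.

```python
import numpy as np

def directional_correlation(headings_i, headings_j, max_lag=20):
    """Mean dot product between unit heading vectors of fish i at time t
    and fish j at time t + lag, for a range of lags.  A peak at a
    positive lag suggests that j tends to follow i's direction changes."""
    def unit(v):
        return v / np.linalg.norm(v, axis=1, keepdims=True)
    hi, hj = unit(headings_i), unit(headings_j)
    lags = np.arange(-max_lag, max_lag + 1)
    corr = []
    for lag in lags:
        if lag >= 0:
            a, b = hi[: len(hi) - lag], hj[lag:]
        else:
            a, b = hi[-lag:], hj[: len(hj) + lag]
        corr.append(float(np.mean(np.sum(a * b, axis=1))))
    return lags, np.array(corr)

# toy usage: fish j copies fish i's direction 5 frames later
rng = np.random.default_rng(3)
angles = np.cumsum(rng.normal(0, 0.1, 500))
vi = np.column_stack([np.cos(angles), np.sin(angles)])
vj = np.roll(vi, 5, axis=0) + rng.normal(0, 0.05, (500, 2))
lags, corr = directional_correlation(vi, vj)
print("peak lag:", lags[np.argmax(corr)])
```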

  20. Constraints on the Higgs boson width from off-shell production and decay to Z-boson pairs

    NASA Astrophysics Data System (ADS)

    Khachatryan, V.; Sirunyan, A. M.; Tumasyan, A.; Adam, W.; Bergauer, T.; Dragicevic, M.; Erö, J.; Fabjan, C.; Friedl, M.; Frühwirth, R.; Ghete, V. M.; Hartl, C.; Hörmann, N.; Hrubec, J.; Jeitler, M.; Kiesenhofer, W.; Knünz, V.; Krammer, M.; Krätschmer, I.; Liko, D.; Mikulec, I.; Rabady, D.; Rahbaran, B.; Rohringer, H.; Schöfbeck, R.; Strauss, J.; Taurok, A.; Treberer-Treberspurg, W.; Waltenberger, W.; Wulz, C.-E.; Mossolov, V.; Shumeiko, N.; Suarez Gonzalez, J.; Alderweireldt, S.; Bansal, M.; Bansal, S.; Cornelis, T.; De Wolf, E. A.; Janssen, X.; Knutsson, A.; Luyckx, S.; Ochesanu, S.; Roland, B.; Rougny, R.; Van De Klundert, M.; Van Haevermaet, H.; Van Mechelen, P.; Van Remortel, N.; Van Spilbeeck, A.; Blekman, F.; Blyweert, S.; D'Hondt, J.; Daci, N.; Heracleous, N.; Keaveney, J.; Lowette, S.; Maes, M.; Olbrechts, A.; Python, Q.; Strom, D.; Tavernier, S.; Van Doninck, W.; Van Mulders, P.; Van Onsem, G. P.; Villella, I.; Caillol, C.; Clerbaux, B.; De Lentdecker, G.; Dobur, D.; Favart, L.; Gay, A. P. R.; Grebenyuk, A.; Léonard, A.; Mohammadi, A.; Perniè, L.; Reis, T.; Seva, T.; Thomas, L.; Vander Velde, C.; Vanlaer, P.; Wang, J.; Adler, V.; Beernaert, K.; Benucci, L.; Cimmino, A.; Costantini, S.; Crucy, S.; Dildick, S.; Fagot, A.; Garcia, G.; Mccartin, J.; Ocampo Rios, A. A.; Ryckbosch, D.; Salva Diblen, S.; Sigamani, M.; Strobbe, N.; Thyssen, F.; Tytgat, M.; Yazgan, E.; Zaganidis, N.; Basegmez, S.; Beluffi, C.; Bruno, G.; Castello, R.; Caudron, A.; Ceard, L.; Da Silveira, G. G.; Delaere, C.; du Pree, T.; Favart, D.; Forthomme, L.; Giammanco, A.; Hollar, J.; Jez, P.; Komm, M.; Lemaitre, V.; Nuttens, C.; Pagano, D.; Perrini, L.; Pin, A.; Piotrzkowski, K.; Popov, A.; Quertenmont, L.; Selvaggi, M.; Vidal Marono, M.; Vizan Garcia, J. M.; Beliy, N.; Caebergs, T.; Daubie, E.; Hammad, G. H.; Aldá Júnior, W. L.; Alves, G. A.; Brito, L.; Correa Martins Junior, M.; Dos Reis Martins, T.; Pol, M. E.; Carvalho, W.; Chinellato, J.; Custódio, A.; Da Costa, E. M.; De Jesus Damiao, D.; De Oliveira Martins, C.; Fonseca De Souza, S.; Malbouisson, H.; Matos Figueiredo, D.; Mundim, L.; Nogima, H.; Prado Da Silva, W. L.; Santaolalla, J.; Santoro, A.; Sznajder, A.; Tonelli Manganote, E. J.; Vilela Pereira, A.; Bernardes, C. A.; Fernandez Perez Tomei, T. R.; Gregores, E. M.; Mercadante, P. G.; Novaes, S. F.; Padula, Sandra S.; Aleksandrov, A.; Genchev, V.; Iaydjiev, P.; Marinov, A.; Piperov, S.; Rodozov, M.; Sultanov, G.; Vutova, M.; Dimitrov, A.; Glushkov, I.; Hadjiiska, R.; Kozhuharov, V.; Litov, L.; Pavlov, B.; Petkov, P.; Bian, J. G.; Chen, G. M.; Chen, H. S.; Chen, M.; Du, R.; Jiang, C. H.; Liang, D.; Liang, S.; Plestina, R.; Tao, J.; Wang, X.; Wang, Z.; Asawatangtrakuldee, C.; Ban, Y.; Guo, Y.; Li, Q.; Li, W.; Liu, S.; Mao, Y.; Qian, S. J.; Wang, D.; Zhang, L.; Zou, W.; Avila, C.; Chaparro Sierra, L. F.; Florez, C.; Gomez, J. P.; Gomez Moreno, B.; Sanabria, J. C.; Godinovic, N.; Lelas, D.; Polic, D.; Puljak, I.; Antunovic, Z.; Kovac, M.; Brigljevic, V.; Kadija, K.; Luetic, J.; Mekterovic, D.; Sudic, L.; Attikis, A.; Mavromanolakis, G.; Mousa, J.; Nicolaou, C.; Ptochos, F.; Razis, P. A.; Bodlak, M.; Finger, M.; Finger, M.; Assran, Y.; Ellithi Kamel, A.; Mahmoud, M. A.; Radi, A.; Kadastik, M.; Murumaa, M.; Raidal, M.; Tiko, A.; Eerola, P.; Fedi, G.; Voutilainen, M.; Härkönen, J.; Karimäki, V.; Kinnunen, R.; Kortelainen, M. 
J.; Lampén, T.; Lassila-Perini, K.; Lehti, S.; Lindén, T.; Luukka, P.; Mäenpää, T.; Peltola, T.; Tuominen, E.; Tuominiemi, J.; Tuovinen, E.; Wendland, L.; Tuuva, T.; Besancon, M.; Couderc, F.; Dejardin, M.; Denegri, D.; Fabbro, B.; Faure, J. L.; Favaro, C.; Ferri, F.; Ganjour, S.; Givernaud, A.; Gras, P.; Hamel de Monchenault, G.; Jarry, P.; Locci, E.; Malcles, J.; Rander, J.; Rosowsky, A.; Titov, M.; Baffioni, S.; Beaudette, F.; Busson, P.; Charlot, C.; Dahms, T.; Dalchenko, M.; Dobrzynski, L.; Filipovic, N.; Florent, A.; Granier de Cassagnac, R.; Machet, M.; Mastrolorenzo, L.; Miné, P.; Mironov, C.; Naranjo, I. N.; Nguyen, M.; Ochando, C.; Paganini, P.; Salerno, R.; Sauvan, J. b.; Sirois, Y.; Veelken, C.; Yilmaz, Y.; Zabi, A.; Agram, J.-L.; Andrea, J.; Aubin, A.; Bloch, D.; Brom, J.-M.; Chabert, E. C.; Collard, C.; Conte, E.; Fontaine, J.-C.; Gelé, D.; Goerlach, U.; Goetzmann, C.; Le Bihan, A.-C.; Van Hove, P.; Gadrat, S.; Beauceron, S.; Beaupere, N.; Boudoul, G.; Bouvier, E.; Brochet, S.; Carrillo Montoya, C. A.; Chasserat, J.; Chierici, R.; Contardo, D.; Depasse, P.; El Mamouni, H.; Fan, J.; Fay, J.; Gascon, S.; Gouzevitch, M.; Ille, B.; Kurca, T.; Lethuillier, M.; Mirabito, L.; Perries, S.; Ruiz Alvarez, J. D.; Sabes, D.; Sgandurra, L.; Sordini, V.; Vander Donckt, M.; Verdier, P.; Viret, S.; Xiao, H.; Tsamalaidze, Z.; Autermann, C.; Beranek, S.; Bontenackels, M.; Edelhoff, M.; Feld, L.; Hindrichs, O.; Klein, K.; Ostapchuk, A.; Perieanu, A.; Raupach, F.; Sammet, J.; Schael, S.; Weber, H.; Wittmer, B.; Zhukov, V.; Ata, M.; Dietz-Laursonn, E.; Duchardt, D.; Erdmann, M.; Fischer, R.; Güth, A.; Hebbeker, T.; Heidemann, C.; Hoepfner, K.; Klingebiel, D.; Knutzen, S.; Kreuzer, P.; Merschmeyer, M.; Meyer, A.; Millet, P.; Olschewski, M.; Padeken, K.; Papacz, P.; Reithler, H.; Schmitz, S. A.; Sonnenschein, L.; Teyssier, D.; Thüer, S.; Weber, M.; Cherepanov, V.; Erdogan, Y.; Flügge, G.; Geenen, H.; Geisler, M.; Haj Ahmad, W.; Hoehle, F.; Kargoll, B.; Kress, T.; Kuessel, Y.; Lingemann, J.; Nowack, A.; Nugent, I. M.; Perchalla, L.; Pooth, O.; Stahl, A.; Asin, I.; Bartosik, N.; Behr, J.; Behrenhoff, W.; Behrens, U.; Bell, A. J.; Bergholz, M.; Bethani, A.; Borras, K.; Burgmeier, A.; Cakir, A.; Calligaris, L.; Campbell, A.; Choudhury, S.; Costanza, F.; Diez Pardos, C.; Dooling, S.; Dorland, T.; Eckerlin, G.; Eckstein, D.; Eichhorn, T.; Flucke, G.; Garay Garcia, J.; Geiser, A.; Gunnellini, P.; Hauk, J.; Hellwig, G.; Hempel, M.; Horton, D.; Jung, H.; Kalogeropoulos, A.; Kasemann, M.; Katsas, P.; Kieseler, J.; Kleinwort, C.; Krücker, D.; Lange, W.; Leonard, J.; Lipka, K.; Lobanov, A.; Lohmann, W.; Lutz, B.; Mankel, R.; Marfin, I.; Melzer-Pellmann, I.-A.; Meyer, A. B.; Mnich, J.; Mussgiller, A.; Naumann-Emme, S.; Nayak, A.; Novgorodova, O.; Nowak, F.; Ntomari, E.; Perrey, H.; Pitzl, D.; Placakyte, R.; Raspereza, A.; Ribeiro Cipriano, P. M.; Ron, E.; Sahin, M. Ö.; Salfeld-Nebgen, J.; Saxena, P.; Schmidt, R.; Schoerner-Sadenius, T.; Schröder, M.; Seitz, C.; Spannagel, S.; Vargas Trevino, A. D. R.; Walsh, R.; Wissing, C.; Aldaya Martin, M.; Blobel, V.; Centis Vignali, M.; Draeger, A. r.; Erfle, J.; Garutti, E.; Goebel, K.; Görner, M.; Haller, J.; Hoffmann, M.; Höing, R. 
S.; Kirschenmann, H.; Klanner, R.; Kogler, R.; Lange, J.; Lapsien, T.; Lenz, T.; Marchesini, I.; Ott, J.; Peiffer, T.; Pietsch, N.; Pöhlsen, T.; Rathjens, D.; Sander, C.; Schettler, H.; Schleper, P.; Schlieckau, E.; Schmidt, A.; Seidel, M.; Sibille, J.; Sola, V.; Stadie, H.; Steinbrück, G.; Troendle, D.; Usai, E.; Vanelderen, L.; Barth, C.; Baus, C.; Berger, J.; Böser, C.; Butz, E.; Chwalek, T.; De Boer, W.; Descroix, A.; Dierlamm, A.; Feindt, M.; Frensch, F.; Giffels, M.; Hartmann, F.; Hauth, T.; Husemann, U.; Katkov, I.; Kornmayer, A.; Kuznetsova, E.; Lobelle Pardo, P.; Mozer, M. U.; Müller, Th.; Nürnberg, A.; Quast, G.; Rabbertz, K.; Ratnikov, F.; Röcker, S.; Simonis, H. J.; Stober, F. M.; Ulrich, R.; Wagner-Kuhr, J.; Wayand, S.; Weiler, T.; Wolf, R.; Anagnostou, G.; Daskalakis, G.; Geralis, T.; Giakoumopoulou, V. A.; Kyriakis, A.; Loukas, D.; Markou, A.; Markou, C.; Psallidas, A.; Topsis-Giotis, I.; Panagiotou, A.; Saoulidou, N.; Stiliaris, E.; Aslanoglou, X.; Evangelou, I.; Flouris, G.; Foudas, C.; Kokkas, P.; Manthos, N.; Papadopoulos, I.; Paradas, E.; Bencze, G.; Hajdu, C.; Hidas, P.; Horvath, D.; Sikler, F.; Veszpremi, V.; Vesztergombi, G.; Zsigmond, A. J.; Beni, N.; Czellar, S.; Karancsi, J.; Molnar, J.; Palinkas, J.; Szillasi, Z.; Raics, P.; Trocsanyi, Z. L.; Ujvari, B.; Swain, S. K.; Beri, S. B.; Bhatnagar, V.; Dhingra, N.; Gupta, R.; Bhawandeep, U.; Kalsi, A. K.; Kaur, M.; Mittal, M.; Nishu, N.; Singh, J. B.; Kumar, Ashok; Kumar, Arun; Ahuja, S.; Bhardwaj, A.; Choudhary, B. C.; Kumar, A.; Malhotra, S.; Naimuddin, M.; Ranjan, K.; Sharma, V.; Banerjee, S.; Bhattacharya, S.; Chatterjee, K.; Dutta, S.; Gomber, B.; Jain, Sa.; Jain, Sh.; Khurana, R.; Modak, A.; Mukherjee, S.; Roy, D.; Sarkar, S.; Sharan, M.; Abdulsalam, A.; Dutta, D.; Kailas, S.; Kumar, V.; Mohanty, A. K.; Pant, L. M.; Shukla, P.; Topkar, A.; Aziz, T.; Bhowmik, S.; Chatterjee, R. M.; Ganguly, S.; Ghosh, S.; Guchait, M.; Gurtu, A.; Kole, G.; Kumar, S.; Maity, M.; Majumder, G.; Mazumdar, K.; Mohanty, G. B.; Parida, B.; Sudhakar, K.; Wickramage, N.; Banerjee, S.; Dewanjee, R. K.; Dugad, S.; Bakhshiansohi, H.; Behnamian, H.; Etesami, S. M.; Fahim, A.; Goldouzian, R.; Jafari, A.; Khakzad, M.; Mohammadi Najafabadi, M.; Naseri, M.; Paktinat Mehdiabadi, S.; Safarzadeh, B.; Zeinali, M.; Felcini, M.; Grunewald, M.; Abbrescia, M.; Barbone, L.; Calabria, C.; Chhibra, S. S.; Colaleo, A.; Creanza, D.; De Filippis, N.; De Palma, M.; Fiore, L.; Iaselli, G.; Maggi, G.; Maggi, M.; My, S.; Nuzzo, S.; Pompili, A.; Pugliese, G.; Radogna, R.; Selvaggi, G.; Silvestris, L.; Singh, G.; Venditti, R.; Verwilligen, P.; Zito, G.; Abbiendi, G.; Benvenuti, A. C.; Bonacorsi, D.; Braibant-Giacomelli, S.; Brigliadori, L.; Campanini, R.; Capiluppi, P.; Castro, A.; Cavallo, F. R.; Codispoti, G.; Cuffiani, M.; Dallavalle, G. M.; Fabbri, F.; Fanfani, A.; Fasanella, D.; Giacomelli, P.; Grandi, C.; Guiducci, L.; Marcellini, S.; Masetti, G.; Montanari, A.; Navarria, F. L.; Perrotta, A.; Primavera, F.; Rossi, A. M.; Rovelli, T.; Siroli, G. P.; Tosi, N.; Travaglini, R.; Albergo, S.; Cappello, G.; Chiorboli, M.; Costa, S.; Giordano, F.; Potenza, R.; Tricomi, A.; Tuve, C.; Barbagli, G.; Ciulli, V.; Civinini, C.; D'Alessandro, R.; Focardi, E.; Gallo, E.; Gonzi, S.; Gori, V.; Lenzi, P.; Meschini, M.; Paoletti, S.; Sguazzoni, G.; Tropiano, A.; Benussi, L.; Bianco, S.; Fabbri, F.; Piccolo, D.; Ferro, F.; Lo Vetere, M.; Robutti, E.; Tosi, S.; Dinardo, M. E.; Fiorendi, S.; Gennai, S.; Gerosa, R.; Ghezzi, A.; Govoni, P.; Lucchini, M. T.; Malvezzi, S.; Manzoni, R. 
A.; Martelli, A.; Marzocchi, B.; Menasce, D.; Moroni, L.; Paganoni, M.; Pedrini, D.; Ragazzi, S.; Redaelli, N.; Tabarelli de Fatis, T.; Buontempo, S.; Cavallo, N.; Di Guida, S.; Fabozzi, F.; Iorio, A. O. M.; Lista, L.; Meola, S.; Merola, M.; Paolucci, P.; Azzi, P.; Bacchetta, N.; Bisello, D.; Branca, A.; Carlin, R.; Checchia, P.; Dall'Osso, M.; Dorigo, T.; Dosselli, U.; Galanti, M.; Gasparini, F.; Gasparini, U.; Giubilato, P.; Gozzelino, A.; Kanishchev, K.; Lacaprara, S.; Margoni, M.; Meneguzzo, A. T.; Pazzini, J.; Pozzobon, N.; Ronchese, P.; Simonetto, F.; Torassa, E.; Tosi, M.; Zotto, P.; Zucchetta, A.; Zumerle, G.; Gabusi, M.; Ratti, S. P.; Riccardi, C.; Salvini, P.; Vitulo, P.; Biasini, M.; Bilei, G. M.; Ciangottini, D.; Fanò, L.; Lariccia, P.; Mantovani, G.; Menichelli, M.; Romeo, F.; Saha, A.; Santocchia, A.; Spiezia, A.; Androsov, K.; Azzurri, P.; Bagliesi, G.; Bernardini, J.; Boccali, T.; Broccolo, G.; Castaldi, R.; Ciocci, M. A.; Dell'Orso, R.; Donato, S.; Fiori, F.; Foà, L.; Giassi, A.; Grippo, M. T.; Ligabue, F.; Lomtadze, T.; Martini, L.; Messineo, A.; Moon, C. S.; Palla, F.; Rizzi, A.; Savoy-Navarro, A.; Serban, A. T.; Spagnolo, P.; Squillacioti, P.; Tenchini, R.; Tonelli, G.; Venturi, A.; Verdini, P. G.; Vernieri, C.; Barone, L.; Cavallari, F.; D'imperio, G.; Del Re, D.; Diemoz, M.; Grassi, M.; Jorda, C.; Longo, E.; Margaroli, F.; Meridiani, P.; Micheli, F.; Nourbakhsh, S.; Organtini, G.; Paramatti, R.; Rahatlou, S.; Rovelli, C.; Santanastasio, F.; Soffi, L.; Traczyk, P.; Amapane, N.; Arcidiacono, R.; Argiro, S.; Arneodo, M.; Bellan, R.; Biino, C.; Cartiglia, N.; Casasso, S.; Costa, M.; Degano, A.; Demaria, N.; Finco, L.; Mariotti, C.; Maselli, S.; Migliore, E.; Monaco, V.; Musich, M.; Obertino, M. M.; Ortona, G.; Pacher, L.; Pastrone, N.; Pelliccioni, M.; Pinna Angioni, G. L.; Potenza, A.; Romero, A.; Ruspa, M.; Sacchi, R.; Solano, A.; Staiano, A.; Tamponi, U.; Belforte, S.; Candelise, V.; Casarsa, M.; Cossutti, F.; Della Ricca, G.; Gobbo, B.; La Licata, C.; Marone, M.; Montanino, D.; Schizzi, A.; Umer, T.; Zanetti, A.; Kim, T. J.; Chang, S.; Kropivnitskaya, A.; Nam, S. K.; Kim, D. H.; Kim, G. N.; Kim, M. S.; Kong, D. J.; Lee, S.; Oh, Y. D.; Park, H.; Sakharov, A.; Son, D. C.; Kim, J. Y.; Song, S.; Choi, S.; Gyun, D.; Hong, B.; Jo, M.; Kim, H.; Kim, Y.; Lee, B.; Lee, K. S.; Park, S. K.; Roh, Y.; Choi, M.; Kim, J. H.; Park, I. C.; Park, S.; Ryu, G.; Ryu, M. S.; Choi, Y.; Choi, Y. K.; Goh, J.; Kim, D.; Kwon, E.; Lee, J.; Seo, H.; Yu, I.; Juodagalvis, A.; Komaragiri, J. R.; Md Ali, M. A. B.; Castilla-Valdez, H.; De La Cruz-Burelo, E.; Heredia-de La Cruz, I.; Lopez-Fernandez, R.; Sanchez-Hernandez, A.; Carrillo Moreno, S.; Vazquez Valencia, F.; Pedraza, I.; Salazar Ibarguen, H. A.; Casimiro Linares, E.; Morelos Pineda, A.; Krofcheck, D.; Butler, P. H.; Reucroft, S.; Ahmad, A.; Ahmad, M.; Hassan, Q.; Hoorani, H. R.; Khalid, S.; Khan, W. A.; Khurshid, T.; Shah, M. A.; Shoaib, M.; Bialkowska, H.; Bluj, M.; Boimska, B.; Frueboes, T.; Górski, M.; Kazana, M.; Nawrocki, K.; Romanowska-Rybinska, K.; Szleper, M.; Zalewski, P.; Brona, G.; Bunkowski, K.; Cwiok, M.; Dominik, W.; Doroba, K.; Kalinowski, A.; Konecki, M.; Krolikowski, J.; Misiura, M.; Olszewski, M.; Wolszczak, W.; Bargassa, P.; Beirão Da Cruz E Silva, C.; Faccioli, P.; Ferreira Parracho, P. 
G.; Gallinaro, M.; Nguyen, F.; Rodrigues Antunes, J.; Seixas, J.; Varela, J.; Vischia, P.; Afanasiev, S.; Bunin, P.; Gavrilenko, M.; Golutvin, I.; Gorbunov, I.; Kamenev, A.; Karjavin, V.; Konoplyanikov, V.; Lanev, A.; Malakhov, A.; Matveev, V.; Moisenz, P.; Palichik, V.; Perelygin, V.; Shmatov, S.; Skatchkov, N.; Smirnov, V.; Zarubin, A.; Golovtsov, V.; Ivanov, Y.; Kim, V.; Levchenko, P.; Murzin, V.; Oreshkin, V.; Smirnov, I.; Sulimov, V.; Uvarov, L.; Vavilov, S.; Vorobyev, A.; Vorobyev, An.; Andreev, Yu.; Dermenev, A.; Gninenko, S.; Golubev, N.; Kirsanov, M.; Krasnikov, N.; Pashenkov, A.; Tlisov, D.; Toropin, A.; Epshteyn, V.; Gavrilov, V.; Lychkovskaya, N.; Popov, V.; Safronov, G.; Semenov, S.; Spiridonov, A.; Stolin, V.; Vlasov, E.; Zhokin, A.; Andreev, V.; Azarkin, M.; Dremin, I.; Kirakosyan, M.; Leonidov, A.; Mesyats, G.; Rusakov, S. V.; Vinogradov, A.; Belyaev, A.; Boos, E.; Bunichev, V.; Dubinin, M.; Dudko, L.; Ershov, A.; Gribushin, A.; Klyukhin, V.; Kodolova, O.; Lokhtin, I.; Obraztsov, S.; Petrushanko, S.; Savrin, V.; Azhgirey, I.; Bayshev, I.; Bitioukov, S.; Kachanov, V.; Kalinin, A.; Konstantinov, D.; Krychkine, V.; Petrov, V.; Ryutin, R.; Sobol, A.; Tourtchanovitch, L.; Troshin, S.; Tyurin, N.; Uzunian, A.; Volkov, A.; Adzic, P.; Ekmedzic, M.; Milosevic, J.; Rekovic, V.; Alcaraz Maestre, J.; Battilana, C.; Calvo, E.; Cerrada, M.; Chamizo Llatas, M.; Colino, N.; De La Cruz, B.; Delgado Peris, A.; Domínguez Vázquez, D.; Escalante Del Valle, A.; Fernandez Bedoya, C.; Fernández Ramos, J. P.; Flix, J.; Fouz, M. C.; Garcia-Abia, P.; Gonzalez Lopez, O.; Goy Lopez, S.; Hernandez, J. M.; Josa, M. I.; Merino, G.; Navarro De Martino, E.; Pérez-Calero Yzquierdo, A.; Puerta Pelayo, J.; Quintario Olmeda, A.; Redondo, I.; Romero, L.; Soares, M. S.; Albajar, C.; de Trocóniz, J. F.; Missiroli, M.; Moran, D.; Brun, H.; Cuevas, J.; Fernandez Menendez, J.; Folgueras, S.; Gonzalez Caballero, I.; Lloret Iglesias, L.; Brochero Cifuentes, J. A.; Cabrillo, I. J.; Calderon, A.; Duarte Campderros, J.; Fernandez, M.; Gomez, G.; Graziano, A.; Lopez Virto, A.; Marco, J.; Marco, R.; Martinez Rivero, C.; Matorras, F.; Munoz Sanchez, F. J.; Piedra Gomez, J.; Rodrigo, T.; Rodríguez-Marrero, A. Y.; Ruiz-Jimeno, A.; Scodellaro, L.; Vila, I.; Vilar Cortabitarte, R.; Abbaneo, D.; Auffray, E.; Auzinger, G.; Bachtis, M.; Baillon, P.; Ball, A. H.; Barney, D.; Benaglia, A.; Bendavid, J.; Benhabib, L.; Benitez, J. F.; Bernet, C.; Bianchi, G.; Bloch, P.; Bocci, A.; Bonato, A.; Bondu, O.; Botta, C.; Breuker, H.; Camporesi, T.; Cerminara, G.; Colafranceschi, S.; D'Alfonso, M.; d'Enterria, D.; Dabrowski, A.; David, A.; De Guio, F.; De Roeck, A.; De Visscher, S.; Dobson, M.; Dordevic, M.; Dupont-Sagorin, N.; Elliott-Peisert, A.; Eugster, J.; Franzoni, G.; Funk, W.; Gigi, D.; Gill, K.; Giordano, D.; Girone, M.; Glege, F.; Guida, R.; Gundacker, S.; Guthoff, M.; Hammer, J.; Hansen, M.; Harris, P.; Hegeman, J.; Innocente, V.; Janot, P.; Kousouris, K.; Krajczar, K.; Lecoq, P.; Lourenço, C.; Magini, N.; Malgeri, L.; Mannelli, M.; Marrouche, J.; Masetti, L.; Meijers, F.; Mersi, S.; Meschi, E.; Moortgat, F.; Morovic, S.; Mulders, M.; Musella, P.; Orsini, L.; Pape, L.; Perez, E.; Perrozzi, L.; Petrilli, A.; Petrucciani, G.; Pfeiffer, A.; Pierini, M.; Pimiä, M.; Piparo, D.; Plagge, M.; Racz, A.; Rolandi, G.; Rovere, M.; Sakulin, H.; Schäfer, C.; Schwick, C.; Sharma, A.; Siegrist, P.; Silva, P.; Simon, M.; Sphicas, P.; Spiga, D.; Steggemann, J.; Stieger, B.; Stoye, M.; Treille, D.; Tsirou, A.; Veres, G. I.; Vlimant, J. 
R.; Wardle, N.; Wöhri, H. K.; Wollny, H.; Zeuner, W. D.; Bertl, W.; Deiters, K.; Erdmann, W.; Horisberger, R.; Ingram, Q.; Kaestli, H. C.; Kotlinski, D.; Langenegger, U.; Renker, D.; Rohe, T.; Bachmair, F.; Bäni, L.; Bianchini, L.; Bortignon, P.; Buchmann, M. A.; Casal, B.; Chanon, N.; Deisher, A.; Dissertori, G.; Dittmar, M.; Donegà, M.; Dünser, M.; Eller, P.; Grab, C.; Hits, D.; Lustermann, W.; Mangano, B.; Marini, A. C.; Martinez Ruiz del Arbol, P.; Meister, D.; Mohr, N.; Nägeli, C.; Nessi-Tedaldi, F.; Pandolfi, F.; Pauss, F.; Peruzzi, M.; Quittnat, M.; Rebane, L.; Rossini, M.; Starodumov, A.; Takahashi, M.; Theofilatos, K.; Wallny, R.; Weber, H. A.; Amsler, C.; Canelli, M. F.; Chiochia, V.; De Cosa, A.; Hinzmann, A.; Hreus, T.; Kilminster, B.; Lange, C.; Millan Mejias, B.; Ngadiuba, J.; Robmann, P.; Ronga, F. J.; Taroni, S.; Verzetti, M.; Yang, Y.; Cardaci, M.; Chen, K. H.; Ferro, C.; Kuo, C. M.; Lin, W.; Lu, Y. J.; Volpe, R.; Yu, S. S.; Chang, P.; Chang, Y. H.; Chang, Y. W.; Chao, Y.; Chen, K. F.; Chen, P. H.; Dietz, C.; Grundler, U.; Hou, W.-S.; Kao, K. Y.; Lei, Y. J.; Liu, Y. F.; Lu, R.-S.; Majumder, D.; Petrakou, E.; Tzeng, Y. M.; Wilken, R.; Asavapibhop, B.; Srimanobhas, N.; Suwonjandee, N.; Adiguzel, A.; Bakirci, M. N.; Cerci, S.; Dozen, C.; Dumanoglu, I.; Eskut, E.; Girgis, S.; Gokbulut, G.; Gurpinar, E.; Hos, I.; Kangal, E. E.; Kayis Topaksu, A.; Onengut, G.; Ozdemir, K.; Ozturk, S.; Polatoz, A.; Sogut, K.; Sunar Cerci, D.; Tali, B.; Topakli, H.; Vergili, M.; Akin, I. V.; Bilin, B.; Bilmis, S.; Gamsizkan, H.; Karapinar, G.; Ocalan, K.; Sekmen, S.; Surat, U. E.; Yalvac, M.; Zeyrek, M.; Gülmez, E.; Isildak, B.; Kaya, M.; Kaya, O.; Bahtiyar, H.; Barlas, E.; Cankocak, K.; Vardarlı, F. I.; Yücel, M.; Levchuk, L.; Sorokin, P.; Brooke, J. J.; Clement, E.; Cussans, D.; Flacher, H.; Frazier, R.; Goldstein, J.; Grimes, M.; Heath, G. P.; Heath, H. F.; Jacob, J.; Kreczko, L.; Lucas, C.; Meng, Z.; Newbold, D. M.; Paramesvaran, S.; Poll, A.; Senkin, S.; Smith, V. J.; Williams, T.; Bell, K. W.; Belyaev, A.; Brew, C.; Brown, R. M.; Cockerill, D. J. A.; Coughlan, J. A.; Harder, K.; Harper, S.; Olaiya, E.; Petyt, D.; Shepherd-Themistocleous, C. H.; Thea, A.; Tomalin, I. R.; Womersley, W. J.; Worm, S. D.; Baber, M.; Bainbridge, R.; Buchmuller, O.; Burton, D.; Colling, D.; Cripps, N.; Cutajar, M.; Dauncey, P.; Davies, G.; Della Negra, M.; Dunne, P.; Ferguson, W.; Fulcher, J.; Futyan, D.; Gilbert, A.; Hall, G.; Iles, G.; Jarvis, M.; Karapostoli, G.; Kenzie, M.; Lane, R.; Lucas, R.; Lyons, L.; Magnan, A.-M.; Malik, S.; Mathias, B.; Nash, J.; Nikitenko, A.; Pela, J.; Pesaresi, M.; Petridis, K.; Raymond, D. M.; Rogerson, S.; Rose, A.; Seez, C.; Sharp, P.; Tapper, A.; Vazquez Acosta, M.; Virdee, T.; Cole, J. E.; Hobson, P. R.; Khan, A.; Kyberd, P.; Leggat, D.; Leslie, D.; Martin, W.; Reid, I. D.; Symonds, P.; Teodorescu, L.; Turner, M.; Dittmann, J.; Hatakeyama, K.; Kasmi, A.; Liu, H.; Scarborough, T.; Charaf, O.; Cooper, S. I.; Henderson, C.; Rumerio, P.; Avetisyan, A.; Bose, T.; Fantasia, C.; Heister, A.; Lawson, P.; Richardson, C.; Rohlf, J.; Sperka, D.; St. John, J.; Sulak, L.; Alimena, J.; Berry, E.; Bhattacharya, S.; Christopher, G.; Cutts, D.; Demiragli, Z.; Ferapontov, A.; Garabedian, A.; Heintz, U.; Kukartsev, G.; Laird, E.; Landsberg, G.; Luk, M.; Narain, M.; Segala, M.; Sinthuprasith, T.; Speer, T.; Swanson, J.; Breedon, R.; Breto, G.; Calderon De La Barca Sanchez, M.; Chauhan, S.; Chertok, M.; Conway, J.; Conway, R.; Cox, P. 
T.; Erbacher, R.; Gardner, M.; Ko, W.; Lander, R.; Miceli, T.; Mulhearn, M.; Pellett, D.; Pilot, J.; Ricci-Tam, F.; Searle, M.; Shalhout, S.; Smith, J.; Squires, M.; Stolp, D.; Tripathi, M.; Wilbur, S.; Yohay, R.; Cousins, R.; Everaerts, P.; Farrell, C.; Hauser, J.; Ignatenko, M.; Rakness, G.; Takasugi, E.; Valuev, V.; Weber, M.; Babb, J.; Burt, K.; Clare, R.; Ellison, J.; Gary, J. W.; Hanson, G.; Heilman, J.; Ivova Rikova, M.; Jandir, P.; Kennedy, E.; Lacroix, F.; Liu, H.; Long, O. R.; Luthra, A.; Malberti, M.; Nguyen, H.; Olmedo Negrete, M.; Shrinivas, A.; Sumowidagdo, S.; Wimpenny, S.; Andrews, W.; Branson, J. G.; Cerati, G. B.; Cittolin, S.; D'Agnolo, R. T.; Evans, D.; Holzner, A.; Kelley, R.; Klein, D.; Kovalskyi, D.; Lebourgeois, M.; Letts, J.; Macneill, I.; Olivito, D.; Padhi, S.; Palmer, C.; Pieri, M.; Sani, M.; Sharma, V.; Simon, S.; Sudano, E.; Tu, Y.; Vartak, A.; Welke, C.; Würthwein, F.; Yagil, A.; Yoo, J.; Barge, D.; Bradmiller-Feld, J.; Campagnari, C.; Danielson, T.; Dishaw, A.; Flowers, K.; Franco Sevilla, M.; Geffert, P.; George, C.; Golf, F.; Gouskos, L.; Incandela, J.; Justus, C.; Mccoll, N.; Richman, J.; Stuart, D.; To, W.; West, C.; Apresyan, A.; Bornheim, A.; Bunn, J.; Chen, Y.; Di Marco, E.; Duarte, J.; Mott, A.; Newman, H. B.; Pena, C.; Rogan, C.; Spiropulu, M.; Timciuc, V.; Wilkinson, R.; Xie, S.; Zhu, R. Y.; Azzolini, V.; Calamba, A.; Ferguson, T.; Iiyama, Y.; Paulini, M.; Russ, J.; Vogel, H.; Vorobiev, I.; Cumalat, J. P.; Ford, W. T.; Gaz, A.; Luiggi Lopez, E.; Nauenberg, U.; Smith, J. G.; Stenson, K.; Ulmer, K. A.; Wagner, S. R.; Alexander, J.; Chatterjee, A.; Chu, J.; Dittmer, S.; Eggert, N.; Mirman, N.; Nicolas Kaufman, G.; Patterson, J. R.; Ryd, A.; Salvati, E.; Skinnari, L.; Sun, W.; Teo, W. D.; Thom, J.; Thompson, J.; Tucker, J.; Weng, Y.; Winstrom, L.; Wittich, P.; Winn, D.; Abdullin, S.; Albrow, M.; Anderson, J.; Apollinari, G.; Bauerdick, L. A. T.; Beretvas, A.; Berryhill, J.; Bhat, P. C.; Burkett, K.; Butler, J. N.; Cheung, H. W. K.; Chlebana, F.; Cihangir, S.; Elvira, V. D.; Fisk, I.; Freeman, J.; Gottschalk, E.; Gray, L.; Green, D.; Grünendahl, S.; Gutsche, O.; Hanlon, J.; Hare, D.; Harris, R. M.; Hirschauer, J.; Hooberman, B.; Jindariani, S.; Johnson, M.; Joshi, U.; Kaadze, K.; Klima, B.; Kreis, B.; Kwan, S.; Linacre, J.; Lincoln, D.; Lipton, R.; Liu, T.; Lykken, J.; Maeshima, K.; Marraffino, J. M.; Martinez Outschoorn, V. I.; Maruyama, S.; Mason, D.; McBride, P.; Mishra, K.; Mrenna, S.; Musienko, Y.; Nahn, S.; Newman-Holmes, C.; O'Dell, V.; Prokofyev, O.; Sexton-Kennedy, E.; Sharma, S.; Soha, A.; Spalding, W. J.; Spiegel, L.; Taylor, L.; Tkaczyk, S.; Tran, N. V.; Uplegger, L.; Vaandering, E. W.; Vidal, R.; Whitbeck, A.; Whitmore, J.; Yang, F.; Acosta, D.; Avery, P.; Bourilkov, D.; Carver, M.; Cheng, T.; Curry, D.; Das, S.; De Gruttola, M.; Di Giovanni, G. P.; Field, R. D.; Fisher, M.; Furic, I. K.; Hugon, J.; Konigsberg, J.; Korytov, A.; Kypreos, T.; Low, J. F.; Matchev, K.; Milenovic, P.; Mitselmakher, G.; Muniz, L.; Rinkevicius, A.; Shchutska, L.; Skhirtladze, N.; Snowball, M.; Yelton, J.; Zakaria, M.; Hewamanage, S.; Linn, S.; Markowitz, P.; Martinez, G.; Rodriguez, J. L.; Adams, T.; Askew, A.; Bochenek, J.; Diamond, B.; Haas, J.; Hagopian, S.; Hagopian, V.; Johnson, K. F.; Prosper, H.; Veeraraghavan, V.; Weinberg, M.; Baarmand, M. M.; Hohlmann, M.; Kalakhety, H.; Yumiceva, F.; Adams, M. R.; Apanasevich, L.; Bazterra, V. E.; Berry, D.; Betts, R. R.; Bucinskaite, I.; Cavanaugh, R.; Evdokimov, O.; Gauthier, L.; Gerber, C. E.; Hofman, D. 
J.; Khalatyan, S.; Kurt, P.; Moon, D. H.; O'Brien, C.; Silkworth, C.; Turner, P.; Varelas, N.; Albayrak, E. A.; Bilki, B.; Clarida, W.; Dilsiz, K.; Duru, F.; Haytmyradov, M.; Merlo, J.-P.; Mermerkaya, H.; Mestvirishvili, A.; Moeller, A.; Nachtman, J.; Ogul, H.; Onel, Y.; Ozok, F.; Penzo, A.; Rahmat, R.; Sen, S.; Tan, P.; Tiras, E.; Wetzel, J.; Yetkin, T.; Yi, K.; Anderson, I.; Barnett, B. A.; Blumenfeld, B.; Bolognesi, S.; Fehling, D.; Gritsan, A. V.; Maksimovic, P.; Martin, C.; Sarica, U.; Swartz, M.; Xiao, M.; Baringer, P.; Bean, A.; Benelli, G.; Bruner, C.; Gray, J.; Kenny, R. P., III; Malek, M.; Murray, M.; Noonan, D.; Sanders, S.; Sekaric, J.; Stringer, R.; Wang, Q.; Wood, J. S.; Barfuss, A. F.; Chakaberia, I.; Ivanov, A.; Khalil, S.; Makouski, M.; Maravin, Y.; Saini, L. K.; Shrestha, S.; Svintradze, I.; Gronberg, J.; Lange, D.; Rebassoo, F.; Wright, D.; Baden, A.; Calvert, B.; Eno, S. C.; Gomez, J. A.; Hadley, N. J.; Kellogg, R. G.; Kolberg, T.; Lu, Y.; Marionneau, M.; Mignerey, A. C.; Pedro, K.; Skuja, A.; Tonjes, M. B.; Tonwar, S. C.; Apyan, A.; Barbieri, R.; Bauer, G.; Busza, W.; Cali, I. A.; Chan, M.; Di Matteo, L.; Dutta, V.; Gomez Ceballos, G.; Goncharov, M.; Gulhan, D.; Klute, M.; Lai, Y. S.; Lee, Y.-J.; Levin, A.; Luckey, P. D.; Ma, T.; Paus, C.; Ralph, D.; Roland, C.; Roland, G.; Stephans, G. S. F.; Stöckli, F.; Sumorok, K.; Velicanu, D.; Veverka, J.; Wyslouch, B.; Yang, M.; Zanetti, M.; Zhukova, V.; Dahmes, B.; Gude, A.; Kao, S. C.; Klapoetke, K.; Kubota, Y.; Mans, J.; Pastika, N.; Rusack, R.; Singovsky, A.; Tambe, N.; Turkewitz, J.; Acosta, J. G.; Oliveros, S.; Avdeeva, E.; Bloom, K.; Bose, S.; Claes, D. R.; Dominguez, A.; Gonzalez Suarez, R.; Keller, J.; Knowlton, D.; Kravchenko, I.; Lazo-Flores, J.; Malik, S.; Meier, F.; Snow, G. R.; Dolen, J.; Godshalk, A.; Iashvili, I.; Kharchilava, A.; Kumar, A.; Rappoccio, S.; Alverson, G.; Barberis, E.; Baumgartel, D.; Chasco, M.; Haley, J.; Massironi, A.; Morse, D. M.; Nash, D.; Orimoto, T.; Trocino, D.; Wang, R. j.; Wood, D.; Zhang, J.; Hahn, K. A.; Kubik, A.; Mucia, N.; Odell, N.; Pollack, B.; Pozdnyakov, A.; Schmitt, M.; Stoynev, S.; Sung, K.; Velasco, M.; Won, S.; Brinkerhoff, A.; Chan, K. M.; Drozdetskiy, A.; Hildreth, M.; Jessop, C.; Karmgard, D. J.; Kellams, N.; Lannon, K.; Luo, W.; Lynch, S.; Marinelli, N.; Pearson, T.; Planer, M.; Ruchti, R.; Valls, N.; Wayne, M.; Wolf, M.; Woodard, A.; Antonelli, L.; Brinson, J.; Bylsma, B.; Durkin, L. S.; Flowers, S.; Hill, C.; Hughes, R.; Kotov, K.; Ling, T. Y.; Puigh, D.; Rodenburg, M.; Smith, G.; Winer, B. L.; Wolfe, H.; Wulsin, H. W.; Driga, O.; Elmer, P.; Hebda, P.; Hunt, A.; Koay, S. A.; Lujan, P.; Marlow, D.; Medvedeva, T.; Mooney, M.; Olsen, J.; Piroué, P.; Quan, X.; Saka, H.; Stickland, D.; Tully, C.; Werner, J. S.; Zenz, S. C.; Zuranski, A.; Brownson, E.; Mendez, H.; Ramirez Vargas, J. E.; Alagoz, E.; Barnes, V. E.; Benedetti, D.; Bolla, G.; Bortoletto, D.; De Mattia, M.; Hu, Z.; Jha, M. K.; Jones, M.; Jung, K.; Kress, M.; Leonardo, N.; Lopes Pegna, D.; Maroussov, V.; Merkel, P.; Miller, D. H.; Neumeister, N.; Radburn-Smith, B. C.; Shi, X.; Shipsey, I.; Silvers, D.; Svyatkovskiy, A.; Wang, F.; Xie, W.; Xu, L.; Yoo, H. D.; Zablocki, J.; Zheng, Y.; Parashar, N.; Stupak, J.; Adair, A.; Akgun, B.; Ecklund, K. M.; Geurts, F. J. M.; Li, W.; Michlin, B.; Padley, B. 
P.; Redjimi, R.; Roberts, J.; Zabel, J.; Betchart, B.; Bodek, A.; Covarelli, R.; de Barbaro, P.; Demina, R.; Eshaq, Y.; Ferbel, T.; Garcia-Bellido, A.; Goldenzweig, P.; Han, J.; Harel, A.; Khukhunaishvili, A.; Petrillo, G.; Vishnevskiy, D.; Ciesielski, R.; Demortier, L.; Goulianos, K.; Lungu, G.; Mesropian, C.; Arora, S.; Barker, A.; Chou, J. P.; Contreras-Campana, C.; Contreras-Campana, E.; Duggan, D.; Ferencek, D.; Gershtein, Y.; Gray, R.; Halkiadakis, E.; Hidas, D.; Lath, A.; Panwalkar, S.; Park, M.; Patel, R.; Salur, S.; Schnetzer, S.; Somalwar, S.; Stone, R.; Thomas, S.; Thomassen, P.; Walker, M.; Rose, K.; Spanier, S.; York, A.; Bouhali, O.; Eusebi, R.; Flanagan, W.; Gilmore, J.; Kamon, T.; Khotilovich, V.; Krutelyov, V.; Montalvo, R.; Osipenkov, I.; Pakhotin, Y.; Perloff, A.; Roe, J.; Rose, A.; Safonov, A.; Sakuma, T.; Suarez, I.; Tatarinov, A.; Akchurin, N.; Cowden, C.; Damgov, J.; Dragoiu, C.; Dudero, P. R.; Faulkner, J.; Kovitanggoon, K.; Kunori, S.; Lee, S. W.; Libeiro, T.; Volobouev, I.; Appelt, E.; Delannoy, A. G.; Greene, S.; Gurrola, A.; Johns, W.; Maguire, C.; Mao, Y.; Melo, A.; Sharma, M.; Sheldon, P.; Snook, B.; Tuo, S.; Velkovska, J.; Arenton, M. W.; Boutle, S.; Cox, B.; Francis, B.; Goodell, J.; Hirosky, R.; Ledovskoy, A.; Li, H.; Lin, C.; Neu, C.; Wood, J.; Harr, R.; Karchin, P. E.; Kottachchi Kankanamge Don, C.; Lamichhane, P.; Sturdy, J.; Belknap, D. A.; Carlsmith, D.; Cepeda, M.; Dasu, S.; Duric, S.; Friis, E.; Hall-Wilton, R.; Herndon, M.; Hervé, A.; Klabbers, P.; Lanaro, A.; Lazaridis, C.; Levine, A.; Loveless, R.; Mohapatra, A.; Ojalvo, I.; Perry, T.; Pierro, G. A.; Polese, G.; Ross, I.; Sarangi, T.; Savin, A.; Smith, W. H.; Vuosalo, C.; Woods, N.; CMS Collaboration

    2014-09-01

    Constraints are presented on the total width of the recently discovered Higgs boson, ΓH, using its relative on-shell and off-shell production and decay rates to a pair of Z bosons, where one Z boson decays to an electron or muon pair, and the other to an electron, muon, or neutrino pair. The analysis is based on the data collected by the CMS experiment at the LHC in 2011 and 2012, corresponding to integrated luminosities of 5.1 fb⁻¹ at a center-of-mass energy √s = 7 TeV and 19.7 fb⁻¹ at √s = 8 TeV. A simultaneous maximum likelihood fit to the measured kinematic distributions near the resonance peak and above the Z-boson pair production threshold leads to an upper limit on the Higgs boson width of ΓH < 22 MeV at a 95% confidence level, which is 5.4 times the expected value in the standard model at the measured mass of mH = 125.6 GeV.

  1. VARIABLE SELECTION FOR REGRESSION MODELS WITH MISSING DATA

    PubMed Central

    Garcia, Ramon I.; Ibrahim, Joseph G.; Zhu, Hongtu

    2009-01-01

    We consider the variable selection problem for a class of statistical models with missing data, including missing covariate and/or response data. We investigate the smoothly clipped absolute deviation (SCAD) penalty and the adaptive LASSO and propose a unified model selection and estimation procedure for use in the presence of missing data. We develop a computationally attractive algorithm for simultaneously optimizing the penalized likelihood function and estimating the penalty parameters. In particular, we propose to use a model selection criterion, called the ICQ statistic, for selecting the penalty parameters. We show that the variable selection procedure based on ICQ automatically and consistently selects the important covariates and leads to efficient estimates with oracle properties. The methodology is very general and can be applied to numerous situations involving missing data, from covariates missing at random in arbitrary regression models to nonignorably missing longitudinal responses and/or covariates. Simulations are given to demonstrate the methodology and examine the finite sample performance of the variable selection procedures. Melanoma data from a cancer clinical trial are presented to illustrate the proposed methodology. PMID:20336190
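
    The ICQ criterion and the EM-type algorithm for missing data are not reproduced here; as a complete-data stand-in for the penalty-selection step, the sketch below fits L1-penalized logistic regressions over a grid of penalty strengths and keeps the fit minimizing a BIC-like criterion. The data and grid are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_l1_penalty(X, y, C_grid):
    """Fit L1-penalized logistic regressions over a grid of penalty
    strengths and pick the one minimizing a BIC-like criterion
    (a simple complete-data stand-in for an ICQ-type selector)."""
    n = len(y)
    best = None
    for C in C_grid:
        model = LogisticRegression(penalty="l1", solver="liblinear", C=C)
        model.fit(X, y)
        p = np.clip(model.predict_proba(X)[:, 1], 1e-12, 1 - 1e-12)
        loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
        df = np.count_nonzero(model.coef_) + 1      # active terms + intercept
        bic = -2 * loglik + df * np.log(n)
        if best is None or bic < best[0]:
            best = (bic, C, model)
    return best

# toy usage: 3 informative covariates out of 10
rng = np.random.default_rng(4)
X = rng.normal(size=(300, 10))
y = (X[:, :3] @ np.array([1.5, -1.0, 0.8]) + rng.normal(size=300) > 0).astype(int)
bic, C, model = select_l1_penalty(X, y, C_grid=[0.01, 0.05, 0.1, 0.5, 1.0])
print("chosen C:", C, "selected covariates:", np.flatnonzero(model.coef_))
```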

  2. Intimate partner violence among women in Spain: the impact of regional-level male unemployment and income inequality.

    PubMed

    Sanz-Barbero, Belén; Vives-Cases, Carmen; Otero-García, Laura; Muntaner, Carles; Torrubiano-Domínguez, Jordi; O'Campo, Patricia

    2015-12-01

    Intimate partner violence (IPV) against women is a complex worldwide public health problem. There is scarce research on the independent effect exerted on IPV by structural factors such as labour and economic policies, economic inequalities and gender inequality. To analyse the association, in Spain, between contextual variables of regional unemployment and income inequality and individual women's likelihood of IPV, independently of the women's characteristics. We conducted multilevel logistic regression to analyse cross-sectional data from the 2011 Spanish Macrosurvey of Gender-based Violence which included 7898 adult women. The first level of analysis was the individual women's characteristics and the second level was the region of residence. Of the survey participants, 12.2% reported lifetime IPV. The region of residence accounted for 3.5% of the total variability in IPV prevalence. We determined a direct association between regional male long-term unemployment and IPV likelihood (P = 0.007) and between the Gini Index for the regional income inequality and IPV likelihood (P < 0.001). Women residing in a region with higher gender-based income discrimination are at a lower likelihood of IPV than those residing in a region with low gender-based income discrimination (odds ratio = 0.64, 95% confidence intervals: 0.55-0.75). Growing regional unemployment rates and income inequalities increase women's likelihood of IPV. In times of economic downturn, like the current one in Spain, this association may translate into an increase in women's vulnerability to IPV. © The Author 2015. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.

  3. dPIRPLE: a joint estimation framework for deformable registration and penalized-likelihood CT image reconstruction using prior images

    NASA Astrophysics Data System (ADS)

    Dang, H.; Wang, A. S.; Sussman, Marc S.; Siewerdsen, J. H.; Stayman, J. W.

    2014-09-01

    Sequential imaging studies are conducted in many clinical scenarios. Prior images from previous studies contain a great deal of patient-specific anatomical information and can be used in conjunction with subsequent imaging acquisitions to maintain image quality while enabling radiation dose reduction (e.g., through sparse angular sampling, reduction in fluence, etc.). However, patient motion between images in such sequences results in misregistration between the prior image and the current anatomy. Existing prior-image-based approaches often include only a simple rigid registration step that can be insufficient for capturing complex anatomical motion, introducing detrimental effects in subsequent image reconstruction. In this work, we propose a joint framework that estimates the 3D deformation between an unregistered prior image and the current anatomy (based on a subsequent data acquisition) and reconstructs the current anatomical image using a model-based reconstruction approach that includes regularization based on the deformed prior image. This framework is referred to as deformable prior image registration, penalized-likelihood estimation (dPIRPLE). Central to this framework is the inclusion of a 3D B-spline-based free-form-deformation model into the joint registration-reconstruction objective function. The proposed framework is solved using a maximization strategy whereby alternating updates to the registration parameters and image estimates are applied, allowing for improvements in both the registration and the reconstruction throughout the optimization process. Cadaver experiments were conducted on a cone-beam CT testbench emulating a lung nodule surveillance scenario. Superior reconstruction accuracy and image quality were demonstrated using the dPIRPLE algorithm as compared to more traditional reconstruction methods including filtered backprojection, penalized-likelihood estimation (PLE), prior image penalized-likelihood estimation (PIPLE) without registration, and prior image penalized-likelihood estimation with rigid registration of a prior image (PIRPLE) over a wide range of sampling sparsity and exposure levels.
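
    dPIRPLE's 3D B-spline deformation model and CT forward model are far larger than what fits here; the toy 1D sketch below only illustrates the alternating-update pattern, in which a registration parameter (a simple shift, standing in for the deformation) and the image estimate are updated in turn against one joint objective. The penalty weight and all signals are invented.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
truth = np.exp(-0.5 * ((np.arange(128) - 70) / 6.0) ** 2)   # current anatomy
prior = np.roll(truth, -9)                                   # misregistered prior
data = truth + rng.normal(0, 0.05, truth.size)               # noisy measurements

beta = 2.0          # strength of the prior-image penalty (assumed value)
image = data.copy() # initial image estimate
t = 0.0             # initial shift (stand-in for deformation parameters)

def objective(img, t_shift):
    """Joint objective: data fidelity + penalty on deviation from the
    (shifted) prior image.  A 1D stand-in for a dPIRPLE-style cost."""
    moved_prior = nd_shift(prior, t_shift, order=1, mode="nearest")
    return np.sum((img - data) ** 2) + beta * np.sum((img - moved_prior) ** 2)

for _ in range(10):                      # alternating updates
    # registration step: best shift for the current image estimate
    t = minimize_scalar(lambda s: objective(image, s), bounds=(-20, 20),
                        method="bounded").x
    # reconstruction step: closed-form minimizer of this quadratic in the image
    moved_prior = nd_shift(prior, t, order=1, mode="nearest")
    image = (data + beta * moved_prior) / (1.0 + beta)

print(f"estimated shift: {t:.1f} (true misregistration was 9.0)")
```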

  4. Using a probabilistic approach in an ecological risk assessment simulation tool: test case for depleted uranium (DU).

    PubMed

    Fan, Ming; Thongsri, Tepwitoon; Axe, Lisa; Tyson, Trevor A

    2005-06-01

    A probabilistic approach was applied in an ecological risk assessment (ERA) to characterize risk and address uncertainty, employing Monte Carlo simulations for assessing parameter and risk probabilistic distributions. This simulation tool (ERA) includes a Windows-based interface, an interactive and modifiable database management system (DBMS) that addresses a food web at trophic levels, and a comprehensive evaluation of exposure pathways. To illustrate this model, ecological risks from depleted uranium (DU) exposure at the US Army Yuma Proving Ground (YPG) and Aberdeen Proving Ground (APG) were assessed and characterized. Probabilistic distributions showed that at YPG, a reduction in plant root weight is considered likely to occur (98% likelihood) from exposure to DU; for most terrestrial animals, the likelihood of adverse reproduction effects ranges from 0.1% to 44%. However, for the lesser long-nosed bat, the effects are expected to occur (>99% likelihood) through the reduction in size and weight of offspring. Based on available DU data for the firing range at APG, DU uptake will not likely affect survival of aquatic plants and animals (<0.1% likelihood). Based on field and laboratory studies conducted at APG and YPG on pocket mice, kangaroo rat, white-throated woodrat, deer, and milfoil, body burden concentrations observed fall into the distributions simulated at both sites.
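
    The tool's food-web and DU exposure-pathway models are not reproduced here; the sketch below shows only the probabilistic step itself: sample uncertain exposure parameters from assumed distributions, propagate them through a simple dose equation, and report the likelihood that an (assumed) toxicity reference value is exceeded.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000                      # Monte Carlo samples

# illustrative parameter distributions (not site data)
soil_conc = rng.lognormal(mean=np.log(50), sigma=0.6, size=n)   # mg/kg soil
intake_rate = rng.triangular(0.01, 0.03, 0.06, size=n)          # kg soil/day
body_weight = rng.normal(0.03, 0.005, size=n)                   # kg
transfer = rng.uniform(0.05, 0.25, size=n)                      # uptake fraction

dose = soil_conc * intake_rate * transfer / body_weight          # mg/kg-day
trv = 5.0                                # toxicity reference value (assumed)

likelihood = np.mean(dose > trv)
print(f"likelihood of exceeding the TRV: {100 * likelihood:.1f}%")
print("dose 5th/50th/95th percentiles:", np.percentile(dose, [5, 50, 95]).round(2))
```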

  5. Likelihood-based methods for evaluating principal surrogacy in augmented vaccine trials.

    PubMed

    Liu, Wei; Zhang, Bo; Zhang, Hui; Zhang, Zhiwei

    2017-04-01

    There is growing interest in assessing immune biomarkers, which are quick to measure and potentially predictive of long-term efficacy, as surrogate endpoints in randomized, placebo-controlled vaccine trials. This can be done under a principal stratification approach, with principal strata defined using a subject's potential immune responses to vaccine and placebo (the latter may be assumed to be zero). In this context, principal surrogacy refers to the extent to which vaccine efficacy varies across principal strata. Because a placebo recipient's potential immune response to vaccine is unobserved in a standard vaccine trial, augmented vaccine trials have been proposed to produce the information needed to evaluate principal surrogacy. This article reviews existing methods based on an estimated likelihood and a pseudo-score (PS) and proposes two new methods based on a semiparametric likelihood (SL) and a pseudo-likelihood (PL), for analyzing augmented vaccine trials. Unlike the PS method, the SL method does not require a model for missingness, which can be advantageous when immune response data are missing by happenstance. The SL method is shown to be asymptotically efficient, and it performs similarly to the PS and PL methods in simulation experiments. The PL method appears to have a computational advantage over the PS and SL methods.

  6. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
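
    When the step-size is 1, the successive-approximations procedure discussed here reduces to the familiar EM-type fixed-point update for a normal mixture; the sketch below implements that step-size-1 case for a two-component univariate mixture on simulated data. Step-sizes other than 1 would relax or over-relax the same fixed-point map, which is the case analyzed in the paper.

```python
import numpy as np
from scipy.stats import norm

def fixed_point_step(x, pi, mu1, mu2, s1, s2):
    """One step-size-1 iteration of the successive-approximation
    (EM-type) update for a two-component univariate normal mixture."""
    d1 = pi * norm.pdf(x, mu1, s1)
    d2 = (1 - pi) * norm.pdf(x, mu2, s2)
    r = d1 / (d1 + d2)                       # posterior weight of component 1
    pi = r.mean()
    mu1, mu2 = np.sum(r * x) / r.sum(), np.sum((1 - r) * x) / (1 - r).sum()
    s1 = np.sqrt(np.sum(r * (x - mu1) ** 2) / r.sum())
    s2 = np.sqrt(np.sum((1 - r) * (x - mu2) ** 2) / (1 - r).sum())
    return pi, mu1, mu2, s1, s2

# toy data from the mixture 0.4*N(-2, 1) + 0.6*N(3, 1.5)
rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(-2, 1.0, 400), rng.normal(3, 1.5, 600)])
params = (0.5, -1.0, 1.0, 1.0, 1.0)
for _ in range(200):
    params = fixed_point_step(x, *params)
print("pi, mu1, mu2, sigma1, sigma2 =", np.round(params, 2))
```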

  7. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  8. Algorithms of maximum likelihood data clustering with applications

    NASA Astrophysics Data System (ADS)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on the maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson correlation coefficients of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.
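
    A minimal sketch of a greedy merging search driven by a cluster log-likelihood of the Giada-Marsili form; the expression in the docstring is my reading of the paper's likelihood and should be treated as an assumption, with C the Pearson correlation matrix of the data:

```python
import numpy as np

def cluster_loglik(C, clusters):
    """Log-likelihood of a cluster structure, assumed to follow Giada & Marsili:
    each cluster s with n_s > 1 elements contributes
      0.5 * [ log(n_s / c_s) + (n_s - 1) * log((n_s**2 - n_s) / (n_s**2 - c_s)) ],
    where c_s is the sum of correlations C_ij over all pairs i, j in s."""
    total = 0.0
    for members in clusters:
        n = len(members)
        if n < 2:
            continue
        c = C[np.ix_(members, members)].sum()
        if c <= n or c >= n * n:      # guard against degenerate clusters
            continue
        total += 0.5 * (np.log(n / c) + (n - 1) * np.log((n * n - n) / (n * n - c)))
    return total

def greedy_merge(C):
    """Agglomerative search: repeatedly merge the pair of clusters that most
    increases the likelihood; stop when no merge improves it."""
    clusters = [[i] for i in range(C.shape[0])]
    improved = True
    while improved and len(clusters) > 1:
        improved = False
        base = cluster_loglik(C, clusters)
        best_gain, best_pair = 0.0, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                trial = [c for k, c in enumerate(clusters) if k not in (a, b)]
                trial.append(clusters[a] + clusters[b])
                gain = cluster_loglik(C, trial) - base
                if gain > best_gain:
                    best_gain, best_pair = gain, (a, b)
        if best_pair is not None:
            a, b = best_pair
            merged = clusters[a] + clusters[b]
            clusters = [c for k, c in enumerate(clusters) if k not in (a, b)] + [merged]
            improved = True
    return clusters
```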

  9. A Solution to Separation and Multicollinearity in Multiple Logistic Regression

    PubMed Central

    Shen, Jianzhao; Gao, Sujuan

    2010-01-01

    In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27–38) proposed a penalized likelihood estimator for generalized linear models, which was shown to reduce bias and the non-existence problem. Ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither approach solves both problems. In this paper, we propose a double penalized maximum likelihood estimator combining Firth’s penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study. PMID:20376286
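
    A minimal sketch of the double-penalized objective described above, combining a Firth-type Jeffreys penalty 0.5*log|I(beta)| with a ridge term and maximizing it numerically; the simulated data and ridge parameter are illustrative, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import minimize

def neg_double_penalized_loglik(beta, X, y, lam):
    """Negative of: logistic log-likelihood + 0.5*log|I(beta)| - 0.5*lam*||beta||^2."""
    eta = X @ beta
    p = 1.0 / (1.0 + np.exp(-eta))
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))
    W = p * (1.0 - p)                      # logistic variance weights
    info = X.T @ (W[:, None] * X)          # Fisher information I(beta)
    sign, logdet = np.linalg.slogdet(info)
    firth = 0.5 * logdet if sign > 0 else -np.inf
    ridge = 0.5 * lam * np.sum(beta ** 2)
    return -(loglik + firth - ridge)

rng = np.random.default_rng(1)
n, k = 80, 5
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
true_beta = np.array([-0.5, 1.0, 0.0, 0.5, -1.0, 0.0])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_beta)))

fit = minimize(neg_double_penalized_loglik, x0=np.zeros(k + 1),
               args=(X, y, 0.1), method="BFGS")
print(fit.x)
```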

  10. A Solution to Separation and Multicollinearity in Multiple Logistic Regression.

    PubMed

    Shen, Jianzhao; Gao, Sujuan

    2008-10-01

    In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27-38) proposed a penalized likelihood estimator for generalized linear models, which was shown to reduce bias and the non-existence problem. Ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither approach solves both problems. In this paper, we propose a double penalized maximum likelihood estimator combining Firth's penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study.

  11. Lung nodule malignancy prediction using multi-task convolutional neural network

    NASA Astrophysics Data System (ADS)

    Li, Xiuli; Kao, Yueying; Shen, Wei; Li, Xiang; Xie, Guotong

    2017-03-01

    In this paper, we investigated the problem of diagnostic lung nodule malignancy prediction using thoracic Computed Tomography (CT) screening. Unlike most existing studies, which classify nodules into two types (benign and malignant), we interpreted nodule malignancy prediction as a regression problem predicting a continuous malignancy level. We proposed a joint multi-task learning algorithm using a Convolutional Neural Network (CNN) to capture nodule heterogeneity by extracting discriminative features from alternatingly stacked layers. We trained a CNN regression model to predict the nodule malignancy, and designed a multi-task learning mechanism to simultaneously share knowledge among 9 different nodule characteristics (Subtlety, Calcification, Sphericity, Margin, Lobulation, Spiculation, Texture, Diameter and Malignancy), improving the final prediction result. Each CNN generates characteristic-specific feature representations, and multi-task learning is then applied to these features to predict the corresponding likelihood for each characteristic. We evaluated the proposed method on 2620 nodule CT scans from the LIDC-IDRI dataset with a 5-fold cross-validation strategy. The multi-task CNN regression achieved an RMSE of 0.830 and a mapped classification accuracy of 83.03%, compared with an RMSE of 0.894 and a mapped classification accuracy of 74.9% for single-task regression. Experiments show that the proposed method predicts the lung nodule malignancy likelihood effectively and outperforms state-of-the-art methods. The learning framework could easily be applied to other anomaly likelihood prediction problems, such as skin cancer and breast cancer. It demonstrates the potential of our method to assist radiologists in nodule staging assessment and individual therapeutic planning.
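
    A minimal PyTorch-style sketch of the shared-trunk, multi-head idea described above: one convolutional feature extractor feeding nine regression heads, one per nodule characteristic. The characteristic list comes from the abstract; layer sizes, patch size, and loss weighting are illustrative assumptions:

```python
import torch
import torch.nn as nn

CHARACTERISTICS = ["subtlety", "calcification", "sphericity", "margin",
                   "lobulation", "spiculation", "texture", "diameter", "malignancy"]

class MultiTaskNoduleNet(nn.Module):
    """Shared CNN trunk with one regression head per nodule characteristic."""
    def __init__(self, in_channels=1):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.heads = nn.ModuleDict({name: nn.Linear(32, 1) for name in CHARACTERISTICS})

    def forward(self, x):
        feats = self.trunk(x)
        return {name: head(feats).squeeze(-1) for name, head in self.heads.items()}

def multitask_loss(preds, targets):
    # Sum of per-characteristic MSE losses; per-task weights could be tuned.
    return sum(nn.functional.mse_loss(preds[k], targets[k]) for k in preds)

# Illustrative forward/backward pass on random 64x64 patches.
model = MultiTaskNoduleNet()
x = torch.randn(8, 1, 64, 64)
targets = {k: torch.rand(8) for k in CHARACTERISTICS}
loss = multitask_loss(model(x), targets)
loss.backward()
```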

  12. SATe-II: very fast and accurate simultaneous estimation of multiple sequence alignments and phylogenetic trees.

    PubMed

    Liu, Kevin; Warnow, Tandy J; Holder, Mark T; Nelesen, Serita M; Yu, Jiaye; Stamatakis, Alexandros P; Linder, C Randal

    2012-01-01

    Highly accurate estimation of phylogenetic trees for large data sets is difficult, in part because multiple sequence alignments must be accurate for phylogeny estimation methods to be accurate. Coestimation of alignments and trees has been attempted but currently only SATé estimates reasonably accurate trees and alignments for large data sets in practical time frames (Liu K., Raghavan S., Nelesen S., Linder C.R., Warnow T. 2009b. Rapid and accurate large-scale coestimation of sequence alignments and phylogenetic trees. Science. 324:1561-1564). Here, we present a modification to the original SATé algorithm that improves upon SATé (which we now call SATé-I) in terms of speed and of phylogenetic and alignment accuracy. SATé-II uses a different divide-and-conquer strategy than SATé-I and so produces smaller more closely related subsets than SATé-I; as a result, SATé-II produces more accurate alignments and trees, can analyze larger data sets, and runs more efficiently than SATé-I. Generally, SATé is a metamethod that takes an existing multiple sequence alignment method as an input parameter and boosts the quality of that alignment method. SATé-II-boosted alignment methods are significantly more accurate than their unboosted versions, and trees based upon these improved alignments are more accurate than trees based upon the original alignments. Because SATé-I used maximum likelihood (ML) methods that treat gaps as missing data to estimate trees and because we found a correlation between the quality of tree/alignment pairs and ML scores, we explored the degree to which SATé's performance depends on using ML with gaps treated as missing data to determine the best tree/alignment pair. We present two lines of evidence that using ML with gaps treated as missing data to optimize the alignment and tree produces very poor results. First, we show that the optimization problem where a set of unaligned DNA sequences is given and the output is the tree and alignment of those sequences that maximize likelihood under the Jukes-Cantor model is uninformative in the worst possible sense. For all inputs, all trees optimize the likelihood score. Second, we show that a greedy heuristic that uses GTR+Gamma ML to optimize the alignment and the tree can produce very poor alignments and trees. Therefore, the excellent performance of SATé-II and SATé-I is not because ML is used as an optimization criterion for choosing the best tree/alignment pair but rather due to the particular divide-and-conquer realignment techniques employed.

  13. Parameter-induced uncertainty quantification of a regional N2O and NO3 inventory using the biogeochemical model LandscapeDNDC

    NASA Astrophysics Data System (ADS)

    Haas, Edwin; Klatt, Steffen; Kraus, David; Werner, Christian; Ruiz, Ignacio Santa Barbara; Kiese, Ralf; Butterbach-Bahl, Klaus

    2014-05-01

    Numerical simulation models are increasingly used to estimate greenhouse gas emissions at site to regional and national scales and are outlined as the most advanced methodology (Tier 3) for national emission inventories in the framework of UNFCCC reporting. Process-based models incorporate the major processes of the carbon and nitrogen cycle of terrestrial ecosystems like arable land and grasslands and are thus thought to be widely applicable at various spatial and temporal scales. The high complexity of ecosystem processes mirrored by such models requires a large number of model parameters. Many of those parameters are lumped parameters describing simultaneously the effect of environmental drivers on, e.g., microbial community activity and individual processes. Thus, the precise quantification of true parameter states is often difficult or even impossible. As a result, model uncertainty does not originate solely from input uncertainty but is also subject to parameter-induced uncertainty. In this study we quantify regional parameter-induced model uncertainty on nitrous oxide (N2O) emissions and nitrate (NO3) leaching from arable soils of Saxony (Germany) using the biogeochemical model LandscapeDNDC. For this we calculate a regional inventory using a joint parameter distribution for key parameters describing microbial C and N turnover processes as obtained by a Bayesian calibration study. We representatively sampled 400 different parameter vectors from the discrete joint parameter distribution comprising approximately 400,000 parameter combinations and used these to calculate 400 individual realizations of the regional inventory. The spatial domain (represented by 4042 polygons) is set up with spatially explicit soil and climate information and a region-typical 3-year crop rotation consisting of winter wheat, rapeseed, and winter barley. Average N2O emission from arable soils in the state of Saxony across all 400 realizations was 1.43 ± 1.25 [kg N / ha] with a median value of 1.05 [kg N / ha]. Using the default IPCC emission factor approach (Tier 1) for direct emissions reveals a higher average N2O emission of 1.51 [kg N / ha] due to fertilizer use. In the regional uncertainty quantification the 20% likelihood range for N2O emissions is 0.79 - 1.37 [kg N / ha] (50% likelihood: 0.46 - 2.05 [kg N / ha]; 90% likelihood: 0.11 - 4.03 [kg N / ha]). Respective quantities were calculated for nitrate leaching. The method has proven its applicability to quantify parameter-induced uncertainty of simulated regional greenhouse gas emission and nitrate leaching inventories using process-based biogeochemical models.
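
    A minimal sketch of the ensemble percentile calculation implied above: draw parameter vectors, run the model once per vector, and report central likelihood ranges of the simulated flux. The parameter distribution and the stand-in model below are illustrative, not LandscapeDNDC:

```python
import numpy as np

rng = np.random.default_rng(7)
n_realizations = 400

# Illustrative stand-in for sampling 400 parameter vectors from a Bayesian
# joint posterior and running the biogeochemical model with each of them.
params = rng.multivariate_normal(mean=[1.0, 0.3],
                                 cov=[[0.04, 0.01], [0.01, 0.02]],
                                 size=n_realizations)

def run_model(theta):
    # Placeholder for a model run; returns a regional mean N2O flux [kg N/ha].
    return max(0.0, theta[0] + rng.normal(0, 0.5) * theta[1])

fluxes = np.array([run_model(p) for p in params])

for coverage in (20, 50, 90):
    lo, hi = np.percentile(fluxes, [50 - coverage / 2, 50 + coverage / 2])
    print(f"{coverage}% likelihood range: {lo:.2f} - {hi:.2f} kg N/ha")
print(f"mean {fluxes.mean():.2f} +/- {fluxes.std():.2f}, median {np.median(fluxes):.2f}")
```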

  14. Quantifying uncertainties in streamflow predictions through signature based inference of hydrological model parameters

    NASA Astrophysics Data System (ADS)

    Fenicia, Fabrizio; Reichert, Peter; Kavetski, Dmitri; Albert, Carlo

    2016-04-01

    The calibration of hydrological models based on signatures (e.g. Flow Duration Curves - FDCs) is often advocated as an alternative to model calibration based on the full time series of system responses (e.g. hydrographs). Signature based calibration is motivated by various arguments. From a conceptual perspective, calibration on signatures is a way to filter out errors that are difficult to represent when calibrating on the full time series. Such errors may for example occur when observed and simulated hydrographs are shifted, either on the "time" axis (i.e. left or right), or on the "streamflow" axis (i.e. above or below). These shifts may be due to errors in the precipitation input (time or amount), and if not properly accounted for in the likelihood function, may cause biased parameter estimates (e.g. estimated model parameters that do not reproduce the recession characteristics of a hydrograph). From a practical perspective, signature based calibration is seen as a possible solution for making predictions in ungauged basins. Where streamflow data are not available, it may in fact be possible to reliably estimate streamflow signatures. Previous research has for example shown how FDCs can be reliably estimated at ungauged locations based on climatic and physiographic influence factors. Typically, the goal of signature based calibration is not the prediction of the signatures themselves, but the prediction of the system responses. Ideally, the prediction of system responses should be accompanied by a reliable quantification of the associated uncertainties. Previous approaches for signature based calibration, however, do not allow reliable estimates of streamflow predictive distributions. Here, we illustrate how the Bayesian approach can be employed to obtain reliable streamflow predictive distributions based on signatures. A case study is presented, where a hydrological model is calibrated on FDCs and additional signatures. We propose an approach where the likelihood function for the signatures is derived from the likelihood for streamflow (rather than using an "ad-hoc" likelihood for the signatures as done in previous approaches). This likelihood is not easily tractable analytically and we therefore cannot apply "simple" MCMC methods. This numerical problem is solved using Approximate Bayesian Computation (ABC). Our results indicate that the proposed approach is suitable for producing reliable streamflow predictive distributions based on calibration to signature data. Moreover, our results provide indications on which signatures are more appropriate to represent the information content of the hydrograph.
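
    A minimal Approximate Bayesian Computation (rejection) sketch of signature-based calibration as described above: a parameter draw is accepted when its simulated flow duration curve is close to the observed one. The toy streamflow model, priors, and tolerance are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_streamflow(theta, n=365):
    # Toy stand-in for a hydrological model: lognormal flows controlled by theta.
    return rng.lognormal(mean=theta[0], sigma=theta[1], size=n)

def fdc(q, probs=np.linspace(0.05, 0.95, 19)):
    # Flow duration curve: flow quantiles at given exceedance probabilities.
    return np.quantile(q, 1.0 - probs)

q_obs = simulate_streamflow(np.array([0.5, 0.8]))   # pseudo-observations
s_obs = fdc(q_obs)

accepted = []
for _ in range(20000):
    theta = np.array([rng.uniform(-1, 2), rng.uniform(0.1, 2.0)])  # prior draw
    s_sim = fdc(simulate_streamflow(theta))
    # Accept when the signature distance is below the ABC tolerance.
    if np.sqrt(np.mean((np.log(s_sim) - np.log(s_obs)) ** 2)) < 0.15:
        accepted.append(theta)

accepted = np.array(accepted)
if len(accepted):
    print(f"accepted {len(accepted)} draws; posterior means: {accepted.mean(axis=0)}")
```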

  15. Analysis of Sequence Data Under Multivariate Trait-Dependent Sampling.

    PubMed

    Tao, Ran; Zeng, Donglin; Franceschini, Nora; North, Kari E; Boerwinkle, Eric; Lin, Dan-Yu

    2015-06-01

    High-throughput DNA sequencing allows for the genotyping of common and rare variants for genetic association studies. At the present time and for the foreseeable future, it is not economically feasible to sequence all individuals in a large cohort. A cost-effective strategy is to sequence those individuals with extreme values of a quantitative trait. We consider the design under which the sampling depends on multiple quantitative traits. Under such trait-dependent sampling, standard linear regression analysis can result in bias of parameter estimation, inflation of type I error, and loss of power. We construct a likelihood function that properly reflects the sampling mechanism and utilizes all available data. We implement a computationally efficient EM algorithm and establish the theoretical properties of the resulting maximum likelihood estimators. Our methods can be used to perform separate inference on each trait or simultaneous inference on multiple traits. We pay special attention to gene-level association tests for rare variants. We demonstrate the superiority of the proposed methods over standard linear regression through extensive simulation studies. We provide applications to the Cohorts for Heart and Aging Research in Genomic Epidemiology Targeted Sequencing Study and the National Heart, Lung, and Blood Institute Exome Sequencing Project.

  16. Epidemiological Implications of Host Biodiversity and Vector Biology: Key Insights from Simple Models.

    PubMed

    Dobson, Andrew D M; Auld, Stuart K J R

    2016-04-01

    Models used to investigate the relationship between biodiversity change and vector-borne disease risk often do not explicitly include the vector; they instead rely on a frequency-dependent transmission function to represent vector dynamics. However, differences between classes of vector (e.g., ticks and insects) can cause discrepancies in epidemiological responses to environmental change. Using a pair of disease models (mosquito- and tick-borne), we simulated substitutive and additive biodiversity change (where noncompetent hosts replaced or were added to competent hosts, respectively), while considering different relationships between vector and host densities. We found important differences between classes of vector, including an increased likelihood of amplified disease risk under additive biodiversity change in mosquito models, driven by higher vector biting rates. We also draw attention to more general phenomena, such as a negative relationship between initial infection prevalence in vectors and likelihood of dilution, and the potential for a rise in density of infected vectors to occur simultaneously with a decline in proportion of infected hosts. This has important implications; the density of infected vectors is the most valid metric for primarily zoonotic infections, while the proportion of infected hosts is more relevant for infections where humans are a primary host.

  17. Polynomial order selection in random regression models via penalizing adaptively the likelihood.

    PubMed

    Corrales, J D; Munilla, S; Cantet, R J C

    2015-08-01

    Orthogonal Legendre polynomials (LP) are used to model the shape of additive genetic and permanent environmental effects in random regression models (RRM). Frequently, the Akaike (AIC) and the Bayesian (BIC) information criteria are employed to select LP order. However, it has been theoretically shown that neither AIC nor BIC is simultaneously optimal in terms of consistency and efficiency. Thus, the goal was to introduce a method, 'penalizing adaptively the likelihood' (PAL), as a criterion to select LP order in RRM. Four simulated data sets and real data (60,513 records, 6675 Colombian Holstein cows) were employed. Nested models were fitted to the data, and AIC, BIC and PAL were calculated for all of them. Results showed that PAL and BIC identified with probability of one the true LP order for the additive genetic and permanent environmental effects, but AIC tended to favour over parameterized models. Conversely, when the true model was unknown, PAL selected the best model with higher probability than AIC. In the latter case, BIC never favoured the best model. To summarize, PAL selected a correct model order regardless of whether the 'true' model was within the set of candidates. © 2015 Blackwell Verlag GmbH.

  18. Coca chewing in prehistoric coastal Peru: dental evidence.

    PubMed

    Indriati, E; Buikstra, J E

    2001-03-01

    In this study, we describe the dental health of four prehistoric human populations from the southern coast of Peru, an area in which independent archaeological evidence suggests that the practice of coca-leaf chewing was relatively common. A repeated pattern of cervical-root caries accompanying root exposure was found on the buccal surfaces of the posterior dentition, coinciding with the typical placement of coca quids during mastication. To further examine the association between caries patterning and coca chewing, caries site characteristics of molar teeth were utilized as indicators for estimating the likelihood of coca chewing for adults within each of the study samples. Likelihood estimates were then compared with results of a test for coca use derived from hair samples from the same individuals. The hair and dental studies exhibited an 85.7% agreement. Thus, we have demonstrated the validity of a hard-tissue technique for identifying the presence of habitual coca-leaf chewing in ancient human remains, which is useful in archaeological contexts where hair is not preserved. These data can be used to explore the distribution of coca chewing in prehistoric times. Simultaneously, we document the dental health associated with this traditional Andean cultural practice. Copyright 2001 Wiley-Liss, Inc.

  19. Design of simplified maximum-likelihood receivers for multiuser CPM systems.

    PubMed

    Bing, Li; Bai, Baoming

    2014-01-01

    A class of simplified maximum-likelihood receivers designed for multiuser systems based on continuous phase modulation is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.

  20. A likelihood-based time series modeling approach for application in dendrochronology to examine the growth-climate relations and forest disturbance history.

    PubMed

    Lee, E Henry; Wickham, Charlotte; Beedlow, Peter A; Waschmann, Ronald S; Tingey, David T

    2017-10-01

    A time series intervention analysis (TSIA) of dendrochronological data to infer the tree growth-climate-disturbance relations and forest disturbance history is described. Maximum likelihood is used to estimate the parameters of a structural time series model with components for climate and forest disturbances (i.e., pests, diseases, fire). The statistical method is illustrated with a tree-ring width time series for a mature closed-canopy Douglas-fir stand on the west slopes of the Cascade Mountains of Oregon, USA that is impacted by Swiss needle cast disease caused by the foliar fungus, Phaecryptopus gaeumannii (Rhode) Petrak. The likelihood-based TSIA method is proposed for the field of dendrochronology to understand the interaction of temperature, water, and forest disturbances that are important in forest ecology and climate change studies.

  1. A single-index threshold Cox proportional hazard model for identifying a treatment-sensitive subset based on multiple biomarkers.

    PubMed

    He, Ye; Lin, Huazhen; Tu, Dongsheng

    2018-06-04

    In this paper, we introduce a single-index threshold Cox proportional hazard model to select and combine biomarkers to identify patients who may be sensitive to a specific treatment. A penalized smoothed partial likelihood is proposed to estimate the parameters in the model. A simple, efficient, and unified algorithm is presented to maximize this likelihood function. The estimators based on this likelihood function are shown to be consistent and asymptotically normal. Under mild conditions, the proposed estimators also achieve the oracle property. The proposed approach is evaluated through simulation analyses and application to the analysis of data from two clinical trials, one involving patients with locally advanced or metastatic pancreatic cancer and one involving patients with resectable lung cancer. Copyright © 2018 John Wiley & Sons, Ltd.

  2. Empirical likelihood-based confidence intervals for the sensitivity of a continuous-scale diagnostic test at a fixed level of specificity.

    PubMed

    Gengsheng Qin; Davis, Angela E; Jing, Bing-Yi

    2011-06-01

    For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.
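
    A minimal sketch of the quantity being interval-estimated above, the sensitivity of a continuous marker at the cutoff that achieves a fixed specificity, with a percentile bootstrap interval used here as a simpler stand-in for the paper's empirical likelihood intervals (all data are simulated):

```python
import numpy as np

rng = np.random.default_rng(11)
controls = rng.normal(0.0, 1.0, 200)          # non-diseased marker values
cases = rng.normal(1.5, 1.0, 150)             # diseased marker values
spec = 0.90                                    # fixed specificity

def sens_at_spec(controls, cases, spec):
    cutoff = np.quantile(controls, spec)       # cutoff giving the target specificity
    return np.mean(cases > cutoff)             # sensitivity at that cutoff

point = sens_at_spec(controls, cases, spec)

boot = [sens_at_spec(rng.choice(controls, len(controls), replace=True),
                     rng.choice(cases, len(cases), replace=True), spec)
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"sensitivity at {spec:.0%} specificity: {point:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```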

  3. Source and Message Factors in Persuasion: A Reply to Stiff's Critique of the Elaboration Likelihood Model.

    ERIC Educational Resources Information Center

    Petty, Richard E.; And Others

    1987-01-01

    Answers James Stiff's criticism of the Elaboration Likelihood Model (ELM) of persuasion. Corrects certain misperceptions of the ELM and criticizes Stiff's meta-analysis that compares ELM predictions with those derived from Kahneman's elastic capacity model. Argues that Stiff's presentation of the ELM and the conclusions he draws based on the data…

  4. Screening for postnatal depression in Chinese-speaking women using the Hong Kong translated version of the Edinburgh Postnatal Depression Scale.

    PubMed

    Chen, Helen; Bautista, Dianne; Ch'ng, Ying Chia; Li, Wenyun; Chan, Edwin; Rush, A John

    2013-06-01

    The Edinburgh Postnatal Depression Scale (EPDS) may not be a uniformly valid postnatal depression (PND) screen across populations. We evaluated the performance of a Chinese translation of the 10-item (HK-EPDS) and six-item (HK-EPDS-6) versions in post-partum women in Singapore. Chinese-speaking post-partum obstetric clinic patients were recruited for this study. They completed the HK-EPDS, from which we derived the six-item HK-EPDS-6. All women were clinically assessed for PND based on Diagnostic and Statistical Manual, Fourth Edition-Text Revision criteria. Receiver operating characteristic (ROC) curve analyses and likelihood ratio computations informed scale cutoff choices. Clinical fitness was judged by thresholds for internal consistency [α ≥ 0.70] and for diagnostic performance by true-positive rate (>85%), false-positive rate (≤10%), positive likelihood ratio (>1), negative likelihood ratio (<0.2), area under the ROC curve (AUC, ≥90%) and effect size (≥0.80). Based on clinical interview, the prevalence of PND was 6.2% in 487 post-partum women. HK-EPDS internal consistency was 0.84. At a cutoff of 13 or more, the true-positive rate was 86.7%, false-positive rate 3.3%, positive likelihood ratio 26.4, negative likelihood ratio 0.14, AUC 94.4% and effect size 0.81. For the HK-EPDS-6, internal consistency was 0.76. At a cutoff of 8 or more, we found a true-positive rate of 86.7%, false-positive rate 6.6%, positive likelihood ratio 13.2, negative likelihood ratio 0.14, AUC 92.9% and effect size 0.98. The HK-EPDS (cutoff ≥13) and HK-EPDS-6 (cutoff ≥8) are fit for PND screening in general-population post-partum women. The brief six-item version appears to be clinically suitable for quick screening in Chinese-speaking women. Copyright © 2013 Wiley Publishing Asia Pty Ltd.
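
    A minimal sketch of the screening statistics quoted above (true/false-positive rates and likelihood ratios) computed from a 2x2 table at a chosen cutoff; the counts below are illustrative and merely chosen to roughly reproduce the quoted rates:

```python
import numpy as np

def screening_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)                # true-positive rate
    spec = tn / (tn + fp)
    fpr = 1.0 - spec                     # false-positive rate
    lr_pos = sens / fpr if fpr > 0 else np.inf
    lr_neg = (1.0 - sens) / spec
    return dict(sensitivity=sens, false_positive_rate=fpr,
                LR_positive=lr_pos, LR_negative=lr_neg)

# Illustrative 2x2 table for a cutoff of >= 13 on the 10-item scale.
print(screening_metrics(tp=26, fp=15, fn=4, tn=442))
```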

  5. The Extended-Image Tracking Technique Based on the Maximum Likelihood Estimation

    NASA Technical Reports Server (NTRS)

    Tsou, Haiping; Yan, Tsun-Yee

    2000-01-01

    This paper describes an extended-image tracking technique based on maximum likelihood estimation. The target image is assumed to have a known profile covering more than one element of a focal plane detector array. It is assumed that the relative position between the imager and the target is changing with time and that the received target image has each of its pixels disturbed by independent additive white Gaussian noise. When a rotation-invariant movement between imager and target is considered, the maximum-likelihood-based image tracking technique described in this paper is a closed-loop structure capable of providing iterative updates of the movement estimate by calculating the loop feedback signals from a weighted correlation between the currently received target image and the previously estimated reference image in the transform domain. The movement estimate is then used to direct the imager to closely follow the moving target. This image tracking technique has many potential applications, including free-space optical communications and astronomy, where accurate and stabilized optical pointing is essential.
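
    A minimal sketch of the correlation step at the heart of the tracking loop described above: the integer shift of a noisy received image relative to a reference is estimated by maximizing their circular cross-correlation in the Fourier domain. This is a simplified stand-in for the full iterative maximum-likelihood tracker:

```python
import numpy as np

def estimate_shift(reference, received):
    """Return the (row, col) shift of `received` relative to `reference` that
    maximizes their circular cross-correlation, computed via FFT."""
    F_ref = np.fft.fft2(reference)
    F_rec = np.fft.fft2(received)
    corr = np.fft.ifft2(F_rec * np.conj(F_ref)).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map indices beyond the midpoint to negative shifts.
    return tuple(i - s if i > s // 2 else i for i, s in zip(idx, corr.shape))

rng = np.random.default_rng(5)
ref = np.zeros((64, 64))
ref[28:36, 28:36] = 1.0                                     # known target profile
received = np.roll(ref, (5, -3), axis=(0, 1)) + rng.normal(0, 0.2, ref.shape)
print(estimate_shift(ref, received))                        # expect roughly (5, -3)
```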

  6. Estimating metallicities with isochrone fits to photometric data of open clusters

    NASA Astrophysics Data System (ADS)

    Monteiro, H.; Oliveira, A. F.; Dias, W. S.; Caetano, T. C.

    2014-10-01

    The metallicity is a critical parameter that affects the correct determination of stellar clusters' fundamental characteristics and has important implications in Galactic and stellar evolution research. Fewer than 10% of the 2174 currently catalogued open clusters have their metallicity determined in the literature. In this work we present a method for estimating the metallicity of open clusters via non-subjective isochrone fitting using the cross-entropy global optimization algorithm applied to UBV photometric data. The free parameters (distance, reddening, age, and metallicity) are simultaneously determined by the fitting method. The fitting procedure uses weights for the observational data based on the estimation of membership likelihood for each star, which considers the observational magnitude limit, the density profile of stars as a function of radius from the center of the cluster, and the density of stars in multi-dimensional magnitude space. We present results of [Fe/H] for well-studied open clusters based on distinct UBV data sets. The [Fe/H] values obtained in the ten cases for which spectroscopic determinations were available in the literature agree, indicating that our method provides a good alternative for estimating [Fe/H] through objective isochrone fitting. Our results show that the typical precision is about 0.1 dex.
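
    A minimal sketch of the cross-entropy optimization loop used above for isochrone fitting: candidate parameter vectors are sampled from a Gaussian, scored by an objective function, and the sampling distribution is refitted to the elite fraction. The objective below is a toy placeholder for the weighted isochrone-to-data distance, and all numbers are illustrative:

```python
import numpy as np

def cross_entropy_fit(objective, mean, std, n_samples=200, elite_frac=0.1, n_iter=50):
    """Minimize `objective` over parameter vectors via the cross-entropy method."""
    rng = np.random.default_rng(0)
    for _ in range(n_iter):
        samples = rng.normal(mean, std, size=(n_samples, len(mean)))
        scores = np.array([objective(s) for s in samples])
        elite = samples[np.argsort(scores)[: int(elite_frac * n_samples)]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6  # refit, avoid collapse
    return mean

# Toy objective with a known minimum, standing in for the weighted distance of
# observed stars to a model isochrone in (distance kpc, E(B-V), log age, [Fe/H]).
target = np.array([2.3, 0.25, 8.9, 0.0])
objective = lambda theta: np.sum(((theta - target) / np.array([1.0, 0.1, 0.5, 0.2])) ** 2)

best = cross_entropy_fit(objective, mean=np.array([1.0, 0.1, 9.5, -0.5]),
                         std=np.array([2.0, 0.2, 1.0, 0.5]))
print(best)
```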

  7. GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no

    2013-11-10

    We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
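
    A minimal sketch of the Snake-style exploration described above: starting from a seed cell, frontier cells are visited in order of decreasing likelihood and the search stops once the remaining frontier falls below a threshold relative to the best cell found (2-D toy likelihood; the real code is N-dimensional and fully parallelized):

```python
import heapq

def explore_grid(loglike, seed, grid_shape, threshold=-10.0):
    """Visit grid cells in order of decreasing log-likelihood, starting from a
    seed cell, and stop once the best remaining frontier cell is more than
    `threshold` below the best log-likelihood found so far."""
    best = loglike(seed)
    visited, frontier = {}, [(-best, seed)]
    while frontier:
        neg_ll, cell = heapq.heappop(frontier)
        if cell in visited:
            continue
        best = max(best, -neg_ll)
        if -neg_ll < best + threshold:       # everything left is negligible
            break
        visited[cell] = -neg_ll
        for dim in range(len(grid_shape)):   # push axis-neighbours onto the heap
            for step in (-1, 1):
                nb = list(cell)
                nb[dim] += step
                nb = tuple(nb)
                if (all(0 <= nb[d] < grid_shape[d] for d in range(len(grid_shape)))
                        and nb not in visited):
                    heapq.heappush(frontier, (-loglike(nb), nb))
    return visited

# Toy 2-D Gaussian log-likelihood on a 100 x 100 grid.
loglike = lambda c: -0.5 * (((c[0] - 60) / 5.0) ** 2 + ((c[1] - 40) / 8.0) ** 2)
cells = explore_grid(loglike, seed=(50, 50), grid_shape=(100, 100))
print(f"evaluated {len(cells)} of 10000 cells")
```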

  8. SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction

    PubMed Central

    Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.

    2015-01-01

    Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831

  9. Integration within the Felsenstein equation for improved Markov chain Monte Carlo methods in population genetics

    PubMed Central

    Hey, Jody; Nielsen, Rasmus

    2007-01-01

    In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231

  10. Undernutrition among adults in India: the significance of individual-level and contextual factors impacting on the likelihood of underweight across sub-populations.

    PubMed

    Siddiqui, Md Zakaria; Donato, Ronald

    2017-01-01

    To investigate the extent to which individual-level as well as macro-level contextual factors influence the likelihood of underweight across adult sub-populations in India. Population-based cross-sectional survey included in India's National Family Health Survey conducted in 2005-06. We disaggregated into eight sub-populations. Multistage nationally representative household survey covering 99% of India's population. The survey covered 124 385 females aged 15-49 years and 74 369 males aged 15-54 years. A social gradient in underweight exists in India. Even after allowing for wealth status, differences in the predicted probability of underweight persisted based upon rurality, age/maturity and gender. We found that individual-level education lowered the likelihood of underweight for males, but found no statistical association for females. Paradoxically, rural young (15-24 years) females from more educated villages had a higher likelihood of underweight relative to those in less educated villages; but for rural mature (>24 years) females the opposite was the case. Christians had a significantly lower likelihood of underweight relative to other socio-religious groups (OR=0·53-0·80). Higher state-level inequality increased the likelihood of underweight across most population groups, while neighbourhood inequality exhibited a similar relationship for the rural young population subgroups only. Individual states/neighbourhoods accounted for 5-9% of the variation in the prediction of underweight. We found that rural young females represent a particularly highly vulnerable sub-population. Economic growth alone is unlikely to reduce the burden of malnutrition in India; accordingly, policy makers need to address the broader social determinants that contribute to higher underweight prevalence in specific demographic subgroups.

  11. Regression estimators for generic health-related quality of life and quality-adjusted life years.

    PubMed

    Basu, Anirban; Manca, Andrea

    2012-01-01

    To develop regression models for outcomes with truncated supports, such as health-related quality of life (HRQoL) data, and account for features typical of such data such as a skewed distribution, spikes at 1 or 0, and heteroskedasticity. Regression estimators based on features of the Beta distribution. First, both a single equation and a 2-part model are presented, along with estimation algorithms based on maximum-likelihood, quasi-likelihood, and Bayesian Markov-chain Monte Carlo methods. A novel Bayesian quasi-likelihood estimator is proposed. Second, a simulation exercise is presented to assess the performance of the proposed estimators against ordinary least squares (OLS) regression for a variety of HRQoL distributions that are encountered in practice. Finally, the performance of the proposed estimators is assessed by using them to quantify the treatment effect on QALYs in the EVALUATE hysterectomy trial. Overall model fit is studied using several goodness-of-fit tests such as Pearson's correlation test, link and reset tests, and a modified Hosmer-Lemeshow test. The simulation results indicate that the proposed methods are more robust in estimating covariate effects than OLS, especially when the effects are large or the HRQoL distribution has a large spike at 1. Quasi-likelihood techniques are more robust than maximum likelihood estimators. When applied to the EVALUATE trial, all but the maximum likelihood estimators produce unbiased estimates of the treatment effect. One and 2-part Beta regression models provide flexible approaches to regress the outcomes with truncated supports, such as HRQoL, on covariates, after accounting for many idiosyncratic features of the outcomes distribution. This work will provide applied researchers with a practical set of tools to model outcomes in cost-effectiveness analysis.

  12. Network versus portfolio structure in financial systems

    NASA Astrophysics Data System (ADS)

    Kobayashi, Teruyoshi

    2013-10-01

    The question of how to stabilize financial systems has attracted considerable attention since the global financial crisis of 2007-2009. Recently, Beale et al. [Proc. Natl. Acad. Sci. USA 108, 12647 (2011)] demonstrated that higher portfolio diversity among banks would reduce systemic risk by decreasing the risk of simultaneous defaults at the expense of a higher likelihood of individual defaults. In practice, however, a bank default has an externality in that it undermines other banks’ balance sheets. This paper explores how each of these different sources of risk, simultaneity risk and externality, contributes to systemic risk. The results show that the allocation of external assets that minimizes systemic risk varies with the topology of the financial network as long as asset returns have negative correlations. In the model, a well-known centrality measure, PageRank, reflects an appropriately defined “infectiveness” of a bank. An important result is that the most infective bank needs not always to be the safest bank. Under certain circumstances, the most infective node should act as a firewall to prevent large-scale collective defaults. The introduction of a counteractive portfolio structure will significantly reduce systemic risk.

  13. A scaling transformation for classifier output based on likelihood ratio: Applications to a CAD workstation for diagnosis of breast cancer

    PubMed Central

    Horsch, Karla; Pesce, Lorenzo L.; Giger, Maryellen L.; Metz, Charles E.; Jiang, Yulei

    2012-01-01

    Purpose: The authors developed scaling methods that monotonically transform the output of one classifier to the “scale” of another. Such transformations affect the distribution of classifier output while leaving the ROC curve unchanged. In particular, they investigated transformations between radiologists and computer classifiers, with the goal of addressing the problem of comparing and interpreting case-specific values of output from two classifiers. Methods: Using both simulated and radiologists’ rating data of breast imaging cases, the authors investigated a likelihood-ratio-scaling transformation, based on “matching” classifier likelihood ratios. For comparison, three other scaling transformations were investigated that were based on matching classifier true positive fraction, false positive fraction, or cumulative distribution function, respectively. The authors explored modifying the computer output to reflect the scale of the radiologist, as well as modifying the radiologist’s ratings to reflect the scale of the computer. They also evaluated how dataset size affects the transformations. Results: When ROC curves of two classifiers differed substantially, the four transformations were found to be quite different. The likelihood-ratio scaling transformation was found to vary widely from radiologist to radiologist. Similar results were found for the other transformations. Our simulations explored the effect of database sizes on the accuracy of the estimation of our scaling transformations. Conclusions: The likelihood-ratio-scaling transformation that the authors have developed and evaluated was shown to be capable of transforming computer and radiologist outputs to a common scale reliably, thereby allowing the comparison of the computer and radiologist outputs on the basis of a clinically relevant statistic. PMID:22559651

  14. The Hospital-Acquired Conditions (HAC) reduction program: using cranberry treatment to reduce catheter-associated urinary tract infections and avoid Medicare payment reduction penalties.

    PubMed

    Saitone, T L; Sexton, R J; Sexton Ward, A

    2018-01-01

    The Affordable Care Act (ACA) established the Hospital-Acquired Condition (HAC) Reduction Program. The Centers for Medicare and Medicaid Services (CMS) established a total HAC scoring methodology to rank hospitals based upon their HAC performance. Hospitals that rank in the lowest quartile based on their HAC score are subject to a 1% reduction in their total Medicare reimbursements. In FY 2017, 769 hospitals incurred payment reductions totaling $430 million. This study analyzes how improvements in the rate of catheter-associated urinary tract infections (CAUTI), based on the implementation of a cranberry-treatment regimen, impact hospitals' HAC scores and likelihood of avoiding the Medicare-reimbursement penalty. A simulation model is developed and implemented using public data from the CMS' Hospital Compare website to determine how hospitals' unilateral and simultaneous adoption of cranberry to improve CAUTI outcomes can affect HAC scores and the likelihood of a hospital incurring the Medicare payment reduction, given results on cranberry effectiveness in preventing CAUTI based on scientific trials. The simulation framework can be adapted to consider other initiatives to improve hospitals' HAC scores. Nearly all simulated hospitals improved their overall HAC score by adopting cranberry as a CAUTI preventative, assuming mean effectiveness from scientific trials. Many hospitals with HAC scores in the lowest quartile of the HAC-score distribution and subject to Medicare reimbursement reductions can improve their scores sufficiently through adopting a cranberry-treatment regimen to avoid payment reduction. The study was unable to replicate exactly the data used by CMS to establish HAC scores for FY 2018. The study assumes that hospitals subject to the Medicare payment reduction were not using cranberry as a prophylactic treatment for their catheterized patients, but is unable to confirm that this is true in all cases. The study also assumes that hospitalized catheter patients would be able to consume cranberry in either juice or capsule form, but this may not be true in 100% of cases. Most hospitals can improve their HAC scores and many can avoid Medicare reimbursement reductions if they are able to attain a percentage reduction in CAUTI comparable to that documented for cranberry-treatment regimes in the existing literature.

  15. Efficient computation of the phylogenetic likelihood function on multi-gene alignments and multi-core architectures.

    PubMed

    Stamatakis, Alexandros; Ott, Michael

    2008-12-27

    The continuous accumulation of sequence data, for example, due to novel wet-laboratory techniques such as pyrosequencing, coupled with the increasing popularity of multi-gene phylogenies and emerging multi-core processor architectures that face problems of cache congestion, poses new challenges with respect to the efficient computation of the phylogenetic maximum-likelihood (ML) function. Here, we propose two approaches that can significantly speed up likelihood computations that typically represent over 95 per cent of the computational effort conducted by current ML or Bayesian inference programs. Initially, we present a method and an appropriate data structure to efficiently compute the likelihood score on 'gappy' multi-gene alignments. By 'gappy' we denote sampling-induced gaps owing to missing sequences in individual genes (partitions), i.e. not real alignment gaps. A first proof-of-concept implementation in RAXML indicates that this approach can accelerate inferences on large and gappy alignments by approximately one order of magnitude. Moreover, we present insights and initial performance results on multi-core architectures obtained during the transition from an OpenMP-based to a Pthreads-based fine-grained parallelization of the ML function.

  16. Empirical likelihood method for non-ignorable missing data problems.

    PubMed

    Guan, Zhong; Qin, Jing

    2017-01-01

    The missing response problem is ubiquitous in survey sampling, medical, social science and epidemiology studies. It is well known that non-ignorable missingness is the most difficult missing data problem, in which the missingness of a response depends on its own value. In the statistical literature, unlike the ignorable missing data problem, few papers on non-ignorable missing data are available except for fully parametric model-based approaches. In this paper we study a semiparametric model for non-ignorable missing data in which the missing probability is known up to some parameters, but the underlying distributions are not specified. By employing Owen's (1988) empirical likelihood method we can obtain the constrained maximum empirical likelihood estimators of the parameters in the missing probability and the mean response, which are shown to be asymptotically normal. Moreover, the likelihood ratio statistic can be used to test whether the missingness of the responses is non-ignorable or completely at random. The theoretical results are confirmed by a simulation study. As an illustration, the analysis of a real AIDS trial data set shows that the missingness of CD4 counts around two years is non-ignorable and the sample mean based on observed data only is biased.

  17. A hybrid model for combining case-control and cohort studies in systematic reviews of diagnostic tests

    PubMed Central

    Chen, Yong; Liu, Yulun; Ning, Jing; Cormier, Janice; Chu, Haitao

    2014-01-01

    Systematic reviews of diagnostic tests often involve a mixture of case-control and cohort studies. The standard methods for evaluating diagnostic accuracy only focus on sensitivity and specificity and ignore the information on disease prevalence contained in cohort studies. Consequently, such methods cannot provide estimates of measures related to disease prevalence, such as population averaged or overall positive and negative predictive values, which reflect the clinical utility of a diagnostic test. In this paper, we propose a hybrid approach that jointly models the disease prevalence along with the diagnostic test sensitivity and specificity in cohort studies, and the sensitivity and specificity in case-control studies. In order to overcome the potential computational difficulties in the standard full likelihood inference of the proposed hybrid model, we propose an alternative inference procedure based on the composite likelihood. Such composite likelihood based inference does not suffer computational problems and maintains high relative efficiency. In addition, it is more robust to model mis-specifications compared to the standard full likelihood inference. We apply our approach to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma. PMID:25897179

  18. Clinicians' practice environment is associated with a higher likelihood of recommending cesarean deliveries.

    PubMed

    Cheng, Yvonne W; Snowden, Jonathan M; Handler, Stephanie; Tager, Ira B; Hubbard, Alan; Caughey, Aaron B

    2014-08-01

    Little data exist regarding clinicians' role in the rising annual incidence rate of cesarean delivery in the US. We aimed to examine if clinicians' practice environment is associated with recommending cesarean deliveries. This is a survey study of clinicians who practice obstetrics in the US. This survey included eight clinical vignettes and 27 questions regarding clinicians' practice environment. Chi-square test and multivariable logistic regression were used for statistical comparison. Of 27 675 survey links sent, 3646 clinicians received and opened the survey electronically, and 1555 (43%) participated and 1486 (94%) completed the survey. Clinicians were categorized into three groups based on eight common obstetric vignettes as: more likely (n = 215), average likelihood (n = 1099), and less likely (n = 168) to recommend cesarean. Clinician environment factors associated with a higher likelihood of recommending cesarean included Laborists/Hospitalists practice model (p < 0.001), as-needed anesthesia support (p = 0.003), and rural/suburban practice setting (p < 0.001). We identified factors in clinicians' environment associated with their likelihood of recommending cesarean delivery. The decision to recommend cesarean delivery is a complicated one and is likely not solely based on patient factors.

  19. Maximal likelihood correspondence estimation for face recognition across pose.

    PubMed

    Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang

    2014-10-01

    Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods have been proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems in previous image matching-based correspondence learning methods: 1) failure to fully exploit face-specific structure information in correspondence estimation and 2) failure to learn personalized correspondence for each probe image. To this end, we first build a model, termed as morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on the maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using a linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in a complex wild environment, i.e., the labeled face in the wild database.

  20. Pseudomonas aeruginosa dose response and bathing water infection.

    PubMed

    Roser, D J; van den Akker, B; Boase, S; Haas, C N; Ashbolt, N J; Rice, S A

    2014-03-01

    Pseudomonas aeruginosa is the opportunistic pathogen mostly implicated in folliculitis and acute otitis externa in pools and hot tubs. Nevertheless, infection risks remain poorly quantified. This paper reviews disease aetiologies and bacterial skin colonization science to advance dose-response theory development. Three model forms are identified for predicting disease likelihood from pathogen density. Two are based on Furumoto & Mickey's exponential 'single-hit' model and predict infection likelihood and severity (lesions/m2), respectively. 'Third-generation', mechanistic, dose-response algorithm development is additionally scoped. The proposed formulation integrates dispersion, epidermal interaction, and follicle invasion. The review also details uncertainties needing consideration which pertain to water quality, outbreaks, exposure time, infection sites, biofilms, cerumen, environmental factors (e.g. skin saturation, hydrodynamics), and whether P. aeruginosa is endogenous or exogenous. The review's findings are used to propose a conceptual infection model and identify research priorities including pool dose-response modelling, epidermis ecology and infection likelihood-based hygiene management.
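
    A minimal sketch of the exponential 'single-hit' dose-response form mentioned above, P(infection) = 1 - exp(-r * dose), together with a toy maximum-likelihood fit of r from binary outcome data; the doses, outcomes, and fitted value are illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def p_infect(dose, r):
    """Exponential single-hit model: each organism independently initiates
    infection with probability r, so P(infection) = 1 - exp(-r * dose)."""
    return 1.0 - np.exp(-r * dose)

# Illustrative exposure doses (CFU) and observed infection outcomes (0/1).
dose = np.array([1e1, 1e2, 1e3, 1e4, 1e5, 1e2, 1e3, 1e4])
outcome = np.array([0, 0, 1, 1, 1, 0, 0, 1])

def neg_loglik(log_r):
    p = np.clip(p_infect(dose, np.exp(log_r)), 1e-12, 1 - 1e-12)
    return -np.sum(outcome * np.log(p) + (1 - outcome) * np.log(1 - p))

fit = minimize_scalar(neg_loglik, bounds=(-20, 0), method="bounded")
r_hat = np.exp(fit.x)
print(f"estimated r = {r_hat:.2e}, P(infection) at 1e3 CFU = {p_infect(1e3, r_hat):.2f}")
```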

  1. A quantum framework for likelihood ratios

    NASA Astrophysics Data System (ADS)

    Bond, Rachael L.; He, Yang-Hui; Ormerod, Thomas C.

    The ability to calculate precise likelihood ratios is fundamental to science, from Quantum Information Theory through to Quantum State Estimation. However, there is no assumption-free statistical methodology to achieve this. For instance, in the absence of data relating to covariate overlap, the widely used Bayes’ theorem either defaults to the marginal probability driven “naive Bayes’ classifier”, or requires the use of compensatory expectation-maximization techniques. This paper takes an information-theoretic approach in developing a new statistical formula for the calculation of likelihood ratios based on the principles of quantum entanglement, and demonstrates that Bayes’ theorem is a special case of a more general quantum mechanical expression.

  2. Inferring the parameters of a Markov process from snapshots of the steady state

    NASA Astrophysics Data System (ADS)

    Dettmer, Simon L.; Berg, Johannes

    2018-02-01

    We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics.

  3. Fast maximum likelihood estimation of mutation rates using a birth-death process.

    PubMed

    Wu, Xiaowei; Zhu, Hongxiao

    2015-02-07

    Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inferences about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates some desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves a substantial improvement in computational speed and is applicable to arbitrarily large numbers of mutants. In addition, it still retains good accuracy in point estimation. Published by Elsevier Ltd.
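
    For context, a sketch of the conventional approach that MLE-BD is designed to speed up: the Luria-Delbrück (Lea-Coulson) mutant-count distribution evaluated through its standard recursion and maximized numerically (the mutant counts are invented; this is not the MLE-BD algorithm itself):

        import numpy as np
        from scipy.optimize import minimize_scalar

        def ld_pmf(m, n_max):
            """Ma-Sandri-Sarkar recursion for the Luria-Delbrueck mutant-count
            distribution with expected number of mutations m."""
            p = np.zeros(n_max + 1)
            p[0] = np.exp(-m)
            for n in range(1, n_max + 1):
                p[n] = (m / n) * sum(p[k] / (n - k + 1) for k in range(n))
            return p

        def neg_loglik(m, counts):
            p = ld_pmf(m, counts.max())
            return -np.sum(np.log(p[counts]))

        counts = np.array([0, 2, 5, 1, 0, 11, 3, 0, 7, 2])   # hypothetical mutants per culture
        fit = minimize_scalar(neg_loglik, bounds=(1e-3, 20.0), args=(counts,), method="bounded")
        print("ML estimate of the expected number of mutations m:", fit.x)
        # The mutation rate follows by dividing m by the (known) number of cells per culture.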

  4. New method to incorporate Type B uncertainty into least-squares procedures in radionuclide metrology.

    PubMed

    Han, Jubong; Lee, K B; Lee, Jong-Man; Park, Tae Soon; Oh, J S; Oh, Pil-Jei

    2016-03-01

    We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which a conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional PDFs (Probability Density Functions) that characterize the Type B uncertainties. The PDFs describe one's incomplete knowledge of correction factors, which are treated as nuisance parameters. We use the extended likelihood function to make point and interval estimates of parameters in essentially the same way as the least-squares function is used in the conventional least-squares method. Since the nuisance parameters are not of interest and should not appear in the final result, we eliminate them by using the profile likelihood. As an example, we present a case study for a linear regression analysis with a common component of Type B uncertainty. In this example we compare the analysis results obtained with our procedure to those from conventional methods. Copyright © 2015. Published by Elsevier Ltd.
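
    A hedged sketch of the construction for a straight-line fit with one common Type B correction factor (the data, the uncertainties, and the Gaussian form of the additional PDF are assumptions for illustration, not the paper's case study):

        import numpy as np
        from scipy.optimize import minimize

        x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
        y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])   # hypothetical measurements
        sigma_a = 0.2                              # Type A (statistical) uncertainty per point
        sigma_b = 0.3                              # Type B uncertainty of a common correction delta

        def neg_ext_loglik(params):
            """Extended likelihood = original Gaussian likelihood times an additional
            PDF describing incomplete knowledge of the nuisance correction delta."""
            a, b, delta = params
            resid = y - (a + b * x) - delta
            return 0.5 * np.sum((resid / sigma_a) ** 2) + 0.5 * (delta / sigma_b) ** 2

        fit = minimize(neg_ext_loglik, x0=[0.0, 1.0, 0.0])
        a_hat, b_hat, delta_hat = fit.x
        print("intercept:", a_hat, "slope:", b_hat, "fitted correction:", delta_hat)
        # For interval estimation one would re-minimize over delta at each fixed (a, b),
        # i.e. use the profile likelihood to eliminate the nuisance parameter.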

  5. A Bayesian Alternative for Multi-objective Ecohydrological Model Specification

    NASA Astrophysics Data System (ADS)

    Tang, Y.; Marshall, L. A.; Sharma, A.; Ajami, H.

    2015-12-01

    Process-based ecohydrological models combine hydrological, physical, biogeochemical and ecological processes of catchments, and are usually more complex and more heavily parameterized than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, with the development of Markov Chain Monte Carlo (MCMC) techniques, Bayesian inference has become one of the most popular tools for quantifying uncertainties in hydrological modeling. Our study aims to develop appropriate prior distributions and likelihood functions that minimize model uncertainties and bias within a Bayesian ecohydrological framework. A formal Bayesian approach is implemented in an ecohydrological model which combines a hydrological model (HyMOD) and a dynamic vegetation model (DVM). Simulations based on single-objective likelihoods (streamflow or LAI) and multi-objective likelihoods (streamflow and LAI) with different weights are compared. Uniform, weakly informative and strongly informative prior distributions are used in different simulations. The Kullback-Leibler divergence (KLD) is used to measure the (dis)similarity between different priors and the corresponding posterior distributions, to examine parameter sensitivity. Results show that different prior distributions can strongly influence posterior distributions for parameters, especially when the available data are limited or parameters are insensitive to them. We demonstrate differences in optimized parameters and uncertainty limits between cases based on multi-objective likelihoods and those based on single-objective likelihoods. We also demonstrate the importance of appropriately defining the weights of objectives in multi-objective calibration according to the different data types.
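
    A minimal sketch of a weighted multi-objective log-likelihood of the kind described (the Gaussian error model, the weights, the error scales, and the data are illustrative assumptions, not the study's actual formulation):

        import numpy as np

        def gaussian_loglik(obs, sim, sigma):
            resid = obs - sim
            return -0.5 * np.sum((resid / sigma) ** 2 + np.log(2.0 * np.pi * sigma ** 2))

        def multi_objective_loglik(q_obs, q_sim, lai_obs, lai_sim,
                                   w_q=0.5, w_lai=0.5, sigma_q=1.0, sigma_lai=0.3):
            """Weighted combination of a streamflow likelihood and an LAI likelihood."""
            return (w_q * gaussian_loglik(q_obs, q_sim, sigma_q)
                    + w_lai * gaussian_loglik(lai_obs, lai_sim, sigma_lai))

        # Hypothetical observed and simulated series:
        q_obs, q_sim = np.array([5.0, 7.2, 6.1]), np.array([4.8, 7.5, 6.0])
        lai_obs, lai_sim = np.array([2.1, 2.4, 2.2]), np.array([2.0, 2.5, 2.3])
        print(multi_objective_loglik(q_obs, q_sim, lai_obs, lai_sim))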

  6. Two models for evaluating landslide hazards

    USGS Publications Warehouse

    Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.

    2006-01-01

    Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards.
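
    A compact sketch of the empirical likelihood-ratio construction under conditional independence (the binned predictor grids and landslide cells are invented; this is not the authors' code):

        import numpy as np

        def likelihood_ratio_map(predictor_bins, landslide_mask, eps=1e-9):
            """Empirical likelihood ratio for one binned predictor: frequency of each
            bin inside landslide cells divided by its frequency outside them."""
            ratios = {}
            for b in np.unique(predictor_bins):
                inside = np.mean(predictor_bins[landslide_mask] == b)
                outside = np.mean(predictor_bins[~landslide_mask] == b)
                ratios[b] = (inside + eps) / (outside + eps)
            return np.vectorize(ratios.get)(predictor_bins)

        # Hypothetical flattened grids: binned slope angle, coded lithology, known landslide cells.
        slope_bins = np.array([0, 1, 2, 2, 1, 0, 2, 1])
        litho_bins = np.array([1, 1, 0, 0, 1, 0, 0, 1])
        slides     = np.array([False, False, True, True, False, False, True, False])

        # Conditional independence: combine predictors by multiplying their ratios cell by cell.
        combined = likelihood_ratio_map(slope_bins, slides) * likelihood_ratio_map(litho_bins, slides)
        print(combined)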

  7. Profile-likelihood Confidence Intervals in Item Response Theory Models.

    PubMed

    Chalmers, R Philip; Pek, Jolynn; Liu, Yang

    2017-01-01

    Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
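
    A generic illustration of a profile-likelihood CI obtained by a grid scan (a Gaussian-mean example rather than an IRT model; the data are invented, and the cutoff uses the usual chi-square calibration):

        import numpy as np
        from scipy.stats import chi2

        data = np.array([1.2, 0.8, 1.5, 0.9, 1.1, 1.4, 0.7, 1.3])   # hypothetical sample

        def profile_loglik(mu, x):
            """Profile out the nuisance standard deviation at each fixed mean mu."""
            sigma_hat = np.sqrt(np.mean((x - mu) ** 2))              # inner maximization in closed form
            return np.sum(-np.log(sigma_hat) - 0.5 * ((x - mu) / sigma_hat) ** 2)

        grid = np.linspace(0.5, 1.7, 601)
        prof = np.array([profile_loglik(m, data) for m in grid])
        cutoff = prof.max() - 0.5 * chi2.ppf(0.95, df=1)             # 2*(max - profile) <= 3.84
        inside = grid[prof >= cutoff]
        print("95% profile-likelihood CI for the mean: [%.3f, %.3f]" % (inside.min(), inside.max()))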

  8. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    PubMed

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

    Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. Beyond valence in the perception of likelihood: the role of emotion specificity.

    PubMed

    DeSteno, D; Petty, R E; Wegener, D T; Rucker, D D

    2000-03-01

    Positive and negative moods have been shown to increase likelihood estimates of future events matching these states in valence (e.g., E. J. Johnson & A. Tversky, 1983). In the present article, 4 studies provide evidence that this congruency bias (a) is not limited to valence but functions in an emotion-specific manner, (b) derives from the informational value of emotions, and (c) is not the inevitable outcome of likelihood assessment under heightened emotion. Specifically, Study 1 demonstrates that sadness and anger, 2 distinct, negative emotions, differentially bias likelihood estimates of sad and angering events. Studies 2 and 3 replicate this finding in addition to supporting an emotion-as-information (cf. N. Schwarz & G. L. Clore, 1983), as opposed to a memory-based, mediating process for the bias. Finally, Study 4 shows that when the source of the emotion is salient, a reversal of the bias can occur given greater cognitive effort aimed at accuracy.

  10. Predictors of Co-Occurring Risk Behavior Trajectories among Economically Disadvantaged African American Youth: Contextual and Individual Factors

    PubMed Central

    Sterrett, Emma M.; Dymnicki, Allison B.; Henry, David; Byck, Gayle; Bolland, John; Mustanski, Brian

    2014-01-01

    Purpose: African American youth, particularly those from low-income backgrounds, evidence high rates of negative outcomes associated with three problem behaviors, conduct problems, risky sexual behavior, and substance use. This study used a contextually-tailored version of Problem Behavior Theory (PBT) to examine predictors of the simultaneous development of problem behaviors in this specific cultural group. Methods: Socio-contextual and individual variables representing four PBT predictor categories, controls protection, support protection, models risk, and vulnerability risk, were examined as predictors of co-occurring problem behaviors among economically disadvantaged African American adolescents (n = 949). Specifically, the likelihood of following three classes of multiple problem behavior trajectories spanning ages 12 to 18, labeled the “early experimenters,” “increasing high risk-takers,” and “adolescent-limited” classes, as opposed to a “normative” class was examined. Results: Among other findings, controls protection in the form of a more stringent household curfew at age 12 was related to a lower likelihood of being in the “early experimenters” and “increasing high risk-takers” classes. Conversely, vulnerability risk manifested as stronger attitudes of violence inevitability was associated with a higher likelihood of being in the “early experimenters” class. However, the PBT category of support protection was not associated with risk trajectory class. More distal neighborhood-level manifestations of PBT categories also did not predict co-occurring behavior problems. Conclusion: Guided by an incorporation of contextually-salient processes into PBT, prevention programs aiming to decrease co-occurring problem behaviors among low-income African American adolescents would do well to target both proximal systems and psychological constructs related to perceived security throughout adolescence. PMID:24755141

  11. Predictors of co-occurring risk behavior trajectories among economically disadvantaged African-American youth: contextual and individual factors.

    PubMed

    Sterrett, Emma M; Dymnicki, Allison B; Henry, David; Byck, Gayle R; Bolland, John; Mustanski, Brian

    2014-09-01

    African-American youth, particularly those from low-income backgrounds, evidence high rates of negative outcomes associated with three problem behaviors, conduct problems, risky sexual behavior, and substance use. This study used a contextually tailored version of problem behavior theory (PBT) to examine predictors of the simultaneous development of problem behaviors in this specific cultural group. Sociocontextual and individual variables representing four PBT predictor categories, controls protection, support protection, models risk, and vulnerability risk, were examined as predictors of co-occurring problem behaviors among economically disadvantaged African-American adolescents (n = 949). Specifically, the likelihood of following three classes of multiple problem behavior trajectories spanning ages 12-18, labeled the "early experimenters," "increasing high risk-takers," and "adolescent-limited" classes, as opposed to a "normative" class, was examined. Among other findings, controls protection in the form of a more stringent household curfew at age 12 was related to a lower likelihood of being in the "early experimenters" and "increasing high risk-takers" classes. Conversely, vulnerability risk manifested as stronger attitudes of violence inevitability was associated with a higher likelihood of being in the "early experimenters" class. However, the PBT category of support protection was not associated with risk trajectory class. More distal neighborhood-level manifestations of PBT categories also did not predict co-occurring behavior problems. Guided by an incorporation of contextually salient processes into PBT, prevention programs aiming to decrease co-occurring problem behaviors among low-income African-American adolescents would do well to target both proximal systems and psychological constructs related to perceived security throughout adolescence. Copyright © 2014 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  12. Pathogenic and Obesogenic Factors Associated with Inflammation in Chinese Children, Adolescents and Adults

    PubMed Central

    Thompson, Amanda L.; Houck, Kelly M.; Adair, Linda; Gordon-Larsen, Penny; Du, Shufa; Zhang, Bing; Popkin, Barry

    2014-01-01

    Objectives: Influenced by pathogen exposure and obesity, inflammation provides a critical biological pathway linking changing environments to the development of cardiometabolic disease. This study tests the relative contribution of obesogenic and pathogenic factors to moderate and acute CRP elevations in Chinese children, adolescents and adults. Methods: Data come from 8795 participants in the China Health and Nutrition Study. Age-stratified multinomial logistic models were used to test the association between illness history, pathogenic exposures, adiposity, health behaviors and moderate (1-10 mg/L in children and 3-10 mg/L in adults) and acute (>10 mg/L) CRP elevations, controlling for age, sex and clustering by household. Backward model selection was used to assess which pathogenic and obesogenic predictors remained independently associated with moderate and acute CRP levels when accounting for simultaneous exposures. Results: Overweight was the only significant independent risk factor for moderate inflammation in children (RRR 2.10, 95%CI 1.13-3.89). History of infectious (RRR 1.28, 95%CI 1.08-1.52) and non-communicable (RRR 1.37, 95%CI 1.12-1.69) disease, overweight (RRR 1.66, 95%CI 1.45-1.89) and high waist circumference (RRR 1.63, 95%CI 1.42-1.87) were independently associated with a greater likelihood of moderate inflammation in adults, while history of infectious disease (RRR 1.87, 95%CI 1.35-2.56) and overweight (RRR 1.40, 95%CI 1.04-1.88) were independently associated with acute inflammation. Environmental pathogenicity was associated with a reduced likelihood of moderate inflammation, but a greater likelihood of acute inflammation in adults. Conclusions: These results highlight the importance of both obesogenic and pathogenic factors in shaping inflammation risk in societies undergoing nutritional and epidemiological transitions. PMID:24123588

  13. Situational cues and momentary food environment predict everyday eating behavior in adults with overweight and obesity.

    PubMed

    Elliston, Katherine G; Ferguson, Stuart G; Schüz, Natalie; Schüz, Benjamin

    2017-04-01

    Individual eating behavior is a risk factor for obesity and highly dependent on internal and external cues. Many studies also suggest that the food environment (i.e., food outlets) influences eating behavior. This study therefore examines the momentary food environment (at the time of eating) and the role of cues simultaneously in predicting everyday eating behavior in adults with overweight and obesity. Intensive longitudinal study using ecological momentary assessment (EMA) over 14 days in 51 adults with overweight and obesity (average body mass index = 30.77; SD = 4.85) with a total of 745 participant days of data. Multiple daily assessments of eating (meals, high- or low-energy snacks) and randomly timed assessments. Cues and the momentary food environment were assessed during both assessment types. Random effects multinomial logistic regression shows that both internal (affect) and external (food availability, social situation, observing others eat) cues were associated with increased likelihood of eating. The momentary food environment predicted meals and snacking on top of cues, with a higher likelihood of high-energy snacks when fast food restaurants were close by (odds ratio [OR] = 1.89, 95% confidence interval [CI] = 1.22, 2.93) and a higher likelihood of low-energy snacks in proximity to supermarkets (OR = 2.29, 95% CI = 1.38, 3.82). Real-time eating behavior, both in terms of main meals and snacks, is associated with internal and external cues in adults with overweight and obesity. In addition, perceptions of the momentary food environment influence eating choices, emphasizing the importance of an integrated perspective on eating behavior and obesity prevention. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. Alternative stable states and phase shifts in coral reefs under anthropogenic stress.

    PubMed

    Fung, Tak; Seymour, Robert M; Johnson, Craig R

    2011-04-01

    Ecosystems with alternative stable states (ASS) may shift discontinuously from one stable state to another as environmental parameters cross a threshold. Reversal can then be difficult due to hysteresis effects. This contrasts with continuous state changes in response to changing environmental parameters, which are less difficult to reverse. Worldwide degradation of coral reefs, involving "phase shifts" from coral to algal dominance, highlights the pressing need to determine the likelihood of discontinuous phase shifts in coral reefs, in contrast to continuous shifts with no ASS. However, there is little evidence either for or against the existence of ASS for coral reefs. We use dynamic models to investigate the likelihood of continuous and discontinuous phase shifts in coral reefs subject to sustained environmental perturbation by fishing, nutrification, and sedimentation. Our modeling results suggest that coral reefs with or without anthropogenic stress can exhibit ASS, such that discontinuous phase shifts can occur. We also find evidence to support the view that high macroalgal growth rates and low grazing rates on macroalgae favor ASS in coral reefs. Further, our results suggest that the three stressors studied, either alone or in combination, can increase the likelihood of both continuous and discontinuous phase shifts by altering the competitive balance between corals and algae. However, in contrast to continuous phase shifts, we find that discontinuous shifts occur only in model coral reefs with parameter values near the extremes of their empirically determined ranges. This suggests that continuous shifts are more likely than discontinuous shifts in coral reefs. Our results also suggest that, for ecosystems in general, tackling multiple human stressors simultaneously maximizes resilience to phase shifts, ASS, and hysteresis, leading to improvements in ecosystem health and functioning.

  15. Scanning linear estimation: improvements over region of interest (ROI) methods

    NASA Astrophysics Data System (ADS)

    Kupinski, Meredith K.; Clarkson, Eric W.; Barrett, Harrison H.

    2013-03-01

    In tomographic medical imaging, a signal activity is typically estimated by summing voxels from a reconstructed image. We introduce an alternative estimation scheme that operates on the raw projection data and offers a substantial improvement, as measured by the ensemble mean-square error (EMSE), when compared to using voxel values from a maximum-likelihood expectation-maximization (MLEM) reconstruction. The scanning-linear (SL) estimator operates on the raw projection data and is derived as a special case of maximum-likelihood estimation with a series of approximations to make the calculation tractable. The approximated likelihood accounts for background randomness, measurement noise and variability in the parameters to be estimated. When signal size and location are known, the SL estimate of signal activity is unbiased, i.e. the average estimate equals the true value. By contrast, unpredictable bias arising from the null functions of the imaging system affect standard algorithms that operate on reconstructed data. The SL method is demonstrated for two different tasks: (1) simultaneously estimating a signal’s size, location and activity; (2) for a fixed signal size and location, estimating activity. Noisy projection data are realistically simulated using measured calibration data from the multi-module multi-resolution small-animal SPECT imaging system. For both tasks, the same set of images is reconstructed using the MLEM algorithm (80 iterations), and the average and maximum values within the region of interest (ROI) are calculated for comparison. This comparison shows dramatic improvements in EMSE for the SL estimates. To show that the bias in ROI estimates affects not only absolute values but also relative differences, such as those used to monitor the response to therapy, the activity estimation task is repeated for three different signal sizes.

  16. FPGA Acceleration of the phylogenetic likelihood function for Bayesian MCMC inference methods.

    PubMed

    Zierke, Stephanie; Bakos, Jason D

    2010-04-12

    Maximum likelihood (ML)-based phylogenetic inference has become a popular method for estimating the evolutionary relationships among species based on genomic sequence data. This method is used in applications such as RAxML, GARLI, MrBayes, PAML, and PAUP. The Phylogenetic Likelihood Function (PLF) is an important kernel computation for this method. The PLF consists of a loop with no conditional behavior or dependencies between iterations. As such, it contains a high potential for exploiting parallelism using micro-architectural techniques. In this paper, we describe a technique for mapping the PLF and supporting logic onto a Field Programmable Gate Array (FPGA)-based co-processor. By leveraging the FPGA's on-chip DSP modules and the high-bandwidth local memory attached to the FPGA, the resultant co-processor can accelerate ML-based methods and outperform state-of-the-art multi-core processors. We use the MrBayes 3 tool as a framework for designing our co-processor. For large datasets, we estimate that our accelerated MrBayes, if run on a current-generation FPGA, achieves a 10x speedup relative to software running on a state-of-the-art server-class microprocessor. The FPGA-based implementation achieves its performance by deeply pipelining the likelihood computations, performing multiple floating-point operations in parallel, and through a natural log approximation that is chosen specifically to leverage a deeply pipelined custom architecture. Heterogeneous computing, which combines general-purpose processors with special-purpose co-processors such as FPGAs and GPUs, is a promising approach for high-performance phylogeny inference as shown by the growing body of literature in this field. FPGAs in particular are well-suited for this task because of their low power consumption as compared to many-core processors and Graphics Processor Units (GPUs).
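
    The PLF kernel referred to above is, per alignment site, a small matrix-vector computation with no dependence between sites, which is what the FPGA pipeline exploits. A simplified CPU sketch of one internal-node update (generic Felsenstein-pruning form, not the MrBayes or FPGA implementation):

        import numpy as np

        def plf_node_update(cl_left, cl_right, P_left, P_right):
            """Combine the conditional likelihoods of two child nodes into the parent's.
            cl_* : (n_sites, 4) conditional likelihoods over the 4 nucleotide states.
            P_*  : (4, 4) substitution probability matrices on the child branches."""
            parent = np.empty_like(cl_left)
            for s in range(cl_left.shape[0]):        # independent iterations, trivially parallel
                parent[s] = (P_left @ cl_left[s]) * (P_right @ cl_right[s])
            return parent

        # Hypothetical inputs: 3 sites, an arbitrary row-stochastic transition matrix.
        P = np.full((4, 4), 0.05) + np.eye(4) * 0.8
        cl_a = np.tile([1.0, 0.0, 0.0, 0.0], (3, 1))  # child observed in state A at all sites
        cl_c = np.tile([0.0, 1.0, 0.0, 0.0], (3, 1))  # child observed in state C at all sites
        print(plf_node_update(cl_a, cl_c, P, P))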

  17. An Elaboration Likelihood Model Based Longitudinal Analysis of Attitude Change during the Process of IT Acceptance via Education Program

    ERIC Educational Resources Information Center

    Lee, Woong-Kyu

    2012-01-01

    The principal objective of this study was to gain insight into attitude changes occurring during IT acceptance from the perspective of elaboration likelihood model (ELM). In particular, the primary target of this study was the process of IT acceptance through an education program. Although the Internet and computers are now quite ubiquitous, and…

  18. ATAC Autocuer Modeling Analysis.

    DTIC Science & Technology

    1981-01-01

    the analysis of the simple rectangular segmentation (1) is based on detection and estimation theory (2). This approach uses the concept of maximum ...continuous waveforms. In order to develop the principles of maximum likelihood, it is convenient to develop the principles for the "classical...the concept of maximum likelihood is significant in that it provides the optimum performance of the detection/estimation problem. With a knowledge of

  19. Predictors of extra-marital partnerships among women married to fishermen along Lake Victoria in Kisumu County, Kenya.

    PubMed

    Kwena, Zachary; Mwanzo, Isaac; Shisanya, Chris; Camlin, Carol; Turan, Janet; Achiro, Lilian; Bukusi, Elizabeth

    2014-01-01

    The vulnerability of women to HIV infection makes establishing predictors of women's involvement in extra-marital partnerships critical. We investigated the predictors of extra-marital partnerships among women married to fishermen. The current analyses are part of a mixed methods cross-sectional survey of 1090 gender-matched interviews with 545 couples and 12 focus group discussions (FGDs) with 59 couples. Using a proportional-to-size simple random sample of fishermen as our index participants, we asked them to enrol in the study with their spouses. The consenting couples were interviewed simultaneously in separate private rooms. In addition to socio-economic and demographic data, we collected information on sexual behaviour including extra-marital sexual partnerships. We analysed these data using descriptive statistics and multivariate logistic regression. For FGDs, couples willing to participate were invited, consented and separated for simultaneous FGDs by gender-matched moderators. The resultant audio files were transcribed verbatim and translated into English for coding and thematic content analysis using NVivo 9. The prevalence of extra-marital partnerships among women was 6.2% within a reference time of six months. Factors that were independently associated with an increased likelihood of extra-marital partnerships were domestic violence (aOR, 1.45; 95% CI 1.09-1.92), women reporting being denied a preferred sex position (aOR, 3.34; 95% CI 1.26-8.84) and a spouse's longer erect penis (aOR, 1.34; 95% CI 1.00-1.78). Conversely, women's age--more than 24 years (aOR, 0.33; 95% CI 0.14-0.78) and women's increased sexual satisfaction (aOR, 0.92; 95% CI 0.87-0.96) were associated with a reduced likelihood of extra-marital partnerships. Domestic violence, denial of a preferred sex position, longer erect penis, younger age and increased sexual satisfaction were the main predictors of women's involvement in extra-marital partnerships. Integration of sex education, counselling and life skills training in couple HIV prevention programs might help in risk reduction.

  20. Multi-Contrast Multi-Atlas Parcellation of Diffusion Tensor Imaging of the Human Brain

    PubMed Central

    Tang, Xiaoying; Yoshida, Shoko; Hsu, John; Huisman, Thierry A. G. M.; Faria, Andreia V.; Oishi, Kenichi; Kutten, Kwame; Poretti, Andrea; Li, Yue; Miller, Michael I.; Mori, Susumu

    2014-01-01

    In this paper, we propose a novel method for parcellating the human brain into 193 anatomical structures based on diffusion tensor images (DTIs). This was accomplished in the setting of multi-contrast diffeomorphic likelihood fusion using multiple DTI atlases. DTI images are modeled as high dimensional fields, with each voxel exhibiting a vector valued feature comprising mean diffusivity (MD), fractional anisotropy (FA), and fiber angle. For each structure, the probability distribution of each element in the feature vector is modeled as a mixture of Gaussians, the parameters of which are estimated from the labeled atlases. The structure-specific feature vector is then used to parcellate the test image. For each atlas, a likelihood is iteratively computed based on the structure-specific vector feature. The likelihoods from multiple atlases are then fused. The updating and fusing of the likelihoods is achieved based on the expectation-maximization (EM) algorithm for maximum a posteriori (MAP) estimation problems. We first demonstrate the performance of the algorithm by examining the parcellation accuracy of 18 structures from 25 subjects with a varying degree of structural abnormality. Dice values ranging from 0.8 to 0.9 were obtained. In addition, a strong correlation was found between the volumes of the automated and the manual parcellations. Then, we present scan-rescan reproducibility based on another dataset of 16 DTI images – an average of 3.73%, 1.91%, and 1.79% for volume, mean FA, and mean MD, respectively. Finally, the range of anatomical variability in the normal population was quantified for each structure. PMID:24809486

  1. Improved relocatable over-the-horizon radar detection and tracking using the maximum likelihood adaptive neural system algorithm

    NASA Astrophysics Data System (ADS)

    Perlovsky, Leonid I.; Webb, Virgil H.; Bradley, Scott R.; Hansen, Christopher A.

    1998-07-01

    An advanced detection and tracking system is being developed for the U.S. Navy's Relocatable Over-the-Horizon Radar (ROTHR) to provide improved tracking performance against small aircraft typically used in drug-smuggling activities. The development is based on the Maximum Likelihood Adaptive Neural System (MLANS), a model-based neural network that combines advantages of neural network and model-based algorithmic approaches. The objective of the MLANS tracker development effort is to address user requirements for increased detection and tracking capability in clutter and improved track position, heading, and speed accuracy. The MLANS tracker is expected to outperform other approaches to detection and tracking for the following reasons. It incorporates adaptive internal models of target return signals, target tracks and maneuvers, and clutter signals, which leads to concurrent clutter suppression, detection, and tracking (track-before-detect). It is not combinatorial and thus does not require any thresholding or peak picking and can track in low signal-to-noise conditions. It incorporates superresolution spectrum estimation techniques exceeding the performance of conventional maximum likelihood and maximum entropy methods. The unique spectrum estimation method is based on the Einsteinian interpretation of the ROTHR received energy spectrum as a probability density of signal frequency. The MLANS neural architecture and learning mechanism are founded on spectrum models and maximization of the "Einsteinian" likelihood, allowing knowledge of the physical behavior of both targets and clutter to be injected into the tracker algorithms. The paper describes the addressed requirements and expected improvements, theoretical foundations, engineering methodology, and results of the development effort to date.

  2. Synchrotron-based coherent scatter x-ray projection imaging using an array of monoenergetic pencil beams.

    PubMed

    Landheer, Karl; Johns, Paul C

    2012-09-01

    Traditional projection x-ray imaging utilizes only the information from the primary photons. Low-angle coherent scatter images can be acquired simultaneously with the primary images and provide additional information. In medical applications, scatter imaging can improve x-ray contrast or reduce dose by using information that is currently discarded in radiological images to augment the transmitted radiation information. Other applications include non-destructive testing and security. A system at the Canadian Light Source synchrotron was configured that utilizes multiple pencil beams (up to five) to create both primary and coherent scatter projection images simultaneously. The sample was scanned through the beams using an automated step-and-shoot setup. Pixels were acquired in a hexagonal lattice to maximize packing efficiency. The typical pitch was between 1.0 and 1.6 mm. A Maximum Likelihood-Expectation Maximization-based iterative method was used to disentangle the overlapping information from the flat panel digital x-ray detector. The pixel value of the coherent scatter image was generated by integrating the radial profile (scatter intensity versus scattering angle) over an angular range. Different angular ranges maximize the contrast between different materials of interest. A five-beam primary and scatter image set (which had a pixel beam time of 990 ms and total scan time of 56 min) of a porcine phantom is included. For comparison, a single-beam coherent scatter image of the same phantom is included. The muscle-fat contrast was 0.10 ± 0.01 and 1.16 ± 0.03 for the five-beam primary and scatter images, respectively. The air kerma was measured free in air using aluminum oxide optically stimulated luminescent dosimeters. The total area-averaged air kerma for the scan was measured to be 7.2 ± 0.4 cGy, although, due to difficulties in small-beam dosimetry, this number could be inaccurate.

  3. GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation

    PubMed Central

    Li, Hong; Lu, Mingquan

    2017-01-01

    Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks. PMID:28665318
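
    The GLRT principle behind the detection statistic can be illustrated generically (this toy mean-shift test is not the paper's statistic; the noise model, threshold, and data are invented):

        import numpy as np
        from scipy.stats import chi2

        def glrt_statistic(x, sigma=1.0):
            """Generalized likelihood ratio test of H0: mean = 0 against H1: mean free,
            for i.i.d. Gaussian noise with known sigma. Returns 2*log(sup L1 / sup L0)."""
            mu_hat = np.mean(x)                                  # MLE under H1
            ll1 = -0.5 * np.sum(((x - mu_hat) / sigma) ** 2)
            ll0 = -0.5 * np.sum((x / sigma) ** 2)
            return 2.0 * (ll1 - ll0)

        rng = np.random.default_rng(0)
        x = rng.normal(0.4, 1.0, size=50)                        # hypothetical residuals with a shift
        stat = glrt_statistic(x)
        print("GLRT statistic:", stat, "detection at 1% false-alarm:", stat > chi2.ppf(0.99, df=1))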

  4. GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation.

    PubMed

    Wang, Fei; Li, Hong; Lu, Mingquan

    2017-06-30

    Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks.

  5. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; A Recursive Maximum Likelihood Decoding

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    The Viterbi algorithm is indeed a very simple and efficient method of implementing maximum likelihood decoding. However, if we take advantage of the structural properties in a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis; only some special one-section trellises of relatively small state and branch complexity are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table, which contains the most likely codeword and its metric for a given received sequence r = (r_1, r_2, ..., r_n). This algorithm essentially uses a divide-and-conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.

  6. A scan statistic for binary outcome based on hypergeometric probability model, with an application to detecting spatial clusters of Japanese encephalitis.

    PubMed

    Zhao, Xing; Zhou, Xiao-Hua; Feng, Zijian; Guo, Pengfei; He, Hongyan; Zhang, Tao; Duan, Lei; Li, Xiaosong

    2013-01-01

    As a useful tool for geographical cluster detection of events, the spatial scan statistic is widely applied in many fields and plays an increasingly important role. The classic version of the spatial scan statistic for a binary outcome was developed by Kulldorff, based on the Bernoulli or the Poisson probability model. In this paper, we apply the hypergeometric probability model to construct the likelihood function under the null hypothesis. Compared with existing methods, this null-hypothesis likelihood offers an alternative, indirect route to identifying the potential cluster, with the test statistic taken as the extreme value of the likelihood function. As in Kulldorff's methods, we adopt a Monte Carlo test for significance. Both methods are applied to detecting spatial clusters of Japanese encephalitis in Sichuan province, China, in 2009, and the detected clusters are identical. A simulation study on independent benchmark data indicates that the test statistic based on the hypergeometric model outperforms Kulldorff's statistics for clusters of high population density or large size; otherwise Kulldorff's statistics are superior.
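
    A loose sketch of a scan over candidate windows using a hypergeometric null likelihood, with Monte Carlo significance (the regional counts and window set are invented; the exact form of the statistic in the paper may differ):

        import numpy as np
        from scipy.stats import hypergeom

        def scan_statistic(cases, pop, windows):
            """For each candidate window (a set of region indices), evaluate the
            hypergeometric log-likelihood of its case count under random labelling,
            and return the most extreme (smallest) value over all windows."""
            N, C = pop.sum(), cases.sum()
            return min(hypergeom.logpmf(cases[w].sum(), N, C, pop[w].sum()) for w in windows)

        cases = np.array([1, 0, 7, 6, 1, 0])                 # hypothetical cases per region
        pop   = np.array([100, 120, 110, 130, 90, 100])      # hypothetical population per region
        windows = [[i, i + 1] for i in range(5)]             # contiguous pairs as candidate clusters

        obs = scan_statistic(cases, pop, windows)
        rng = np.random.default_rng(1)
        null = [scan_statistic(rng.multivariate_hypergeometric(pop, cases.sum()), pop, windows)
                for _ in range(999)]
        p = (1 + sum(s <= obs for s in null)) / 1000.0
        print("observed statistic:", obs, "Monte Carlo p-value:", p)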

  7. Evidence-Based Occupational Hearing Screening I: Modeling the Effects of Real-World Noise Environments on the Likelihood of Effective Speech Communication.

    PubMed

    Soli, Sigfrid D; Giguère, Christian; Laroche, Chantal; Vaillancourt, Véronique; Dreschler, Wouter A; Rhebergen, Koenraad S; Harkins, Kevin; Ruckstuhl, Mark; Ramulu, Pradeep; Meyers, Lawrence S

    The objectives of this study were to (1) identify essential hearing-critical job tasks for public safety and law enforcement personnel; (2) determine the locations and real-world noise environments where these tasks are performed; (3) characterize each noise environment in terms of its impact on the likelihood of effective speech communication, considering the effects of different levels of vocal effort, communication distances, and repetition; and (4) use this characterization to define an objective normative reference for evaluating the ability of individuals to perform essential hearing-critical job tasks in noisy real-world environments. Data from five occupational hearing studies performed over a 17-year period for various public safety agencies were analyzed. In each study, job task analyses by job content experts identified essential hearing-critical tasks and the real-world noise environments where these tasks are performed. These environments were visited, and calibrated recordings of each noise environment were made. The extended speech intelligibility index (ESII) was calculated for each 4-sec interval in each recording. These data, together with the estimated ESII value required for effective speech communication by individuals with normal hearing, allowed the likelihood of effective speech communication in each noise environment for different levels of vocal effort and communication distances to be determined. These likelihoods provide an objective norm-referenced and standardized means of characterizing the predicted impact of real-world noise on the ability to perform essential hearing-critical tasks. A total of 16 noise environments for law enforcement personnel and eight noise environments for corrections personnel were analyzed. Effective speech communication was essential to hearing-critical tasks performed in these environments. Average noise levels ranged from approximately 70 to 87 dBA in law enforcement environments and 64 to 80 dBA in corrections environments. The likelihood of effective speech communication at communication distances of 0.5 and 1 m was often less than 0.50 for normal vocal effort. Likelihood values often increased to 0.80 or more when raised or loud vocal effort was used. Effective speech communication at and beyond 5 m was often unlikely, regardless of vocal effort. ESII modeling of nonstationary real-world noise environments may prove to be an objective means of characterizing their impact on the likelihood of effective speech communication. The normative reference provided by these measures predicts the extent to which hearing impairments that increase the ESII value required for effective speech communication also decrease the likelihood of effective speech communication. These predictions may provide an objective evidence-based link between the essential hearing-critical job task requirements of public safety and law enforcement personnel and ESII-based hearing assessment of individuals who seek to perform these jobs.

  8. Applying the elaboration likelihood model of persuasion to a videotape-based eating disorders primary prevention program for adolescent girls.

    PubMed

    Withers, Giselle F; Wertheim, Eleanor H

    2004-01-01

    This study applied principles from the Elaboration Likelihood Model of Persuasion to the prevention of disordered eating. Early adolescent girls watched either a preventive videotape only (n=114) or video plus post-video activity (verbal discussion, written exercises, or control discussion) (n=187); or had no intervention (n=104). Significantly more body image and knowledge improvements occurred at post video and follow-up in the intervention groups compared to no intervention. There were no outcome differences among intervention groups, or between girls with high or low elaboration likelihood. Further research is needed in integrating the videotape into a broader prevention package.

  9. On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood.

    PubMed

    Karabatsos, George

    2018-06-01

    This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon previous methods because it provides an omnibus test of the entire hierarchy of cancellation axioms, beyond double cancellation. It does so while accounting for the posterior uncertainty that is inherent in the empirical orderings that are implied by these axioms, together. The new method is illustrated through a test of the cancellation axioms on a classic survey data set, and through the analysis of simulated data.
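
    A minimal sketch of the synthetic-likelihood ingredient (a generic Wood-style construction on a toy simulator, not the article's importance-sampling test of the cancellation axioms):

        import numpy as np

        def synthetic_loglik(theta, observed_summaries, simulate, n_sim=200, rng=None):
            """Simulate summary statistics under theta, fit a multivariate normal to them,
            and evaluate the observed summaries under that fitted normal."""
            rng = rng or np.random.default_rng()
            sims = np.array([simulate(theta, rng) for _ in range(n_sim)])   # (n_sim, n_stats)
            mu = sims.mean(axis=0)
            cov = np.cov(sims, rowvar=False) + 1e-9 * np.eye(sims.shape[1])
            diff = observed_summaries - mu
            _, logdet = np.linalg.slogdet(cov)
            return -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet)

        def simulate(theta, rng):
            # Hypothetical simulator: summaries are (mean, std) of 50 draws from N(theta, 1).
            x = rng.normal(theta, 1.0, size=50)
            return np.array([x.mean(), x.std()])

        obs = np.array([0.9, 1.05])                           # hypothetical observed summaries
        for theta in [0.0, 0.5, 1.0, 1.5]:
            print(theta, synthetic_loglik(theta, obs, simulate, rng=np.random.default_rng(0)))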

  10. cosmoabc: Likelihood-free inference for cosmology

    NASA Astrophysics Data System (ADS)

    Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M.; Cisewski, Jessi; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.

    2015-05-01

    Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python Approximate Bayesian Computation (ABC) sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy clusters number counts without computing the likelihood function.
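
    For orientation, a bare-bones ABC rejection sampler on a toy problem (cosmoabc itself implements a Population Monte Carlo refinement of this idea; the toy simulator, prior, and distance below are assumptions for illustration):

        import numpy as np

        def abc_rejection(observed, prior_sample, simulate, distance,
                          n_draws=5000, quantile=0.01, rng=None):
            """Draw parameters from the prior, simulate mock data, and keep the draws
            whose simulated summaries fall closest to the observation."""
            rng = rng or np.random.default_rng()
            thetas = np.array([prior_sample(rng) for _ in range(n_draws)])
            dists = np.array([distance(simulate(t, rng), observed) for t in thetas])
            return thetas[dists <= np.quantile(dists, quantile)]

        # Toy problem: infer a Poisson rate from the summed counts of 10 observations.
        posterior = abc_rejection(
            observed=42,
            prior_sample=lambda rng: rng.uniform(0.0, 20.0),
            simulate=lambda lam, rng: rng.poisson(lam, size=10).sum(),
            distance=lambda sim, obs: abs(sim - obs),
            rng=np.random.default_rng(3),
        )
        print("approximate posterior mean:", posterior.mean())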

  11. Bayesian analysis of time-series data under case-crossover designs: posterior equivalence and inference.

    PubMed

    Li, Shi; Mukherjee, Bhramar; Batterman, Stuart; Ghosh, Malay

    2013-12-01

    Case-crossover designs are widely used to study short-term exposure effects on the risk of acute adverse health events. While the frequentist literature on this topic is vast, there is no Bayesian work in this general area. The contribution of this paper is twofold. First, the paper establishes Bayesian equivalence results that require characterization of the set of priors under which the posterior distributions of the risk ratio parameters based on a case-crossover and time-series analysis are identical. Second, the paper studies inferential issues under case-crossover designs in a Bayesian framework. Traditionally, a conditional logistic regression is used for inference on risk-ratio parameters in case-crossover studies. We consider instead a more general full likelihood-based approach which makes less restrictive assumptions on the risk functions. Formulation of a full likelihood leads to growth in the number of parameters proportional to the sample size. We propose a semi-parametric Bayesian approach using a Dirichlet process prior to handle the random nuisance parameters that appear in a full likelihood formulation. We carry out a simulation study to compare the Bayesian methods based on full and conditional likelihood with the standard frequentist approaches for case-crossover and time-series analysis. The proposed methods are illustrated through the Detroit Asthma Morbidity, Air Quality and Traffic study, which examines the association between acute asthma risk and ambient air pollutant concentrations. © 2013, The International Biometric Society.

  12. Gender differences in the effect of visual sexual stimulation on the perceived covariation between freedom and responsibility.

    PubMed

    Nevala, James D; Gray, Nicholas J; McGahan, Joseph R; Minchew, Teresa

    2006-03-01

    The authors replicated and extended a test of Epstein's cognitive-experiential self-theory (CEST; S. Epstein, 1973, 1980, 1985, 1994, 2003) regarding subjective estimates of the relationship between freedom and responsibility. CEST predicts that information in the form of sexually provocative images is likely to be processed by the experiential system. The authors' hypothesis was that such experiential processing would cause an increase in the likelihood of participants endorsing as true a statement that proposed a negative correlation between freedom and responsibility. University students (N = 97) in introductory psychology classes viewed 25 images of either men or women in provocative clothing, or a control consisting of academic journal covers, after which they responded to 24 statements proposing either a positive, negative, or noncontingent relationship between freedom and responsibility. Judgments were analyzed according to perceiver gender and target gender, as well as the framing of the proposition and its contingency category. The hypothesis was supported for the men and to a lesser extent for the women. Although priming the experiential system by exposing participants to sexually provocative images did not change endorsement rates of positive contingencies, it did lead to an increase in the likelihood of simultaneously endorsing negative contingencies.

  13. Acute multi-sgRNA knockdown of KEOPS complex genes reproduces the microcephaly phenotype of the stable knockout zebrafish model.

    PubMed

    Jobst-Schwan, Tilman; Schmidt, Johanna Magdalena; Schneider, Ronen; Hoogstraten, Charlotte A; Ullmann, Jeremy F P; Schapiro, David; Majmundar, Amar J; Kolb, Amy; Eddy, Kaitlyn; Shril, Shirlee; Braun, Daniela A; Poduri, Annapurna; Hildebrandt, Friedhelm

    2018-01-01

    Until recently, morpholino oligonucleotides have been widely employed in zebrafish as an acute and efficient loss-of-function assay. However, off-target effects and reproducibility issues when compared to stable knockout lines have compromised their further use. Here we employed an acute CRISPR/Cas approach using multiple single guide RNAs targeting simultaneously different positions in two exemplar genes (osgep or tprkb) to increase the likelihood of generating mutations on both alleles in the injected F0 generation and to achieve a similar effect as morpholinos but with the reproducibility of stable lines. This multi single guide RNA approach resulted in median likelihoods for at least one mutation on each allele of >99% and sgRNA specific insertion/deletion profiles as revealed by deep-sequencing. Immunoblot showed a significant reduction for Osgep and Tprkb proteins. For both genes, the acute multi-sgRNA knockout recapitulated the microcephaly phenotype and reduction in survival that we observed previously in stable knockout lines, though milder in the acute multi-sgRNA knockout. Finally, we quantify the degree of mutagenesis by deep sequencing, and provide a mathematical model to quantitate the chance for a biallelic loss-of-function mutation. Our findings can be generalized to acute and stable CRISPR/Cas targeting for any zebrafish gene of interest.
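
    The reported >99% likelihoods rest on a simple independence argument; a hedged sketch of that arithmetic (the cut efficiency and guide number below are hypothetical, and the paper fits its own mathematical model to the deep-sequencing data):

        def biallelic_knockout_probability(cut_efficiency, n_guides):
            """Chance that both alleles carry at least one mutation when each of n_guides
            independently mutates a given allele with probability cut_efficiency."""
            per_allele = 1.0 - (1.0 - cut_efficiency) ** n_guides
            return per_allele ** 2

        # Hypothetical per-sgRNA efficiency of 70% with four guides targeting the gene:
        print(f"P(biallelic loss of function) = {biallelic_knockout_probability(0.7, 4):.4f}")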

  14. Application of maximum-likelihood estimation in optical coherence tomography for nanometer-class thickness estimation

    NASA Astrophysics Data System (ADS)

    Huang, Jinxin; Yuan, Qun; Tankam, Patrice; Clarkson, Eric; Kupinski, Matthew; Hindman, Holly B.; Aquavella, James V.; Rolland, Jannick P.

    2015-03-01

    In biophotonics imaging, one important and quantitative task is layer-thickness estimation. In this study, we investigate the approach of combining optical coherence tomography and a maximum-likelihood (ML) estimator for layer thickness estimation in the context of tear film imaging. The motivation of this study is to extend our understanding of tear film dynamics, which is the prerequisite to advance the management of Dry Eye Disease, through the simultaneous estimation of the thickness of the tear film lipid and aqueous layers. The estimator takes into account the different statistical processes associated with the imaging chain. We theoretically investigated the impact of key system parameters, such as the axial point spread functions (PSF) and various sources of noise on measurement uncertainty. Simulations show that an OCT system with a 1 μm axial PSF (FWHM) allows unbiased estimates down to nanometers with nanometer precision. In implementation, we built a customized Fourier domain OCT system that operates in the 600 to 1000 nm spectral window and achieves 0.93 micron axial PSF in corneal epithelium. We then validated the theoretical framework with physical phantoms made of custom optical coatings, with layer thicknesses from tens of nanometers to microns. Results demonstrate unbiased nanometer-class thickness estimates in three different physical phantoms.

  15. UCERF3: A new earthquake forecast for California's complex fault system

    USGS Publications Warehouse

    Field, Edward H.; ,

    2015-01-01

    With innovations, fresh data, and lessons learned from recent earthquakes, scientists have developed a new earthquake forecast model for California, a region under constant threat from potentially damaging events. The new model, referred to as the third Uniform California Earthquake Rupture Forecast, or "UCERF" (http://www.WGCEP.org/UCERF3), provides authoritative estimates of the magnitude, location, and likelihood of earthquake fault rupture throughout the state. Overall the results confirm previous findings, but with some significant changes because of model improvements. For example, compared to the previous forecast (Uniform California Earthquake Rupture Forecast 2), the likelihood of moderate-sized earthquakes (magnitude 6.5 to 7.5) is lower, whereas that of larger events is higher. This is because of the inclusion of multifault ruptures, where earthquakes are no longer confined to separate, individual faults, but can occasionally rupture multiple faults simultaneously. The public-safety implications of this and other model improvements depend on several factors, including site location and type of structure (for example, family dwelling compared to a long-span bridge). Building codes, earthquake insurance products, emergency plans, and other risk-mitigation efforts will be updated accordingly. This model also serves as a reminder that damaging earthquakes are inevitable for California. Fortunately, there are many simple steps residents can take to protect lives and property.

  16. Asymptotic formulae for likelihood-based tests of new physics

    NASA Astrophysics Data System (ADS)

    Cowan, Glen; Cranmer, Kyle; Gross, Eilam; Vitells, Ofer

    2011-02-01

    We describe likelihood-based statistical tests for use in high energy physics for the discovery of new phenomena and for construction of confidence intervals on model parameters. We focus on the properties of the test procedures that allow one to account for systematic uncertainties. Explicit formulae for the asymptotic distributions of test statistics are derived using results of Wilks and Wald. We motivate and justify the use of a representative data set, called the "Asimov data set", which provides a simple method to obtain the median experimental sensitivity of a search or measurement as well as fluctuations about this expectation.
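
    One widely quoted result of this kind is the Asimov approximation for the median discovery significance of a single-bin Poisson counting experiment; a small sketch (the signal and background expectations are hypothetical):

        import numpy as np

        def asimov_discovery_significance(s, b):
            """Median expected discovery significance evaluated on the Asimov data set
            n = s + b for a single-bin counting experiment."""
            return np.sqrt(2.0 * ((s + b) * np.log(1.0 + s / b) - s))

        s, b = 10.0, 100.0   # hypothetical expected signal and background counts
        print(f"Z_A = {asimov_discovery_significance(s, b):.2f} sigma")
        print(f"naive s/sqrt(b) = {s / np.sqrt(b):.2f} sigma")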

  17. Gyro-based Maximum-Likelihood Thruster Fault Detection and Identification

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Lages, Chris; Mah, Robert; Clancy, Daniel (Technical Monitor)

    2002-01-01

    When building smaller, less expensive spacecraft, there is a need for intelligent fault tolerance vs. increased hardware redundancy. If fault tolerance can be achieved using existing navigation sensors, cost and vehicle complexity can be reduced. A maximum likelihood-based approach to thruster fault detection and identification (FDI) for spacecraft is developed here and applied in simulation to the X-38 space vehicle. The system uses only gyro signals to detect and identify hard, abrupt, single and multiple jet on- and off-failures. Faults are detected within one second and identified within one to five seconds.

  18. A unifying framework for marginalized random intercept models of correlated binary outcomes

    PubMed Central

    Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian M.

    2013-01-01

    We demonstrate that many current approaches for marginal modeling of correlated binary outcomes produce likelihoods that are equivalent to the copula-based models herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed effects estimation and interpretation in the analysis of correlated binary data with exchangeable correlation structures. Moreover, we propose a nomenclature and set of model relationships that substantially elucidates the complex area of marginalized random intercept models for binary data. A diverse collection of didactic mathematical and numerical examples is given to illustrate concepts. PMID:25342871

  19. ModelTest Server: a web-based tool for the statistical selection of models of nucleotide substitution online

    PubMed Central

    Posada, David

    2006-01-01

    ModelTest server is a web-based application for the selection of models of nucleotide substitution using the program ModelTest. The server takes as input a text file with likelihood scores for the set of candidate models. Models can be selected with hierarchical likelihood ratio tests, or with the Akaike or Bayesian information criteria. The output includes several statistics for the assessment of model selection uncertainty, for model averaging or to estimate the relative importance of model parameters. The server can be accessed at . PMID:16845102
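
    As a hedged illustration of the information-criterion selection mentioned above (not the ModelTest server itself), the sketch below computes AIC and BIC from candidate-model log-likelihoods and derives Akaike weights as a simple measure of model-selection uncertainty; the model names, log-likelihood values, parameter counts, and alignment length are made up for the example.

    ```python
    import math

    def aic(loglik, k):
        return -2.0 * loglik + 2.0 * k

    def bic(loglik, k, n):
        return -2.0 * loglik + k * math.log(n)

    # Hypothetical candidates: (name, maximized log-likelihood, free substitution parameters).
    candidates = [("JC69", -2502.1, 0), ("HKY85", -2441.7, 4), ("GTR+G", -2430.2, 9)]
    n_sites = 1200  # hypothetical alignment length used for BIC

    scores = [(name, aic(ll, k), bic(ll, k, n_sites)) for name, ll, k in candidates]
    best_aic = min(scores, key=lambda t: t[1])

    # Akaike weights quantify model-selection uncertainty across the candidate set.
    deltas = [a - best_aic[1] for _, a, _ in scores]
    raw = [math.exp(-0.5 * d) for d in deltas]
    weights = [w / sum(raw) for w in raw]

    for (name, a, b), w in zip(scores, weights):
        print(f"{name:7s} AIC={a:9.1f} BIC={b:9.1f} Akaike weight={w:.3f}")
    ```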

  20. The recognition and evaluation of homoplasy in primate and human evolution.

    PubMed

    Lockwood, C A; Fleagle, J G

    1999-01-01

    Homoplasy has been a prominent issue in primate systematics and phylogeny for as long as people have been studying human evolution. In the past, homoplasy, in the form of parallel evolution, was often considered the dominant theme in primate evolution. Today, it receives blame for difficulties in phylogenetic analysis, but is essential in the study of adaptation. This paper reviews the history of study of homoplasy, methods of defining homoplasy, and methodological and biological factors that generate homoplasy. A post hoc definition of homology and homoplasy, based on patterns of character distributions and their congruence or incongruence on a cladogram, is the most consistent method of recognizing these phenomena. Defined this way, homology and homoplasy are mutually exclusive. However, when different levels of analysis are examined, it is seen that homoplasy at one level, such as adult phenotype, often exists simultaneously with homology at a different level, such as developmental process. Thus, in some cases, patterns of homoplasy may point to underlying similarities that reflect the shared heritage of a particular clade. This is an old concept that is being renewed on the strength of recent trends in developmental biology. Factors that influence homoplasy include character definition and a host of biological factors, such as developmental constraints, allometry, and adaptation. These interact with one another to provide explanations of homoplastic patterns. Because of the repetition of events, explanations of homoplastic features are often more reliable than those for homologous features, and serve as effective tests for hypotheses of evolutionary process. In some cases, particular explanations of homoplasy lead to generalizations about the likelihood of homoplasy in a type of structure. The structure may be adaptive or highly epigenetic, or it may belong to an anatomical system considered to be more prone to homoplasy than others. However, our review shows that these generalizations are usually based on theory, and contradictory expectations can be developed under different theoretical models. More rigorous empirical studies are necessary to discover what, if any, generalizations can be made about the likelihood of homoplasy in different types of characters.

  1. Distributed multimodal data fusion for large scale wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Ertin, Emre

    2006-05-01

    Sensor network technology has enabled new surveillance systems where sensor nodes equipped with processing and communication capabilities can collaboratively detect, classify and track targets of interest over a large surveillance area. In this paper we study distributed fusion of multimodal sensor data for extracting target information from a large-scale sensor network. Optimal tracking, classification, and reporting of threat events require joint consideration of multiple sensor modalities. Multiple sensor modalities improve tracking by reducing the uncertainty in the track estimates as well as resolving track-sensor data association problems. Our approach to solving the fusion problem with a large number of multimodal sensors is the construction of likelihood maps. The likelihood maps provide summary data for the solution of the detection, tracking and classification problem. The likelihood map presents the sensory information in a format that is easy for decision makers to interpret and is suitable for fusion with spatial prior information such as maps and imaging data from stand-off imaging sensors. We follow a statistical approach to combine sensor data at different levels of uncertainty and resolution. The likelihood map transforms each sensor data stream to a spatio-temporal likelihood map ideally suitable for fusion with imaging sensor outputs and prior geographic information about the scene. We also discuss distributed computation of the likelihood map using a gossip-based algorithm and present simulation results.
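
    The fusion rule behind such likelihood maps is easy to sketch: when sensor observations are conditionally independent given the target location, per-sensor log-likelihood grids simply add. The toy example below assumes two range-only sensors with Gaussian noise on a small grid; it is an illustration of the idea, not the authors' multimodal implementation or its gossip-based distributed computation.

    ```python
    import numpy as np

    def sensor_loglik_map(grid_x, grid_y, sensor_pos, measured_range, sigma):
        """Log-likelihood over a 2-D grid for a range-only sensor with
        Gaussian measurement noise (an assumed, simplified sensor model)."""
        dx = grid_x - sensor_pos[0]
        dy = grid_y - sensor_pos[1]
        predicted = np.hypot(dx, dy)
        return -0.5 * ((measured_range - predicted) / sigma) ** 2

    # 100 x 100 surveillance grid.
    xs, ys = np.meshgrid(np.linspace(0, 100, 100), np.linspace(0, 100, 100))

    # Two hypothetical sensors report noisy ranges to a target near (60, 40).
    fused = (sensor_loglik_map(xs, ys, (10, 10), 58.3, sigma=3.0)
             + sensor_loglik_map(xs, ys, (90, 20), 36.1, sigma=5.0))

    # Normalise to a posterior-like map and read off the most likely cell.
    fused -= fused.max()
    posterior = np.exp(fused)
    posterior /= posterior.sum()
    i, j = np.unravel_index(np.argmax(posterior), posterior.shape)
    print("most likely target cell:", xs[i, j], ys[i, j])
    ```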

  2. Equivalence of binormal likelihood-ratio and bi-chi-squared ROC curve models

    PubMed Central

    Hillis, Stephen L.

    2015-01-01

    A basic assumption for a meaningful diagnostic decision variable is that there is a monotone relationship between it and its likelihood ratio. This relationship, however, generally does not hold for a decision variable that results in a binormal ROC curve. As a result, receiver operating characteristic (ROC) curve estimation based on the assumption of a binormal ROC-curve model produces improper ROC curves that have “hooks,” are not concave over the entire domain, and cross the chance line. Although in practice this “improperness” is usually not noticeable, sometimes it is evident and problematic. To avoid this problem, Metz and Pan proposed basing ROC-curve estimation on the assumption of a binormal likelihood-ratio (binormal-LR) model, which states that the decision variable is an increasing transformation of the likelihood-ratio function of a random variable having normal conditional diseased and nondiseased distributions. However, their development is not easy to follow. I show that the binormal-LR model is equivalent to a bi-chi-squared model in the sense that the families of corresponding ROC curves are the same. The bi-chi-squared formulation provides an easier-to-follow development of the binormal-LR ROC curve and its properties in terms of well-known distributions. PMID:26608405
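
    The conventional binormal model can be written as ROC(t) = Φ(a + bΦ⁻¹(t)). The short sketch below evaluates such a curve and checks for the "hook" (a dip below the chance line) that makes it improper when b ≠ 1; the parameter values are arbitrary demonstration choices, not taken from the paper.

    ```python
    import numpy as np
    from scipy.stats import norm

    def binormal_roc(a, b, fpf):
        """True-positive fraction of the conventional binormal ROC model
        at the given false-positive fractions: ROC(t) = Phi(a + b * Phi^-1(t))."""
        return norm.cdf(a + b * norm.ppf(fpf))

    fpf = np.linspace(1e-4, 1 - 1e-4, 200)

    # With b != 1 the conventional binormal curve is "improper": it crosses
    # the chance line near one end of the domain.
    tpf = binormal_roc(a=1.0, b=0.5, fpf=fpf)
    print("curve dips below the chance line:", bool(np.any(tpf < fpf)))
    ```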

  3. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics.

    PubMed

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-04-06

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
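
    The classical maximum-likelihood deconvolution under a Poisson noise model is the Richardson-Lucy multiplicative update. The minimal single-frame, unregularized sketch below is meant only to make that building block concrete; it is not the authors' multi-frame, regularized algorithm with frame selection and PSF estimation.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(observed, psf, n_iter=30, eps=1e-12):
        """Single-frame Richardson-Lucy iteration: the multiplicative update
        that maximizes the Poisson log-likelihood for a known PSF."""
        estimate = np.full_like(observed, observed.mean(), dtype=float)
        psf_flipped = psf[::-1, ::-1]
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = observed / (blurred + eps)
            estimate *= fftconvolve(ratio, psf_flipped, mode="same")
        return estimate

    # Tiny synthetic test: blur a point source with a Gaussian PSF and add Poisson noise.
    rng = np.random.default_rng(0)
    truth = np.zeros((64, 64)); truth[32, 32] = 500.0
    x = np.arange(-7, 8)
    g = np.exp(-x**2 / (2 * 2.0**2))
    psf = np.outer(g, g); psf /= psf.sum()
    observed = rng.poisson(fftconvolve(truth, psf, mode="same") + 1.0).astype(float)
    restored = richardson_lucy(observed, psf)
    print("peak before:", np.unravel_index(observed.argmax(), observed.shape),
          "peak after:", np.unravel_index(restored.argmax(), restored.shape))
    ```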

  4. Modeling abundance effects in distance sampling

    USGS Publications Warehouse

    Royle, J. Andrew; Dawson, D.K.; Bates, S.

    2004-01-01

    Distance-sampling methods are commonly used in studies of animal populations to estimate population density. A common objective of such studies is to evaluate the relationship between abundance or density and covariates that describe animal habitat or other environmental influences. However, little attention has been focused on methods of modeling abundance covariate effects in conventional distance-sampling models. In this paper we propose a distance-sampling model that accommodates covariate effects on abundance. The model is based on specification of the distance-sampling likelihood at the level of the sample unit in terms of local abundance (for each sampling unit). This model is augmented with a Poisson regression model for local abundance that is parameterized in terms of available covariates. Maximum-likelihood estimation of detection and density parameters is based on the integrated likelihood, wherein local abundance is removed from the likelihood by integration. We provide an example using avian point-transect data of Ovenbirds (Seiurus aurocapillus) collected using a distance-sampling protocol and two measures of habitat structure (understory cover and basal area of overstory trees). The model yields a sensible description (positive effect of understory cover, negative effect of basal area) of the relationship between habitat and Ovenbird density that can be used to evaluate the effects of habitat management on Ovenbird populations.
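
    A drastically simplified sketch of an integrated distance-sampling likelihood with an abundance covariate is given below, assuming line transects, a shared half-normal detection function, and a log-linear Poisson model for local abundance; the data, truncation distance, and starting values are hypothetical, and the point-transect Ovenbird analysis in the paper is more involved.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm, poisson

    def negloglik(params, counts, covariate, distances, w):
        """Joint negative log-likelihood of a covariate-dependent Poisson
        abundance model and a half-normal detection function."""
        b0, b1, log_sigma = params
        sigma = np.exp(log_sigma)
        lam = np.exp(b0 + b1 * covariate)                 # expected abundance per unit
        # Average detection probability within the truncation distance w.
        p = np.sqrt(2 * np.pi) * sigma * (norm.cdf(w / sigma) - 0.5) / w
        count_ll = poisson.logpmf(counts, lam * p).sum()
        # Density of the observed distances under the half-normal detection function.
        dist_ll = np.sum(-0.5 * (distances / sigma) ** 2) - len(distances) * np.log(p * w)
        return -(count_ll + dist_ll)

    # Hypothetical data: counts at 5 units, one habitat covariate, pooled detection distances.
    counts = np.array([3, 5, 2, 8, 6])
    covariate = np.array([0.2, 0.5, 0.1, 0.9, 0.7])
    distances = np.array([4.0, 11.5, 23.0, 7.2, 31.0, 15.8, 2.5])
    fit = minimize(negloglik, x0=[1.0, 0.5, np.log(20.0)],
                   args=(counts, covariate, distances, 50.0), method="Nelder-Mead")
    print("beta0, beta1, sigma:", fit.x[0], fit.x[1], np.exp(fit.x[2]))
    ```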

  5. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    PubMed Central

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-01-01

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods. PMID:28383503

  6. Extending the BEAGLE library to a multi-FPGA platform.

    PubMed

    Jin, Zheming; Bakos, Jason D

    2013-01-19

    Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein's pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein's pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform's peak memory bandwidth and the implementation's memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE's CPU implementation on a dual Xeon 5520 and a 3X speedup versus BEAGLE's GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints to match the target platform. We have found that using low-latency floating-point operators can significantly reduce FPGA area and still meet timing requirements on the target platform. We found that this design methodology can achieve performance that exceeds that of a GPU-based coprocessor.

  7. Treatment Recommendations for Single-Unit Crowns: Findings from The National Dental Practice-Based Research Network

    PubMed Central

    McCracken, Michael S.; Louis, David R.; Litaker, Mark S.; Minyé, Helena M.; Mungia, Rahma; Gordan, Valeria V.; Marshall, Don G.; Gilbert, Gregg H.

    2016-01-01

    Background Objectives were to: (1) quantify practitioner variation in likelihood to recommend a crown; and (2) test whether certain dentist, practice, and clinical factors are significantly associated with this likelihood. Methods Dentists in the National Dental Practice-Based Research Network completed a questionnaire about indications for single-unit crowns. In four clinical scenarios, practitioners ranked their likelihood of recommending a single-unit crown. These responses were used to calculate a dentist-specific “Crown Factor” (CF; range 0–12). A higher score implies a higher likelihood to recommend a crown. Certain characteristics were tested for statistically significant associations with the CF. Results 1,777 of 2,132 eligible dentists responded (83%). Practitioners were most likely to recommend crowns for teeth that were fractured, cracked, endodontically-treated, or had a broken restoration. Practitioners overwhelmingly recommended crowns for posterior teeth treated endodontically (94%). Practice owners, Southwest practitioners, and practitioners with a balanced work load were more likely to recommend crowns, as were practitioners who use optical scanners for digital impressions. Conclusions There is substantial variation in the likelihood of recommending a crown. While consensus exists in some areas (posterior endodontic treatment), variation dominates in others (size of an existing restoration). Recommendations varied by type of practice, network region, practice busyness, patient insurance status, and use of optical scanners. Practical Implications Recommendations for crowns may be influenced by factors unrelated to tooth and patient variables. A concern for tooth fracture -- whether from endodontic treatment, fractured teeth, or large restorations -- prompted many clinicians to recommend crowns. PMID:27492046

  8. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    NASA Astrophysics Data System (ADS)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  9. Detecting REM sleep from the finger: an automatic REM sleep algorithm based on peripheral arterial tone (PAT) and actigraphy.

    PubMed

    Herscovici, Sarah; Pe'er, Avivit; Papyan, Surik; Lavie, Peretz

    2007-02-01

    Scoring of REM sleep based on polysomnographic recordings is a laborious and time-consuming process. The growing number of ambulatory devices designed for cost-effective home-based diagnostic sleep recordings necessitates the development of a reliable automatic REM sleep detection algorithm that is not based on the traditional electroencephalographic, electrooculographic and electromyographic recordings trio. This paper presents an automatic REM detection algorithm based on the peripheral arterial tone (PAT) signal and actigraphy which are recorded with an ambulatory wrist-worn device (Watch-PAT100). The PAT signal is a measure of the pulsatile volume changes at the finger tip reflecting sympathetic tone variations. The algorithm was developed using a training set of 30 patients recorded simultaneously with polysomnography and Watch-PAT100. Sleep records were divided into 5 min intervals and two time series were constructed from the PAT amplitudes and PAT-derived inter-pulse periods in each interval. A prediction function based on 16 features extracted from the above time series that determines the likelihood of detecting a REM epoch was developed. The coefficients of the prediction function were determined using a genetic algorithm (GA) optimizing process tuned to maximize a price function depending on the sensitivity, specificity and agreement of the algorithm in comparison with the gold standard of polysomnographic manual scoring. Based on a separate validation set of 30 patients, the overall sensitivity, specificity and agreement of the automatic algorithm to identify standard 30 s epochs of REM sleep were 78%, 92%, and 89%, respectively. Deploying this REM detection algorithm in a wrist-worn device could be very useful for unattended ambulatory sleep monitoring. The innovative method of optimization using a genetic algorithm has been proven to yield robust results in the validation set.

  10. Bayesian logistic regression approaches to predict incorrect DRG assignment.

    PubMed

    Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural

    2018-05-07

    Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped to the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG-based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and to classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best performing Bayesian model improved overall classification performance by 6% compared to maximum likelihood, and by 34% compared to random classification. We found that the original DRG, the coder, and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches has improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.
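
    One simple way to see the effect of a weakly informative prior is the maximum a posteriori fit sketched below, where independent Gaussian priors on the coefficients amount to ridge-penalized maximum likelihood; the synthetic data and prior scale are assumptions made for illustration, not the models or data of the study.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_log_posterior(beta, X, y, prior_sd=2.5):
        """Negative log-posterior for logistic regression with independent
        Normal(0, prior_sd^2) priors on the coefficients (weakly informative)."""
        z = X @ beta
        loglik = np.sum(y * z - np.logaddexp(0.0, z))   # stable Bernoulli log-likelihood
        logprior = -0.5 * np.sum((beta / prior_sd) ** 2)
        return -(loglik + logprior)

    rng = np.random.default_rng(1)
    n, p = 500, 3
    X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])   # intercept + 3 covariates
    true_beta = np.array([-1.0, 0.8, 0.0, -0.5])
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_beta)))

    fit = minimize(neg_log_posterior, x0=np.zeros(p + 1), args=(X, y), method="BFGS")
    print("MAP coefficients:", np.round(fit.x, 2))
    ```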

  11. Automatic Spike Sorting Using Tuning Information

    PubMed Central

    Ventura, Valérie

    2011-01-01

    Current spike sorting methods focus on clustering neurons’ characteristic spike waveforms. The resulting spike-sorted data are typically used to estimate how covariates of interest modulate the firing rates of neurons. However, when these covariates do modulate the firing rates, they provide information about spikes’ identities, which thus far have been ignored for the purpose of spike sorting. This letter describes a novel approach to spike sorting, which incorporates both waveform information and tuning information obtained from the modulation of firing rates. Because it efficiently uses all the available information, this spike sorter yields lower spike misclassification rates than traditional automatic spike sorters. This theoretical result is verified empirically on several examples. The proposed method does not require additional assumptions; only its implementation is different. It essentially consists of performing spike sorting and tuning estimation simultaneously rather than sequentially, as is currently done. We used an expectation-maximization maximum likelihood algorithm to implement the new spike sorter. We present the general form of this algorithm and provide a detailed implementable version under the assumptions that neurons are independent and spike according to Poisson processes. Finally, we uncover a systematic flaw of spike sorting based on waveform information only. PMID:19548802
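
    A heavily simplified sketch of the idea is given below: cluster responsibilities combine a Gaussian waveform likelihood with a Poisson tuning term, so that a neuron's firing rate at the current stimulus condition influences which spikes it is assigned. The spherical Gaussian clusters, discrete stimulus conditions, and initialization are assumptions made for the example; the letter's general algorithm is more complete.

    ```python
    import numpy as np

    def em_spike_sort(waveforms, conditions, n_neurons, n_conditions, n_iter=50):
        """Toy EM spike sorter: each spike has a waveform feature vector and an
        integer stimulus condition; cluster membership combines a Gaussian
        waveform likelihood with a Poisson tuning term (rate per condition)."""
        n, d = waveforms.shape
        rng = np.random.default_rng(0)
        mu = waveforms[rng.choice(n, n_neurons, replace=False)]    # waveform means
        var = np.full(n_neurons, waveforms.var())                  # spherical variances
        rates = np.ones((n_neurons, n_conditions))                 # tuning curves

        for _ in range(n_iter):
            # E-step: responsibility of neuron k for spike i is proportional to
            # rate_k(condition_i) * Normal(waveform_i; mu_k, var_k * I).
            log_r = (-0.5 * ((waveforms[:, None, :] - mu[None]) ** 2).sum(-1) / var
                     - 0.5 * d * np.log(var)
                     + np.log(rates[:, conditions].T + 1e-12))
            log_r -= log_r.max(axis=1, keepdims=True)
            r = np.exp(log_r)
            r /= r.sum(axis=1, keepdims=True)

            # M-step: weighted waveform statistics and per-condition expected counts.
            nk = r.sum(axis=0) + 1e-12
            mu = (r.T @ waveforms) / nk[:, None]
            var = np.maximum(
                (r * ((waveforms[:, None, :] - mu[None]) ** 2).sum(-1)).sum(0) / (d * nk),
                1e-6)
            for c in range(n_conditions):
                rates[:, c] = r[conditions == c].sum(axis=0) + 1e-12
        return r.argmax(axis=1), mu, rates
    ```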

  12. Automatic spike sorting using tuning information.

    PubMed

    Ventura, Valérie

    2009-09-01

    Current spike sorting methods focus on clustering neurons' characteristic spike waveforms. The resulting spike-sorted data are typically used to estimate how covariates of interest modulate the firing rates of neurons. However, when these covariates do modulate the firing rates, they provide information about spikes' identities, which thus far have been ignored for the purpose of spike sorting. This letter describes a novel approach to spike sorting, which incorporates both waveform information and tuning information obtained from the modulation of firing rates. Because it efficiently uses all the available information, this spike sorter yields lower spike misclassification rates than traditional automatic spike sorters. This theoretical result is verified empirically on several examples. The proposed method does not require additional assumptions; only its implementation is different. It essentially consists of performing spike sorting and tuning estimation simultaneously rather than sequentially, as is currently done. We used an expectation-maximization maximum likelihood algorithm to implement the new spike sorter. We present the general form of this algorithm and provide a detailed implementable version under the assumptions that neurons are independent and spike according to Poisson processes. Finally, we uncover a systematic flaw of spike sorting based on waveform information only.

  13. Bayesian multivariate Poisson abundance models for T-cell receptor data.

    PubMed

    Greene, Joshua; Birtwistle, Marc R; Ignatowicz, Leszek; Rempala, Grzegorz A

    2013-06-07

    A major feature of an adaptive immune system is its ability to generate B- and T-cell clones capable of recognizing and neutralizing specific antigens. These clones recognize antigens with the help of the surface molecules, called antigen receptors, acquired individually during the clonal development process. In order to ensure a response to a broad range of antigens, the number of different receptor molecules is extremely large, resulting in a huge clonal diversity of both B- and T-cell receptor populations and making their experimental comparisons statistically challenging. To facilitate such comparisons, we propose a flexible parametric model of multivariate count data and illustrate its use in a simultaneous analysis of multiple antigen receptor populations derived from mammalian T-cells. The model relies on a representation of the observed receptor counts as a multivariate Poisson abundance mixture (mPAM). A Bayesian parameter fitting procedure is proposed, based on the complete posterior likelihood, rather than the conditional one used typically in similar settings. The new procedure is shown to be considerably more efficient than its conditional counterpart (as measured by the Fisher information) in the regions of mPAM parameter space relevant to model T-cell data. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. SEMIPARAMETRIC ZERO-INFLATED MODELING IN MULTI-ETHNIC STUDY OF ATHEROSCLEROSIS (MESA)

    PubMed Central

    Liu, Hai; Ma, Shuangge; Kronmal, Richard; Chan, Kung-Sik

    2013-01-01

    We analyze the Agatston score of coronary artery calcium (CAC) from the Multi-Ethnic Study of Atherosclerosis (MESA) using a semi-parametric zero-inflated modeling approach, where the observed CAC scores from this cohort consist of a high frequency of zeroes and continuously distributed positive values. Both partially constrained and unconstrained models are considered to investigate the underlying biological processes of CAC development from zero to positive, and from small amount to large amount. Different from existing studies, a model selection procedure based on likelihood cross-validation is adopted to identify the optimal model, which is justified by comparative Monte Carlo studies. A shrinkage version of the cubic regression spline is used for model estimation and variable selection simultaneously. When applying the proposed methods to the MESA data analysis, we show that the two biological mechanisms influencing the initiation of CAC and the magnitude of CAC when it is positive are better characterized by an unconstrained zero-inflated normal model. Our results are significantly different from those in published studies, and may provide further insights into the biological mechanisms underlying CAC development in humans. This highly flexible statistical framework can be applied to zero-inflated data analyses in other areas. PMID:23805172
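
    A generic (non-spline) zero-inflated likelihood can be sketched as below: a logistic model for whether the score is positive and a log-normal model for its magnitude when it is, both depending on a single covariate. The simulated data and model form are assumptions made only for illustration, not the semiparametric MESA analysis.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def zinll(params, y, x):
        """Negative log-likelihood of a simple zero-inflated log-normal model:
        logistic model for P(score > 0), log-normal model for positive scores."""
        a0, a1, b0, b1, log_s = params
        p_pos = 1.0 / (1.0 + np.exp(-(a0 + a1 * x)))       # probability of a positive score
        s = np.exp(log_s)
        safe_y = np.where(y > 0, y, 1.0)                   # avoid log(0) in the unused branch
        ll = np.where(
            y > 0,
            np.log(p_pos + 1e-12)
            + norm.logpdf(np.log(safe_y), b0 + b1 * x, s)
            - np.log(safe_y),                              # Jacobian of the log transform
            np.log(1.0 - p_pos + 1e-12),
        )
        return -ll.sum()

    # Hypothetical data: many exact zeros plus skewed positive values, one covariate.
    rng = np.random.default_rng(2)
    x = rng.normal(size=400)
    positive = rng.random(400) < 1.0 / (1.0 + np.exp(-(0.2 + 1.0 * x)))
    y = np.where(positive, np.exp(rng.normal(2.0 + 0.5 * x, 1.0)), 0.0)

    fit = minimize(zinll, x0=np.zeros(5), args=(y, x), method="Nelder-Mead",
                   options={"maxiter": 5000})
    print("zero-model and positive-model coefficients:", np.round(fit.x, 2))
    ```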

  15. Detecting Parkinson's disease from sustained phonation and speech signals.

    PubMed

    Vaiciukynas, Evaldas; Verikas, Antanas; Gelzinis, Adas; Bacauskiene, Marija

    2017-01-01

    This study investigates signals from sustained phonation and text-dependent speech modalities for Parkinson's disease screening. Phonation corresponds to the vowel /a/ voicing task and speech to the pronunciation of a short sentence in the Lithuanian language. Signals were recorded through two channels simultaneously, namely, acoustic cardioid (AC) and smart phone (SP) microphones. Additional modalities were obtained by splitting the speech recording into voiced and unvoiced parts. Information in each modality is summarized by 18 well-known audio feature sets. Random forest (RF) is used as a machine learning algorithm, both for individual feature sets and for decision-level fusion. Detection performance is measured by the out-of-bag equal error rate (EER) and the cost of the log-likelihood ratio. The Essentia audio feature set was the best using the AC speech modality and the YAAFE audio feature set was the best using the SP unvoiced modality, achieving EERs of 20.30% and 25.57%, respectively. Fusion of all feature sets and modalities resulted in an EER of 19.27% for the AC and 23.00% for the SP channel. Non-linear projection of a RF-based proximity matrix into the 2D space enriched medical decision support by visualization.
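
    The equal error rate used as the performance measure above can be computed directly from detector scores, as in the hedged sketch below; the synthetic scores and labels stand in for the study's classifier outputs.

    ```python
    import numpy as np

    def equal_error_rate(scores, labels):
        """Equal error rate: the operating point where the false-acceptance
        rate equals the false-rejection rate, found by sweeping thresholds."""
        thresholds = np.sort(np.unique(scores))
        fars, frrs = [], []
        for t in thresholds:
            decisions = scores >= t
            fars.append(np.mean(decisions[labels == 0]))    # negatives accepted
            frrs.append(np.mean(~decisions[labels == 1]))   # positives rejected
        fars, frrs = np.array(fars), np.array(frrs)
        i = np.argmin(np.abs(fars - frrs))
        return 0.5 * (fars[i] + frrs[i])

    # Synthetic detector scores: patients score higher than controls on average.
    rng = np.random.default_rng(3)
    scores = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(1.5, 1.0, 200)])
    labels = np.concatenate([np.zeros(200, dtype=int), np.ones(200, dtype=int)])
    print(f"EER = {equal_error_rate(scores, labels):.2%}")
    ```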

  16. Privatization and the allure of franchising: a Zambian feasibility study.

    PubMed

    Fiedler, John L; Wight, Jonathan B

    2003-01-01

    Efforts to privatize portions of the health sector have proven more difficult to implement than previously anticipated. One common bottleneck encountered has been the traditional organizational structure of the private sector, with its plethora of independent, single physician practices. The atomistic nature of the sector has rendered many privatization efforts difficult, slow and costly, in terms of both organizational development and administration. In many parts of Africa, in particular, the shortages of human and social capital, and the fragile nature of legal institutions, undermine the appeal of privatization. The private sector is left with inefficiencies, high prices and costs, and a reduced effective demand. The result is the simultaneous existence of excess capacity and unmet need. One potential method to improve the efficiency of the private sector, and thereby enhance the likelihood of successful privatization, is to transfer managerial technology--via franchising--from models that have proven successful elsewhere. This paper presents a feasibility analysis of franchising the successful Bolivian PROSALUD system's management package to Zambia. The assessment, based on PROSALUD's financial model, demonstrates that technology transfer requires careful adaptation to local conditions and, in this instance, would still require significant external assistance.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffmeister, Kathryn N. Gabet; Guildenbecher, Daniel Robert; Kearney, Sean P.

    We report the application of ultrafast rotational coherent anti-Stokes Raman scattering (CARS) for temperature and relative oxygen concentration measurements in the plume emanating from a burning aluminized ammonium perchlorate propellant strand. Combustion of these metal-based propellants is a particularly hostile environment for laser-based diagnostics, with intense background luminosity, scattering and beam obstruction from hot metal particles that can be as large as several hundred microns in diameter. CARS spectra that were previously obtained using nanosecond pulsed lasers in an aluminum-particle-seeded flame are examined and are determined to be severely impacted by nonresonant background, presumably as a result of the plasma formed by particulate-enhanced laser-induced breakdown. Introduction of fs/ps laser pulses enables CARS detection at reduced pulse energies, decreasing the likelihood of breakdown, while simultaneously providing time-gated elimination of any nonresonant background interference. Temperature probability densities and temperature/oxygen correlations were constructed from ensembles of several thousand single-laser-shot measurements from the fs/ps rotational CARS measurement volume positioned within 3 mm or less of the burning propellant surface. Preliminary results in canonical flames are presented using a hybrid fs/ps vibrational CARS system to demonstrate our progress towards acquiring vibrational CARS measurements for more accurate temperatures in the very high temperature propellant burns.

  18. Co-enrollment in multiple HIV prevention trials - experiences from the CAPRISA 004 Tenofovir gel trial.

    PubMed

    Karim, Quarraisha Abdool; Kharsany, Ayesha B M; Naidoo, Kasavan; Yende, Nonhlanhla; Gengiah, Tanuja; Omar, Zaheen; Arulappan, Natasha; Mlisana, Koleka P; Luthuli, Londiwe R; Karim, Salim S Abdool

    2011-05-01

    In settings where multiple HIV prevention trials are conducted in close proximity, trial participants may attempt to enroll in more than one trial simultaneously. Co-enrollment impacts on participants' safety and the validity of trial results. We describe our experience, remedial action taken, inter-organizational collaboration and lessons learnt following the identification of co-enrolled participants. Between February and April 2008, we identified 185 of the 398 enrolled participants as ineligible. In violation of the study protocol exclusion criteria, there was simultaneous enrollment in another HIV prevention trial (ineligible co-enrolled, n=135), and enrollment of women who had participated in a microbicide trial within the past 12 months (ineligible not co-enrolled, n=50). Following a complete audit of all enrolled participants, ineligible participants were discontinued from trial follow-up via study exit visits. A custom-designed education program on the impact of co-enrollment on participants' safety and the validity of trial results was implemented. A shared electronic database between research units was established to enable verification of each volunteer's trial participation and to prevent future co-enrollments. Interviews with ineligible enrolled women revealed that high-quality care, financial incentives, altruistic motives, preference for sex with gel, wanting to increase their likelihood of receiving active gel, perceived low risk of discovery and peer pressure were the reasons for their enrollment in the CAPRISA 004 trial. Instituting education programs based on the reasons reported by women for seeking enrollment in more than one trial and using a shared central database system to identify co-enrollments have effectively prevented further co-enrollments. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. A partial differential equation-based general framework adapted to Rayleigh's, Rician's and Gaussian's distributed noise for restoration and enhancement of magnetic resonance image.

    PubMed

    Yadav, Ram Bharos; Srivastava, Subodh; Srivastava, Rajeev

    2016-01-01

    The proposed framework is obtained by casting the noise removal problem into a variational framework. This framework automatically identifies the various types of noise present in the magnetic resonance image and filters them by choosing an appropriate filter. This filter includes two terms: the first term is a data likelihood term and the second term is a prior function. The first term is obtained by minimizing the negative log likelihood of the corresponding probability density functions: Gaussian, Rayleigh, or Rician. Further, due to the ill-posedness of the likelihood term, a prior function is needed. This paper examines three partial differential equation-based priors, which include a total variation-based prior, an anisotropic diffusion-based prior, and a complex diffusion (CD)-based prior. A regularization parameter is used to balance the trade-off between the data fidelity term and the prior. A finite difference scheme is used for discretization of the proposed method. The performance analysis and comparative study of the proposed method with other standard methods are presented for the BrainWeb dataset at varying noise levels in terms of peak signal-to-noise ratio, mean square error, structure similarity index map, and correlation parameter. From the simulation results, it is observed that the proposed framework with the CD-based prior performs better than the other priors under consideration.
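
    Of the three priors examined, the anisotropic-diffusion prior is perhaps the easiest to illustrate in isolation. The sketch below runs a plain Perona-Malik diffusion on a noisy synthetic image, without the likelihood (data-fidelity) term or noise-model selection of the proposed framework; all parameter values are arbitrary demonstration choices.

    ```python
    import numpy as np

    def anisotropic_diffusion(img, n_iter=20, kappa=30.0, dt=0.15):
        """Perona-Malik anisotropic diffusion: smooths homogeneous regions while
        preserving edges by reducing conduction where gradients are large."""
        u = img.astype(float).copy()
        for _ in range(n_iter):
            # Differences to the four nearest neighbours (periodic borders via
            # np.roll keep the sketch short).
            north = np.roll(u, -1, axis=0) - u
            south = np.roll(u, 1, axis=0) - u
            east = np.roll(u, -1, axis=1) - u
            west = np.roll(u, 1, axis=1) - u
            # Conduction coefficients: small across strong edges.
            cn = np.exp(-(north / kappa) ** 2)
            cs = np.exp(-(south / kappa) ** 2)
            ce = np.exp(-(east / kappa) ** 2)
            cw = np.exp(-(west / kappa) ** 2)
            u += dt * (cn * north + cs * south + ce * east + cw * west)
        return u

    # Toy example: a noisy step edge keeps its sharpness while flat areas smooth out.
    rng = np.random.default_rng(4)
    img = np.hstack([np.zeros((64, 32)), 100.0 * np.ones((64, 32))]) + rng.normal(0, 10, (64, 64))
    smoothed = anisotropic_diffusion(img)
    print("noise std before/after:",
          np.round(img[:, :20].std(), 1), np.round(smoothed[:, :20].std(), 1))
    ```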

  20. Predicting Likelihood of Surgery Prior to First Visit in Patients with Back and Lower Extremity Symptoms: A simple mathematical model based on over 8000 patients.

    PubMed

    Boden, Lauren M; Boden, Stephanie A; Premkumar, Ajay; Gottschalk, Michael B; Boden, Scott D

    2018-02-09

    Retrospective analysis of prospectively collected data. To create a data-driven triage system stratifying patients by likelihood of undergoing spinal surgery within one year of presentation. Low back pain (LBP) and radicular lower extremity (LE) symptoms are common musculoskeletal problems. There is currently no standard data-derived triage process based on information that can be obtained prior to the initial physician-patient encounter to direct patients to the optimal physician type. We analyzed patient-reported data from 8006 patients with a chief complaint of LBP and/or LE radicular symptoms who presented to surgeons at a large multidisciplinary spine center between September 1, 2005 and June 30, 2016. Univariate and multivariate analysis identified independent risk factors for undergoing spinal surgery within one year of initial visit. A model incorporating these risk factors was created using a random sample of 80% of the total patients in our cohort, and validated on the remaining 20%. The baseline one-year surgery rate within our cohort was 39% for all patients and 42% for patients with LE symptoms. Those identified as high likelihood by the center's existing triage process had a surgery rate of 45%. The new triage scoring system proposed in this study was able to identify a high likelihood group in which 58% underwent surgery, which is a 46% higher surgery rate than in non-triaged patients and a 29% improvement from our institution's existing triage system. The data-driven triage model and scoring system derived and validated in this study (Spine Surgery Likelihood model [SSL-11]) significantly improved existing processes in predicting the likelihood of undergoing spinal surgery within one year of initial presentation. This triage system will allow centers to more selectively screen for surgical candidates and more effectively direct patients to surgeons or non-operative spine specialists. Level of Evidence: 4.

  1. Models and analysis for multivariate failure time data

    NASA Astrophysics Data System (ADS)

    Shih, Joanna Huang

    The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood. At stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood. At stage 2, we estimate the dependency structure by fixing the margins at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood. It is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte-Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness of fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer-generated data.
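
    The two-stage idea can be sketched for the simplest case of uncensored bivariate data with exponential margins and a Clayton copula: fit the margins first, then maximize the copula likelihood with the margins held fixed. Everything below (the copula family, the margins, the simulated data, and the absence of censoring) is an assumption made to keep the example short; real failure-time data would additionally require handling censoring.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def clayton_logdensity(u, v, theta):
        """Log-density of the Clayton copula (theta > 0)."""
        return (np.log(1.0 + theta)
                - (theta + 1.0) * (np.log(u) + np.log(v))
                - (2.0 + 1.0 / theta) * np.log(u**-theta + v**-theta - 1.0))

    def two_stage_clayton(t1, t2):
        """Stage 1: fit exponential margins by maximum likelihood (rate = 1/mean).
        Stage 2: with margins fixed, maximize the copula likelihood over theta."""
        u = np.clip(1.0 - np.exp(-t1 / t1.mean()), 1e-6, 1 - 1e-6)
        v = np.clip(1.0 - np.exp(-t2 / t2.mean()), 1e-6, 1 - 1e-6)
        res = minimize_scalar(lambda th: -clayton_logdensity(u, v, th).sum(),
                              bounds=(1e-3, 20.0), method="bounded")
        return res.x

    # Hypothetical correlated failure times generated from a Clayton copula (theta = 2).
    rng = np.random.default_rng(5)
    n, theta_true = 2000, 2.0
    u = rng.random(n)
    w = rng.random(n)
    v = ((w ** (-theta_true / (1.0 + theta_true)) - 1.0) * u ** -theta_true + 1.0) ** (-1.0 / theta_true)
    t1, t2 = -np.log(1 - u) / 0.5, -np.log(1 - v) / 1.5   # exponential margins
    print("estimated Clayton theta:", round(two_stage_clayton(t1, t2), 2))
    ```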

  2. [INVITED] Luminescent QR codes for smart labelling and sensing

    NASA Astrophysics Data System (ADS)

    Ramalho, João F. C. B.; António, L. C. F.; Correia, S. F. H.; Fu, L. S.; Pinho, A. S.; Brites, C. D. S.; Carlos, L. D.; André, P. S.; Ferreira, R. A. S.

    2018-05-01

    QR (Quick Response) codes are two-dimensional barcodes composed of special geometric patterns of black modules in a white square background that can encode different types of information with high density and robustness, and can correct errors and tolerate physical damage, thus keeping the stored information protected. Recently, these codes have gained increased attention as they offer a simple physical tool for quick access to Web sites for advertising and social interaction. Challenges encompass the increase of the storage capacity limit, even though they can store approximately 350 times more information than common barcodes, and encode different types of characters (e.g., numeric, alphanumeric, kanji and kana). In this work, we fabricate luminescent QR codes based on a poly(methyl methacrylate) substrate coated with organic-inorganic hybrid materials doped with trivalent terbium (Tb3+) and europium (Eu3+) ions, demonstrating an increase of storage capacity per unit area by a factor of two through colour multiplexing, when compared to conventional QR codes. A novel methodology to decode the multiplexed QR codes is developed based on a colour separation threshold where a decision level is calculated through a maximum-likelihood criterion to minimize the error probability of the demultiplexed modules, maximizing the foreseen total storage capacity. Moreover, the thermal dependence of the emission colour coordinates of the Eu3+/Tb3+-based hybrids enables simultaneous QR code colour multiplexing and may be used to sense temperature (reproducibility higher than 93%), opening new fields of applications for QR codes as smart labels for sensing.

  3. The effects of socioeconomic status and indices of physical environment on reduced birth weight and preterm births in Eastern Massachusetts

    PubMed Central

    Zeka, Ariana; Melly, Steve J; Schwartz, Joel

    2008-01-01

    Background Air pollution and social characteristics have been shown to affect indicators of health. While use of spatial methods to estimate exposure to air pollution has increased the power to detect effects, questions have been raised about potential for confounding by social factors. Methods A study of singleton births in Eastern Massachusetts was conducted between 1996 and 2002 to examine the association between indicators of traffic, land use, individual and area-based socioeconomic measures (SEM), and birth outcomes (birth weight, small for gestational age and preterm births), in a two-level hierarchical model. Results We found effects of both individual (education, race, prenatal care index) and area-based (median household income) SEM with all birth outcomes. The associations for traffic and land use variables were mainly seen with birth weight, with an exception for an effect of cumulative traffic density on small for gestational age. Race/ethnicity of mother was an important predictor of birth outcomes and a strong confounder for both area-based SEM and indices of physical environment. The effects of traffic and land use differed by level of education and median household income. Conclusion Overall, the findings of the study suggested greater likelihood of reduced birth weight and preterm births among the more socially disadvantaged, and a greater risk of reduced birth weight associated with traffic exposures. Results revealed the importance of controlling simultaneously for SEM and environmental exposures as the way to better understand determinants of health. PMID:19032747

  4. Prediction-Oriented Marker Selection (PROMISE): With Application to High-Dimensional Regression.

    PubMed

    Kim, Soyeon; Baladandayuthapani, Veerabhadran; Lee, J Jack

    2017-06-01

    In personalized medicine, biomarkers are used to select therapies with the highest likelihood of success based on an individual patient's biomarker/genomic profile. Two goals are to choose important biomarkers that accurately predict treatment outcomes and to cull unimportant biomarkers to reduce the cost of biological and clinical verifications. These goals are challenging due to the high dimensionality of genomic data. Variable selection methods based on penalized regression (e.g., the lasso and elastic net) have yielded promising results. However, selecting the right amount of penalization is critical to simultaneously achieving these two goals. Standard approaches based on cross-validation (CV) typically provide high prediction accuracy with high true positive rates but at the cost of too many false positives. Alternatively, stability selection (SS) controls the number of false positives, but at the cost of yielding too few true positives. To circumvent these issues, we propose prediction-oriented marker selection (PROMISE), which combines SS with CV to conflate the advantages of both methods. Our application of PROMISE with the lasso and elastic net in data analysis shows that, compared to CV, PROMISE produces sparse solutions, few false positives, and small type I + type II error, and maintains good prediction accuracy, with a marginal decrease in the true positive rates. Compared to SS, PROMISE offers better prediction accuracy and true positive rates. In summary, PROMISE can be applied in many fields to select regularization parameters when the goals are to minimize false positives and maximize prediction accuracy.
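
    A minimal sketch in the spirit of combining cross-validation with stability selection is given below: cross-validated lasso picks the penalty, and subsampling-based selection frequencies are then used to keep only stably selected markers. The synthetic data, the 0.7 frequency threshold, and the way the two ingredients are combined are illustrative assumptions, not the PROMISE procedure itself.

    ```python
    import numpy as np
    from sklearn.linear_model import LassoCV, Lasso

    rng = np.random.default_rng(6)
    n, p = 120, 200                                  # high-dimensional synthetic data
    X = rng.normal(size=(n, p))
    beta = np.zeros(p); beta[:5] = [2.0, -1.5, 1.0, -1.0, 0.8]   # 5 true markers
    y = X @ beta + rng.normal(size=n)

    # Cross-validation picks the amount of penalization for prediction accuracy.
    alpha_cv = LassoCV(cv=5, random_state=0).fit(X, y).alpha_

    # Subsampling-based selection frequencies (the stability-selection ingredient).
    n_subsamples, freq = 100, np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, n // 2, replace=False)
        model = Lasso(alpha=alpha_cv).fit(X[idx], y[idx])
        freq += model.coef_ != 0
    freq /= n_subsamples

    selected = np.where(freq >= 0.7)[0]              # keep only stably selected markers
    print("markers kept:", selected, "selection frequencies:", np.round(freq[selected], 2))
    ```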

  5. Basal jawed vertebrate phylogeny inferred from multiple nuclear DNA-coded genes

    PubMed Central

    Kikugawa, Kanae; Katoh, Kazutaka; Kuraku, Shigehiro; Sakurai, Hiroshi; Ishida, Osamu; Iwabe, Naoyuki; Miyata, Takashi

    2004-01-01

    Background Phylogenetic analyses of jawed vertebrates based on mitochondrial sequences often result in confusing inferences which are obviously inconsistent with generally accepted trees. In particular, in a hypothesis by Rasmussen and Arnason based on mitochondrial trees, cartilaginous fishes have a terminal position in a paraphyletic cluster of bony fishes. No previous analysis based on nuclear DNA-coded genes could significantly reject the mitochondrial trees of jawed vertebrates. Results We have cloned and sequenced seven nuclear DNA-coded genes from 13 vertebrate species. These sequences, together with sequences available from databases including 13 jawed vertebrates from eight major groups (cartilaginous fishes, bichir, chondrosteans, gar, bowfin, teleost fishes, lungfishes and tetrapods) and an outgroup (a cyclostome and a lancelet), have been subjected to phylogenetic analyses based on the maximum likelihood method. Conclusion Cartilaginous fishes have been inferred to be basal to other jawed vertebrates, which is consistent with the generally accepted view. The minimum log-likelihood difference between the maximum likelihood tree and trees not supporting the basal position of cartilaginous fishes is 18.3 ± 13.1. The hypothesis by Rasmussen and Arnason has been significantly rejected with the minimum log-likelihood difference of 123 ± 23.3. Our tree has also shown that living holosteans, comprising bowfin and gar, form a monophyletic group which is the sister group to teleost fishes. This is consistent with a formerly prevalent view of vertebrate classification, although inconsistent with both of the current morphology-based and mitochondrial sequence-based trees. Furthermore, the bichir has been shown to be the basal ray-finned fish. Tetrapods and lungfish have formed a monophyletic cluster in the tree inferred from the concatenated alignment, being consistent with the currently prevalent view. It also remains possible that tetrapods are more closely related to ray-finned fishes than to lungfishes. PMID:15070407

  6. Use of Bayesian Networks to Probabilistically Model and Improve the Likelihood of Validation of Microarray Findings by RT-PCR

    PubMed Central

    English, Sangeeta B.; Shih, Shou-Ching; Ramoni, Marco F.; Smith, Lois E.; Butte, Atul J.

    2014-01-01

    Though genome-wide technologies, such as microarrays, are widely used, data from these methods are considered noisy; there is still varied success in downstream biological validation. We report a method that increases the likelihood of successfully validating microarray findings using real time RT-PCR, including genes at low expression levels and with small differences. We use a Bayesian network to identify the most relevant sources of noise based on the successes and failures in validation for an initial set of selected genes, and then improve our subsequent selection of genes for validation based on eliminating these sources of noise. The network displays the significant sources of noise in an experiment, and scores the likelihood of validation for every gene. We show how the method can significantly increase validation success rates. In conclusion, in this study, we have successfully added a new automated step to determine the contributory sources of noise that determine successful or unsuccessful downstream biological validation. PMID:18790084

  7. Efficient Exploration of the Space of Reconciled Gene Trees

    PubMed Central

    Szöllősi, Gergely J.; Rosikiewicz, Wojciech; Boussau, Bastien; Tannier, Eric; Daubin, Vincent

    2013-01-01

    Gene trees record the combination of gene-level events, such as duplication, transfer and loss (DTL), and species-level events, such as speciation and extinction. Gene tree–species tree reconciliation methods model these processes by drawing gene trees into the species tree using a series of gene and species-level events. The reconstruction of gene trees based on sequence alone almost always involves choosing between statistically equivalent or weakly distinguishable relationships that could be much better resolved based on a putative species tree. To exploit this potential for accurate reconstruction of gene trees, the space of reconciled gene trees must be explored according to a joint model of sequence evolution and gene tree–species tree reconciliation. Here we present amalgamated likelihood estimation (ALE), a probabilistic approach to exhaustively explore all reconciled gene trees that can be amalgamated as a combination of clades observed in a sample of gene trees. We implement the ALE approach in the context of a reconciliation model (Szöllősi et al. 2013), which allows for the DTL of genes. We use ALE to efficiently approximate the sum of the joint likelihood over amalgamations and to find the reconciled gene tree that maximizes the joint likelihood among all such trees. We demonstrate using simulations that gene trees reconstructed using the joint likelihood are substantially more accurate than those reconstructed using sequence alone. Using realistic gene tree topologies, branch lengths, and alignment sizes, we demonstrate that ALE produces more accurate gene trees even if the model of sequence evolution is greatly simplified. Finally, examining 1099 gene families from 36 cyanobacterial genomes we find that joint likelihood-based inference results in a striking reduction in apparent phylogenetic discord, with 24%, 59%, and 46% reductions in the mean numbers of duplications, transfers, and losses per gene family, respectively. The open source implementation of ALE is available from https://github.com/ssolo/ALE.git. [amalgamation; gene tree reconciliation; gene tree reconstruction; lateral gene transfer; phylogeny.] PMID:23925510

  8. Obstetrician gender and the likelihood of performing a maternal request for a cesarean delivery.

    PubMed

    Liu, Tsai-Ching; Lin, Herng-Ching; Chen, Chin-Shyan; Lee, Hsin-Chien

    2008-01-01

    To examine the relationship between obstetrician gender and the likelihood of maternal request for cesarean section (CS) within different healthcare institutions (medical centers, regional hospitals, district hospitals, and obstetric and gynecology clinics). Five years of population-based data from Taiwan covering 857,920 singleton deliveries without a clinical indication for a CS were subjected to a multiple logistic regression to examine the association between obstetrician gender and the likelihood of maternal request for a CS. After adjusting for physician and institutional characteristics, it was found that male obstetricians were more likely to perform a requested CS than female obstetricians in district hospitals (OR=1.53) and clinics (OR=2.26), while obstetrician gender had no discernible associations with the likelihood of a CS upon maternal request in medical centers and regional hospitals. While obstetrician gender had the greatest association with delivery mode decisions in the lowest obstetric care units, those associations were diluted in higher-level healthcare institutions.

  9. Discriminative functions and over-training as class-enhancing determinants of meaningful stimuli.

    PubMed

    Travis, Robert W; Fields, Lanny; Arntzen, Erik

    2014-07-01

    Likelihood of equivalence class formation (yield) was influenced by pre-class formation of simultaneous and successive discriminations, their mastery criteria, and overtraining of the successive discriminations. Each undergraduate in seven groups attempted to form two 3-node, 5-member equivalence classes (ABCDE). In the pictorial (PIC) group, meaningless nonsense syllables were used as the A, B, D, and E stimuli and meaningful pictures as the C stimuli. Nonsense syllables only were used in the other groups. The abstract (ABS) or 0-0-0 group involved no pre-class training. In the 84-0-0, 84-5-0 and 84-20-0 groups, simultaneous discriminations were trained among C stimuli to a mastery criterion of 84 trials, followed by successive discriminations trained to mastery criteria of 0, 5, and 20 trials, respectively. In the 84-20-0, 84-20-100, and 84-20-500 groups, simultaneous and successive discriminations were trained as noted, followed by overtraining with 0, 100, 500 successive-discrimination trials, respectively. The ABS group produced a 6% yield with the 84-0-0, 84-5-0, and 84-20-0 groups producing further modest increments. Overtraining produced a linear increase in yield, reaching 85% after 500 overtraining trials, a yield matching that produced by classes containing pictures as C stimuli (PIC). Thus, acquired discriminative functions and the overtraining of at least one function can account for class enhancement by meaningful stimuli. © Society for the Experimental Analysis of Behavior.

  10. A unified statistical approach to non-negative matrix factorization and probabilistic latent semantic indexing

    PubMed Central

    Devarajan, Karthik; Wang, Guoli; Ebrahimi, Nader

    2014-01-01

    Non-negative matrix factorization (NMF) is a powerful machine learning method for decomposing a high-dimensional nonnegative matrix V into the product of two nonnegative matrices, W and H, such that V ∼ W H. It has been shown to have a parts-based, sparse representation of the data. NMF has been successfully applied in a variety of areas such as natural language processing, neuroscience, information retrieval, image processing, speech recognition and computational biology for the analysis and interpretation of large-scale data. There has also been simultaneous development of a related statistical latent class modeling approach, namely, probabilistic latent semantic indexing (PLSI), for analyzing and interpreting co-occurrence count data arising in natural language processing. In this paper, we present a generalized statistical approach to NMF and PLSI based on Renyi's divergence between two non-negative matrices, stemming from the Poisson likelihood. Our approach unifies various competing models and provides a unique theoretical framework for these methods. We propose a unified algorithm for NMF and provide a rigorous proof of monotonicity of multiplicative updates for W and H. In addition, we generalize the relationship between NMF and PLSI within this framework. We demonstrate the applicability and utility of our approach as well as its superior performance relative to existing methods using real-life and simulated document clustering data. PMID:25821345
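
    The Poisson-likelihood (generalized KL divergence) special case of this framework leads to the familiar multiplicative updates; a minimal sketch with a toy count matrix is given below. This is the standard update rule, not the authors' generalized Renyi-divergence algorithm.

    ```python
    import numpy as np

    def nmf_kl(V, rank, n_iter=200, eps=1e-10):
        """Multiplicative-update NMF minimizing the (generalized) KL divergence,
        i.e. maximizing the Poisson likelihood of V given W @ H."""
        n, m = V.shape
        rng = np.random.default_rng(0)
        W = rng.random((n, rank)) + eps
        H = rng.random((rank, m)) + eps
        for _ in range(n_iter):
            WH = W @ H + eps
            H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
            WH = W @ H + eps
            W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
        return W, H

    # Toy document-term counts factorized into 2 "topics".
    V = np.array([[5, 3, 0, 0],
                  [4, 4, 1, 0],
                  [0, 0, 6, 5],
                  [0, 1, 4, 6]], dtype=float)
    W, H = nmf_kl(V, rank=2)
    print("mean absolute reconstruction error:", np.round(np.abs(V - W @ H).mean(), 3))
    ```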

  11. A unified statistical approach to non-negative matrix factorization and probabilistic latent semantic indexing.

    PubMed

    Devarajan, Karthik; Wang, Guoli; Ebrahimi, Nader

    2015-04-01

    Non-negative matrix factorization (NMF) is a powerful machine learning method for decomposing a high-dimensional nonnegative matrix V into the product of two nonnegative matrices, W and H, such that V ∼ W H. It has been shown to have a parts-based, sparse representation of the data. NMF has been successfully applied in a variety of areas such as natural language processing, neuroscience, information retrieval, image processing, speech recognition and computational biology for the analysis and interpretation of large-scale data. There has also been simultaneous development of a related statistical latent class modeling approach, namely, probabilistic latent semantic indexing (PLSI), for analyzing and interpreting co-occurrence count data arising in natural language processing. In this paper, we present a generalized statistical approach to NMF and PLSI based on Renyi's divergence between two non-negative matrices, stemming from the Poisson likelihood. Our approach unifies various competing models and provides a unique theoretical framework for these methods. We propose a unified algorithm for NMF and provide a rigorous proof of monotonicity of multiplicative updates for W and H. In addition, we generalize the relationship between NMF and PLSI within this framework. We demonstrate the applicability and utility of our approach as well as its superior performance relative to existing methods using real-life and simulated document clustering data.

  12. Orbitofrontal cortical activity during repeated free choice.

    PubMed

    Campos, Michael; Koppitch, Kari; Andersen, Richard A; Shimojo, Shinsuke

    2012-06-01

    Neurons in the orbitofrontal cortex (OFC) have been shown to encode subjective values, suggesting a role in preference-based decision-making, although the precise relation to choice behavior is unclear. In a repeated two-choice task, subjective values of each choice can account for aggregate choice behavior, which is the overall likelihood of choosing one option over the other. Individual choices, however, are impossible to predict with knowledge of relative subjective values alone. In this study we investigated the role of internal factors in choice behavior with a simple but novel free-choice task and simultaneous recording from individual neurons in nonhuman primate OFC. We found that, first, the observed sequences of choice behavior included periods of exceptionally long runs of each of two available options and periods of frequent switching. Neither a satiety-based mechanism nor a random selection process could explain the observed choice behavior. Second, OFC neurons encode important features of the choice behavior. These features include activity selective for exceptionally long runs of a given choice (stay selectivity) as well as activity selective for switches between choices (switch selectivity). These results suggest that OFC neural activity, in addition to encoding subjective values on a long timescale that is sensitive to satiety, also encodes a signal that fluctuates on a shorter timescale and thereby reflects some of the statistically improbable aspects of free-choice behavior.

  13. Orbitofrontal cortical activity during repeated free choice

    PubMed Central

    Koppitch, Kari; Andersen, Richard A.; Shimojo, Shinsuke

    2012-01-01

    Neurons in the orbitofrontal cortex (OFC) have been shown to encode subjective values, suggesting a role in preference-based decision-making, although the precise relation to choice behavior is unclear. In a repeated two-choice task, subjective values of each choice can account for aggregate choice behavior, which is the overall likelihood of choosing one option over the other. Individual choices, however, are impossible to predict with knowledge of relative subjective values alone. In this study we investigated the role of internal factors in choice behavior with a simple but novel free-choice task and simultaneous recording from individual neurons in nonhuman primate OFC. We found that, first, the observed sequences of choice behavior included periods of exceptionally long runs of each of two available options and periods of frequent switching. Neither a satiety-based mechanism nor a random selection process could explain the observed choice behavior. Second, OFC neurons encode important features of the choice behavior. These features include activity selective for exceptionally long runs of a given choice (stay selectivity) as well as activity selective for switches between choices (switch selectivity). These results suggest that OFC neural activity, in addition to encoding subjective values on a long timescale that is sensitive to satiety, also encodes a signal that fluctuates on a shorter timescale and thereby reflects some of the statistically improbable aspects of free-choice behavior. PMID:22423007

  14. Landslide Failure Likelihoods Estimated Through Analysis of Suspended Sediment and Streamflow Time Series Data

    NASA Astrophysics Data System (ADS)

    Stark, C. P.; Rudd, S.; Lall, U.; Hovius, N.; Dadson, S.; Chen, M.-C.

    Off-Axis DOAS measurements with natural scattered light, based upon the renowned DOAS technique, make it possible to optimize the sensitivity of the technique for the trace gas profile in question by strongly increasing the light's path through the relevant atmospheric layers. Multi-Axis (MAX) DOAS instruments probe several directions simultaneously or sequentially to increase the spatial resolution. Several devices (ground-based, airborne, and ship-borne) are operated by our group in the framework of the SCIAMACHY validation. Radiative transfer models are an essential requirement for the interpretation of these measurements and their conversion into detailed profile data. Apart from some existing Monte Carlo models, most codes use analytical algorithms to solve the radiative transfer equation for given atmospheric conditions. For specific circumstances, e.g. photon scattering within clouds, these approaches are not efficient enough to provide sufficient accuracy. Horizontal gradients in atmospheric parameters also have to be taken into account. To meet the needs of measurement situations for all kinds of scattered-light DOAS platforms, a three-dimensional fully spherical Monte Carlo model was devised. Here we present Air Mass Factors (AMF) to calculate vertical column densities (VCD) from measured slant column densities (SCD). Sensitivity studies on the influence of the wavelength and telescope direction used, of the altitude of profile layers, albedo, refraction, and basic aerosols are shown. Modelled intensity series are also compared with radiometer data.

  15. Adaptive control of center of mass (global) motion and its joint (local) origin in gait.

    PubMed

    Yang, Feng; Pai, Yi-Chung

    2014-08-22

    Dynamic gait stability can be quantified by the relationship of the motion state (i.e., the position and velocity) between the body center of mass (COM) and its base of support (BOS). Humans learn to adaptively control stability by regulating the absolute COM motion state (i.e., its position and velocity) and/or by controlling the BOS (through stepping) in a predictable manner, or by doing both simultaneously, following an external perturbation that disrupts their regular relationship. After repeated-slip perturbation training, for instance, older adults learned to shift their COM position forward while walking with a reduced step length, hence reducing their likelihood of slip-induced falls. How and to what extent each individual joint influences such adaptive alterations is largely unknown. A three-dimensional individualized human kinematic model was established. Based on the human model, sensitivity analysis was used to systematically quantify the influence of each lower limb joint on the COM position relative to the BOS and on the step length during gait. It was found that the leading foot had the greatest effect on regulating the COM position relative to the BOS, and that both hips had the most influence on the step length. These findings could guide cost-effective yet efficient fall-reduction training paradigms for the older population. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Maximum likelihood estimation of signal detection model parameters for the assessment of two-stage diagnostic strategies.

    PubMed

    Lirio, R B; Dondériz, I C; Pérez Abalo, M C

    1992-08-01

    The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.

  17. Appendix 2: Risk-based framework and risk case studies. Risk assessment for forested habitats in northern Wisconsin.

    Treesearch

    Louis R. Iverson; Stephen N. Matthews; Anantha M. Prasad; Matthew P. Peters; Gary W. Yohe

    2012-01-01

    We used a risk matrix to assess risk from climate change for multiple forest species by discussing an example that depicts a range of risk for three tree species of northern Wisconsin. Risk is defined here as the product of the likelihood of an event occurring and the consequences or effects of that event. In the context of species habitats, likelihood is related to...

  18. Effects of Two Versions of an Empathy-Based Rape Prevention Program on Fraternity Men's Survivor Empathy, Attitudes, and Behavioral Intent to Commit Rape or Sexual Assault

    ERIC Educational Resources Information Center

    Foubert, John D.; Newberry, Johnathan T.

    2006-01-01

    Fraternity men (N = 261) at a small to midsized public university saw one of two versions of a rape prevention program or were in a control group. Program participants reported significant increases in empathy toward rape survivors and significant declines in rape myth acceptance, likelihood of raping, and likelihood of committing sexual assault.…

  19. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures

    PubMed Central

    Theobald, Douglas L.; Wuttke, Deborah S.

    2008-01-01

    Summary THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. PMID:16777907

  20. Attitude determination and calibration using a recursive maximum likelihood-based adaptive Kalman filter

    NASA Technical Reports Server (NTRS)

    Kelly, D. A.; Fermelia, A.; Lee, G. K. F.

    1990-01-01

    An adaptive Kalman filter design that utilizes recursive maximum likelihood parameter identification is discussed. At the center of this design is the Kalman filter itself, which has the responsibility for attitude determination. At the same time, the identification algorithm is continually identifying the system parameters. The approach is applicable to nonlinear as well as linear systems. This adaptive Kalman filter design has much potential for real-time implementation, especially considering the fast clock speeds, cache memory, and internal RAM available today. The recursive maximum likelihood algorithm is discussed in detail, with special attention directed towards its unique matrix formulation. The procedure for using the algorithm is described along with comments on how this algorithm interacts with the Kalman filter.
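
    As an illustration of the filtering component described above, the sketch below implements one predict/update cycle of a standard linear Kalman filter; the recursive maximum likelihood parameter identification layer is omitted, and all matrices, names, and dimensions are illustrative assumptions rather than the paper's formulation.

      import numpy as np

      def kalman_step(x, P, z, F, H, Q, R):
          """One predict/update cycle of a linear Kalman filter.
          x, P: prior state estimate and covariance; z: new measurement;
          F, H: state-transition and measurement matrices;
          Q, R: process and measurement noise covariances."""
          # Predict
          x_pred = F @ x
          P_pred = F @ P @ F.T + Q
          # Update
          y = z - H @ x_pred                       # innovation
          S = H @ P_pred @ H.T + R                 # innovation covariance
          K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
          x_new = x_pred + K @ y
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new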

  1. Mapping Quantitative Trait Loci in Crosses between Outbred Lines Using Least Squares

    PubMed Central

    Haley, C. S.; Knott, S. A.; Elsen, J. M.

    1994-01-01

    The use of genetic maps based upon molecular markers has allowed the dissection of some of the factors underlying quantitative variation in crosses between inbred lines. For many species crossing inbred lines is not a practical proposition, although crosses between genetically very different outbred lines are possible. Here we develop a least squares method for the analysis of crosses between outbred lines which simultaneously uses information from multiple linked markers. The method is suitable for crosses where the lines may be segregating at marker loci but can be assumed to be fixed for alternative alleles at the major quantitative trait loci (QTLs) affecting the traits under analysis (e.g., crosses between divergent selection lines or breeds with different selection histories). The simultaneous use of multiple markers from a linkage group increases the sensitivity of the test statistic, and thus the power for the detection of QTLs, compared to the use of single markers or markers flanking an interval. The gain is greater for more closely spaced markers and for markers of lower information content. Use of multiple markers can also remove the bias in the estimated position and effect of a QTL which may result when different markers in a linkage group vary in their heterozygosity in the F(1) (and thus in their information content) and are considered only singly or a pair at a time. The method is relatively simple to apply so that more complex models can be fitted than is currently possible by maximum likelihood. Thus fixed effects and effects of background genotype can be fitted simultaneously with the exploration of a single linkage group which will increase the power to detect QTLs by reducing the residual variance. More complex models with several QTLs in the same linkage group and two-locus interactions between QTLs can similarly be examined. Thus least squares provides a powerful tool to extend the range of crosses from which QTLs can be dissected whilst at the same time allowing flexible and realistic models to be explored. PMID:8005424

  2. Lineup composition, suspect position, and the sequential lineup advantage.

    PubMed

    Carlson, Curt A; Gronlund, Scott D; Clark, Steven E

    2008-06-01

    N. M. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001) argued that sequential lineups reduce the likelihood of mistaken eyewitness identification. Experiment 1 replicated the design of R. C. L. Lindsay and G. L. Wells (1985), the first study to show the sequential lineup advantage. However, the innocent suspect was chosen at a lower rate in the simultaneous lineup, and no sequential lineup advantage was found. This led the authors to hypothesize that protection from a sequential lineup might emerge only when an innocent suspect stands out from the other lineup members. In Experiment 2, participants viewed a simultaneous or sequential lineup with either the guilty suspect or 1 of 3 innocent suspects. Lineup fairness was varied to influence the degree to which a suspect stood out. A sequential lineup advantage was found only for the unfair lineups. Additional analyses of suspect position in the sequential lineups showed an increase in the diagnosticity of suspect identifications as the suspect was placed later in the sequential lineup. These results suggest that the sequential lineup advantage is dependent on lineup composition and suspect position. (c) 2008 APA, all rights reserved

  3. Statistical power calculations for mixed pharmacokinetic study designs using a population approach.

    PubMed

    Kloprogge, Frank; Simpson, Julie A; Day, Nicholas P J; White, Nicholas J; Tarning, Joel

    2014-09-01

    Simultaneous modelling of dense and sparse pharmacokinetic data is possible with a population approach. To determine the number of individuals required to detect the effect of a covariate, simulation-based power calculation methodologies can be employed. The Monte Carlo Mapped Power method (a simulation-based power calculation methodology using the likelihood ratio test) was extended in the current study to perform sample size calculations for mixed pharmacokinetic studies (i.e. both sparse and dense data collection). A workflow guiding an easy and straightforward pharmacokinetic study design, considering also the cost-effectiveness of alternative study designs, was used in this analysis. Initially, data were simulated for a hypothetical drug and then for the anti-malarial drug, dihydroartemisinin. Two datasets (sampling design A: dense; sampling design B: sparse) were simulated using a pharmacokinetic model that included a binary covariate effect and subsequently re-estimated using (1) the same model and (2) a model not including the covariate effect in NONMEM 7.2. Power calculations were performed for varying numbers of patients with sampling designs A and B. Study designs with statistical power >80% were selected and further evaluated for cost-effectiveness. The simulation studies of the hypothetical drug and the anti-malarial drug dihydroartemisinin demonstrated that the simulation-based power calculation methodology, based on the Monte Carlo Mapped Power method, can be utilised to evaluate and determine the sample size of mixed (part sparsely and part densely sampled) study designs. The developed method can contribute to the design of robust and efficient pharmacokinetic studies.
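
    The simulation-based power logic described above can be sketched in a few lines: simulate many datasets under the model that contains the covariate effect, fit both the full and the reduced model to each, and take power as the fraction of likelihood ratio tests exceeding the critical value. The fitting callables below are placeholders standing in for the NONMEM model fits, and all settings are illustrative assumptions.

      from scipy import stats

      def lrt_power(simulate, fit_full, fit_reduced, n_subjects,
                    n_sim=500, df=1, alpha=0.05):
          """Estimate the power to detect a covariate effect with a likelihood ratio test.
          simulate(n) returns a dataset; fit_full/fit_reduced return the maximized
          log-likelihood of the models with and without the covariate effect."""
          crit = stats.chi2.ppf(1 - alpha, df)
          hits = 0
          for _ in range(n_sim):
              data = simulate(n_subjects)
              lr = 2.0 * (fit_full(data) - fit_reduced(data))  # LR test statistic
              hits += lr > crit
          return hits / n_sim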

  4. Drug safety data mining with a tree-based scan statistic.

    PubMed

    Kulldorff, Martin; Dashevsky, Inna; Avery, Taliser R; Chan, Arnold K; Davis, Robert L; Graham, David; Platt, Richard; Andrade, Susan E; Boudreau, Denise; Gunter, Margaret J; Herrinton, Lisa J; Pawloski, Pamala A; Raebel, Marsha A; Roblin, Douglas; Brown, Jeffrey S

    2013-05-01

    In post-marketing drug safety surveillance, data mining can potentially detect rare but serious adverse events. Assessing an entire collection of drug-event pairs is traditionally performed on a predefined level of granularity. It is unknown a priori whether a drug causes a very specific or a set of related adverse events, such as mitral valve disorders, all valve disorders, or different types of heart disease. This methodological paper evaluates the tree-based scan statistic data mining method to enhance drug safety surveillance. We use a three-million-member electronic health records database from the HMO Research Network. Using the tree-based scan statistic, we assess the safety of selected antifungal and diabetes drugs, simultaneously evaluating overlapping diagnosis groups at different granularity levels, adjusting for multiple testing. Expected and observed adverse event counts were adjusted for age, sex, and health plan, producing a log likelihood ratio test statistic. Out of 732 evaluated disease groupings, 24 were statistically significant, divided among 10 non-overlapping disease categories. Five of the 10 signals are known adverse effects, four are likely due to confounding by indication, while one may warrant further investigation. The tree-based scan statistic can be successfully applied as a data mining tool in drug safety surveillance using observational data. The total number of statistical signals was modest and does not imply a causal relationship. Rather, data mining results should be used to generate candidate drug-event pairs for rigorous epidemiological studies to evaluate the individual and comparative safety profiles of drugs. Copyright © 2013 John Wiley & Sons, Ltd.
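
    For a single node of the diagnosis tree, the log likelihood ratio comparing observed with expected adverse-event counts has a simple closed form. The sketch below uses the conditional Poisson form commonly associated with scan statistics; it is an illustrative simplification of the adjusted statistic described in the abstract, and the variable names are assumptions.

      import math

      def tree_scan_llr(c, n, C):
          """Log likelihood ratio for one node under a conditional Poisson model.
          c: observed events in the node; n: expected events in the node under the null;
          C: total observed events (total expected scaled to equal C).
          Returns 0 when the node shows no excess (c <= n)."""
          if c <= n or c == 0:
              return 0.0
          inside = c * math.log(c / n)
          outside = (C - c) * math.log((C - c) / (C - n)) if C > c else 0.0
          return inside + outside

      # Illustrative values: 12 observed vs 5 expected events out of 400 total.
      llr = tree_scan_llr(c=12, n=5.0, C=400)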

  5. Gaussian copula as a likelihood function for environmental models

    NASA Astrophysics Data System (ADS)

    Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.

    2017-12-01

    Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, defined as the probability of the observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error-generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods currently in use employ Gaussian processes as a likelihood function because of their favourable analytical properties. A Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data, which are typically more uncertain in high flows than in periods with low flows. A problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. (1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. (2) Based on the results from a didactic example of predicting rainfall runoff, we demonstrate that the copula captures the predictive uncertainty of the model. (3) Finally, we find that the properties of autocorrelation and heteroscedasticity of the errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an interesting departure from the use of fully parametric distributions as likelihood functions, and that they could help us to better capture the statistical properties of errors and make more reliable predictions.
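
    A minimal sketch of how a Gaussian copula can serve as a likelihood function is given below: the residual errors are mapped through a marginal error distribution to normal scores, and a multivariate normal density with a copula correlation matrix ties them together. The use of a single parametric marginal here is an illustrative assumption; the approach described above learns a semiparametric marginal from past forecast errors.

      import numpy as np
      from scipy import stats

      def gaussian_copula_loglik(errors, marginal, R):
          """Log-likelihood of a residual-error vector under a Gaussian copula.
          errors: model residuals for one series; marginal: a frozen scipy.stats
          distribution for the marginal errors; R: copula correlation matrix."""
          u = np.clip(marginal.cdf(errors), 1e-12, 1 - 1e-12)  # probability integral transform
          z = stats.norm.ppf(u)                                # normal scores
          copula = stats.multivariate_normal(np.zeros(len(z)), R).logpdf(z)
          indep = stats.norm.logpdf(z).sum()                   # remove independent-normal part
          marg = marginal.logpdf(errors).sum()                 # marginal densities
          return copula - indep + marg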

  6. Likelihood ratio-based differentiation of nodular Hashimoto thyroiditis and papillary thyroid carcinoma in patients with sonographically evident diffuse Hashimoto thyroiditis: preliminary study.

    PubMed

    Wang, Liang; Xia, Yu; Jiang, Yu-Xin; Dai, Qing; Li, Xiao-Yi

    2012-11-01

    To assess the efficacy of sonography for discriminating nodular Hashimoto thyroiditis from papillary thyroid carcinoma in patients with sonographically evident diffuse Hashimoto thyroiditis. This study included 20 patients with 24 surgically confirmed Hashimoto thyroiditis nodules and 40 patients with 40 papillary thyroid carcinoma nodules; all had sonographically evident diffuse Hashimoto thyroiditis. A retrospective review of the sonograms was performed, and significant benign and malignant sonographic features were selected by univariate and multivariate analyses. The combined likelihood ratio was calculated as the product of each feature's likelihood ratio for papillary thyroid carcinoma. We compared the abilities of the original sonographic features and combined likelihood ratios in diagnosing nodular Hashimoto thyroiditis and papillary thyroid carcinoma by their sensitivity, specificity, and Youden index. The diagnostic capabilities of the sonographic features varied greatly, with Youden indices ranging from 0.175 to 0.700. Compared with single features, combinations of features were unable to improve the Youden indices effectively because the sensitivity and specificity usually changed in opposite directions. For combined likelihood ratios, however, the sensitivity improved greatly without an obvious reduction in specificity, which resulted in the maximum Youden index (0.825). With a combined likelihood ratio greater than 7.00 as the diagnostic criterion for papillary thyroid carcinoma, sensitivity reached 82.5%, whereas specificity remained at 100.0%. With a combined likelihood ratio less than 1.00 for nodular Hashimoto thyroiditis, sensitivity and specificity were 90.0% and 92.5%, respectively. Several sonographic features of nodular Hashimoto thyroiditis and papillary thyroid carcinoma in a background of diffuse Hashimoto thyroiditis were significantly different. The combined likelihood ratio may be superior to original sonographic features for discrimination of nodular Hashimoto thyroiditis from papillary thyroid carcinoma; therefore, it is a promising risk index for thyroid nodules and warrants further investigation.
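
    The combination rule described above (the combined likelihood ratio as the product of each feature's likelihood ratio) and its conversion to a post-test probability can be sketched directly; the pretest probability and per-feature ratios below are illustrative numbers, not values from the study.

      def posttest_probability(pretest_prob, feature_lrs):
          """Multiply per-feature likelihood ratios into a combined LR (assuming the
          features contribute independently) and convert the pretest probability to a
          posttest probability via odds."""
          combined_lr = 1.0
          for lr in feature_lrs:
              combined_lr *= lr
          pre_odds = pretest_prob / (1.0 - pretest_prob)
          post_odds = pre_odds * combined_lr
          return combined_lr, post_odds / (1.0 + post_odds)

      # Illustrative only: three suspicious features with LRs 2.5, 1.8, and 3.0.
      combined_lr, p_ptc = posttest_probability(0.30, [2.5, 1.8, 3.0])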

  7. Silence That Can Be Dangerous: A Vignette Study to Assess Healthcare Professionals’ Likelihood of Speaking up about Safety Concerns

    PubMed Central

    Schwappach, David L. B.; Gehring, Katrin

    2014-01-01

    Purpose To investigate the likelihood of speaking up about patient safety in oncology and to clarify the effect of clinical and situational context factors on the likelihood of voicing concerns. Patients and Methods 1013 nurses and doctors in oncology rated four clinical vignettes describing coworkers’ errors and rule violations in a self-administered factorial survey (65% response rate). Multiple regression analysis was used to model the likelihood of speaking up as an outcome of vignette attributes, responders' evaluations of the situation, and personal characteristics. Results Respondents reported a high likelihood of speaking up about patient safety, but the variation between and within types of errors and rule violations was substantial. Staff without a managerial function reported significantly higher levels of decision difficulty and discomfort with speaking up. Based on the information presented in the vignettes, 74%−96% would speak up towards a supervisor failing to check a prescription, 45%−81% would point a coworker to a missed hand disinfection, 82%−94% would speak up towards nurses who violate a safety rule in medication preparation, and 59%−92% would question a doctor violating a safety rule in lumbar puncture. Several vignette attributes predicted the likelihood of speaking up. Perceived potential harm, anticipated discomfort, and decision difficulty were significant predictors of the likelihood of speaking up. Conclusions Clinicians’ willingness to speak up about patient safety is considerably affected by contextual factors. Physicians and nurses without a managerial function report substantial discomfort with speaking up. Oncology departments should provide staff with clear guidance and training on when and how to voice safety concerns. PMID:25116338

  8. Nature or nurture: the effect of undergraduate rural clinical rotations on pre-existent rural career choice likelihood as measured by the SOMERS Index.

    PubMed

    Somers, George T; Spencer, Ryan J

    2012-04-01

    Do undergraduate rural clinical rotations increase the likelihood that medical students will choose a rural career once pre-existent likelihood is accounted for? A prospective, controlled quasi-experiment using self-paired scores on the SOMERS Index of rural career choice likelihood, before and after 3 years of clinical rotations in either mainly rural or mainly urban locations. Monash University medical school, Australia. Fifty-eight undergraduate-entry medical students (35% of the 2002 entry class). The SOMERS Index of rural career choice likelihood and its component indicators. There was an overall decline in SOMERS Index score (22%) and in each of its components (12-41%). Graduating students who attended rural rotations were more likely to choose a rural career on graduation (difference in SOMERS score: 24.1 (95% CI, 15.0-33.3) P<0.0001); however, at entry, students choosing rural rotations had an even greater SOMERS score (difference: 27.1 (95% CI, 18.2-36.1) P<0.0001). Self-paired pre-post reductions in likelihood were not affected by attending mainly rural or urban rotations, nor were there differences based on rural background alone or sex. While rural rotations are an important component of undergraduate medical training, it is the nature of the students choosing to study in rural locations rather than experiences during the course that is the greater influence on rural career choice. In order to address the rural medical workforce crisis, medical schools should attract more students with a pre-existent likelihood of choosing a rural career. The SOMERS Index was found to be a useful tool for this quantitative analysis. © 2012 The Authors. Australian Journal of Rural Health © 2012 National Rural Health Alliance Inc.

  9. Genealogical Working Distributions for Bayesian Model Testing with Phylogenetic Uncertainty

    PubMed Central

    Baele, Guy; Lemey, Philippe; Suchard, Marc A.

    2016-01-01

    Marginal likelihood estimates to compare models using Bayes factors frequently accompany Bayesian phylogenetic inference. Approaches to estimate marginal likelihoods have garnered increased attention over the past decade. In particular, the introduction of path sampling (PS) and stepping-stone sampling (SS) into Bayesian phylogenetics has tremendously improved the accuracy of model selection. These sampling techniques are now used to evaluate complex evolutionary and population genetic models on empirical data sets, but considerable computational demands hamper their widespread adoption. Further, when very diffuse, but proper priors are specified for model parameters, numerical issues complicate the exploration of the priors, a necessary step in marginal likelihood estimation using PS or SS. To avoid such instabilities, generalized SS (GSS) has recently been proposed, introducing the concept of “working distributions” to facilitate—or shorten—the integration process that underlies marginal likelihood estimation. However, the need to fix the tree topology currently limits GSS in a coalescent-based framework. Here, we extend GSS by relaxing the fixed underlying tree topology assumption. To this purpose, we introduce a “working” distribution on the space of genealogies, which enables estimating marginal likelihoods while accommodating phylogenetic uncertainty. We propose two different “working” distributions that help GSS to outperform PS and SS in terms of accuracy when comparing demographic and evolutionary models applied to synthetic data and real-world examples. Further, we show that the use of very diffuse priors can lead to a considerable overestimation in marginal likelihood when using PS and SS, while still retrieving the correct marginal likelihood using both GSS approaches. The methods used in this article are available in BEAST, a powerful user-friendly software package to perform Bayesian evolutionary analyses. PMID:26526428

  10. An unsupervised video foreground co-localization and segmentation process by incorporating motion cues and frame features

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Zhang, Qian; Zheng, Chi; Qiu, Guoping

    2018-04-01

    Video foreground segmentation is one of the key problems in video processing. In this paper, we propose a novel and fully unsupervised approach for foreground object co-localization and segmentation of unconstrained videos. We first compute both the actual edges and the motion boundaries of the video frames and then align them by their HOG feature maps. Then, by filling the occlusions generated by the aligned edges, we obtain more precise masks of the foreground object. Such motion-based masks can be used to derive a motion-based likelihood. Moreover, a color-based likelihood is adopted for the segmentation process. Experimental results show that our approach outperforms most state-of-the-art algorithms.

  11. Factors Associated With the Likelihood of Hospitalization Following Emergency Department Visits for Behavioral Health Conditions.

    PubMed

    Hamilton, Jane E; Desai, Pratikkumar V; Hoot, Nathan R; Gearing, Robin E; Jeong, Shin; Meyer, Thomas D; Soares, Jair C; Begley, Charles E

    2016-11-01

    Behavioral health-related emergency department (ED) visits have been linked with ED overcrowding, an increased demand on limited resources, and a longer length of stay (LOS) due in part to patients being admitted to the hospital but waiting for an inpatient bed. This study examines factors associated with the likelihood of hospital admission for ED patients with behavioral health conditions at 16 hospital-based EDs in a large urban area in the southern United States. Using Andersen's Behavioral Model of Health Service Use for guidance, the study examined the relationship between predisposing (characteristics of the individual, i.e., age, sex, race/ethnicity), enabling (system or structural factors affecting healthcare access), and need (clinical) factors and the likelihood of hospitalization following ED visits for behavioral health conditions (n = 28,716 ED visits). In the adjusted analysis, a logistic fixed-effects model with blockwise entry was used to estimate the relative importance of predisposing, enabling, and need variables added separately as blocks while controlling for variation in unobserved hospital-specific practices across hospitals and time in years. Significant predisposing factors associated with an increased likelihood of hospitalization following an ED visit included increasing age, while African American race was associated with a lower likelihood of hospitalization. Among enabling factors, arrival by emergency transport and a longer ED LOS were associated with a greater likelihood of hospitalization while being uninsured and the availability of community-based behavioral health services within 5 miles of the ED were associated with lower odds. Among need factors, having a discharge diagnosis of schizophrenia/psychotic spectrum disorder, an affective disorder, a personality disorder, dementia, or an impulse control disorder as well as secondary diagnoses of suicidal ideation and/or suicidal behavior increased the likelihood of hospitalization following an ED visit. The block of enabling factors was the strongest predictor of hospitalization following an ED visit compared to predisposing and need factors. Our findings also provide evidence of disparities in hospitalization of the uninsured and racial and ethnic minority patients with ED visits for behavioral health conditions. Thus, improved access to community-based behavioral health services and an increased capacity for inpatient psychiatric hospitals for treating indigent patients may be needed to improve the efficiency of ED services in our region for patients with behavioral health conditions. Among need factors, a discharge diagnosis of schizophrenia/psychotic spectrum disorder, an affective disorder, a personality disorder, an impulse control disorder, or dementia as well as secondary diagnoses of suicidal ideation and/or suicidal behavior increased the likelihood of hospitalization following an ED visit, also suggesting an opportunity for improving the efficiency of ED care through the provision of psychiatric services to stabilize and treat patients with serious mental illness. © 2016 by the Society for Academic Emergency Medicine.

  12. [Effects of attitude formation, persuasive message, and source expertise on attitude change: an examination based on the Elaboration Likelihood Model and the Attitude Formation Theory].

    PubMed

    Nakamura, M; Saito, K; Wakabayashi, M

    1990-04-01

    The purpose of this study was to investigate how attitude change is generated by the recipient's degree of attitude formation, evaluative-emotional elements contained in the persuasive messages, and source expertise as a peripheral cue in the persuasion context. Hypotheses based on the Attitude Formation Theory of Mizuhara (1982) and the Elaboration Likelihood Model of Petty and Cacioppo (1981, 1986) were examined. Eighty undergraduate students served as subjects in the experiment, the first stage of which involved manipulating the degree of attitude formation with respect to nuclear power development. Then, the experimenter presented persuasive messages with varying combinations of evaluative-emotional elements from a source with either high or low expertise on the subject. Results revealed a significant interaction effect on attitude change among attitude formation, persuasive message, and the expertise of the message source. That is, high attitude formation subjects resisted evaluative-emotional persuasion from the high expertise source, while low attitude formation subjects changed their attitude when exposed to the same persuasive message from a low expertise source. Results exceeded initial predictions based on the Attitude Formation Theory and the Elaboration Likelihood Model.

  13. Likelihood Ratios for Glaucoma Diagnosis Using Spectral Domain Optical Coherence Tomography

    PubMed Central

    Lisboa, Renato; Mansouri, Kaweh; Zangwill, Linda M.; Weinreb, Robert N.; Medeiros, Felipe A.

    2014-01-01

    Purpose To present a methodology for calculating likelihood ratios for glaucoma diagnosis for continuous retinal nerve fiber layer (RNFL) thickness measurements from spectral domain optical coherence tomography (spectral-domain OCT). Design Observational cohort study. Methods 262 eyes of 187 patients with glaucoma and 190 eyes of 100 control subjects were included in the study. Subjects were recruited from the Diagnostic Innovations Glaucoma Study. Eyes with preperimetric and perimetric glaucomatous damage were included in the glaucoma group. The control group was composed of healthy eyes with normal visual fields from subjects recruited from the general population. All eyes underwent RNFL imaging with Spectralis spectral-domain OCT. Likelihood ratios for glaucoma diagnosis were estimated for specific global RNFL thickness measurements using a methodology based on estimating the tangents to the Receiver Operating Characteristic (ROC) curve. Results Likelihood ratios could be determined for continuous values of average RNFL thickness. Average RNFL thickness values lower than 86 μm were associated with positive LRs, i.e., LRs greater than 1, whereas RNFL thickness values higher than 86 μm were associated with negative LRs, i.e., LRs smaller than 1. A modified Fagan nomogram was provided to assist calculation of post-test probability of disease from the calculated likelihood ratios and pretest probability of disease. Conclusion The methodology allowed calculation of likelihood ratios for specific RNFL thickness values. By avoiding arbitrary categorization of test results, it potentially allows for an improved integration of test results into diagnostic clinical decision-making. PMID:23972303
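
    One way to read the methodology above is that the likelihood ratio at a given RNFL thickness equals the ratio of the measurement's density in glaucomatous eyes to its density in healthy eyes, i.e. the slope (tangent) of the ROC curve at that value. The Gaussian marginals and the means and SDs below are illustrative assumptions, not fitted values from the study.

      from scipy import stats

      def rnfl_likelihood_ratio(thickness, mu_glaucoma, sd_glaucoma, mu_healthy, sd_healthy):
          """Likelihood ratio for glaucoma at a given average RNFL thickness, computed
          as the ratio of the two fitted densities (the slope of the ROC curve)."""
          f_disease = stats.norm.pdf(thickness, mu_glaucoma, sd_glaucoma)
          f_healthy = stats.norm.pdf(thickness, mu_healthy, sd_healthy)
          return f_disease / f_healthy

      # Illustrative values: thinner RNFL yields LR > 1, thicker yields LR < 1.
      lr = rnfl_likelihood_ratio(70.0, mu_glaucoma=70.0, sd_glaucoma=12.0,
                                 mu_healthy=95.0, sd_healthy=10.0)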

  14. Physician Bayesian updating from personal beliefs about the base rate and likelihood ratio.

    PubMed

    Rottman, Benjamin Margolin

    2017-02-01

    Whether humans can accurately make decisions in line with Bayes' rule has been one of the most important yet contentious topics in cognitive psychology. Though a number of paradigms have been used for studying Bayesian updating, rarely have subjects been allowed to use their own preexisting beliefs about the prior and the likelihood. A study is reported in which physicians judged the posttest probability of a diagnosis for a patient vignette after receiving a test result, and the physicians' posttest judgments were compared to the normative posttest calculated from their own beliefs in the sensitivity and false positive rate of the test (likelihood ratio) and prior probability of the diagnosis. On the one hand, the posttest judgments were strongly related to the physicians' beliefs about both the prior probability as well as the likelihood ratio, and the priors were used considerably more strongly than in previous research. On the other hand, both the prior and the likelihoods were still not used quite as much as they should have been, and there was evidence of other nonnormative aspects to the updating, such as updating independent of the likelihood beliefs. By focusing on how physicians use their own prior beliefs for Bayesian updating, this study provides insight into how well experts perform probabilistic inference in settings in which they rely upon their own prior beliefs rather than experimenter-provided cues. It suggests that there is reason to be optimistic about experts' abilities, but that there is still considerable need for improvement.
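
    The normative benchmark used above follows from Bayes' rule in odds form: the likelihood ratio of a positive result is the sensitivity divided by the false positive rate, and posttest odds equal pretest odds times that ratio. A minimal sketch with illustrative numbers:

      def posttest_from_beliefs(prior, sensitivity, false_positive_rate, positive=True):
          """Normative posttest probability given a clinician's own prior and
          likelihood beliefs, for a positive or negative test result."""
          if positive:
              lr = sensitivity / false_positive_rate
          else:
              lr = (1.0 - sensitivity) / (1.0 - false_positive_rate)
          prior_odds = prior / (1.0 - prior)
          post_odds = prior_odds * lr
          return post_odds / (1.0 + post_odds)

      # Illustrative beliefs: prior 0.10, sensitivity 0.80, false positive rate 0.10
      # give LR+ = 8 and a posttest probability of about 0.47.
      p = posttest_from_beliefs(0.10, 0.80, 0.10)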

  15. A tree island approach to inferring phylogeny in the ant subfamily Formicinae, with especial reference to the evolution of weaving.

    PubMed

    Johnson, Rebecca N; Agapow, Paul-Michael; Crozier, Ross H

    2003-11-01

    The ant subfamily Formicinae is a large assemblage of 2458 species (J. Nat. Hist. 29 (1995) 1037), including species that weave leaf nests together with larval silk and in which the metapleural gland (the ancestrally defining ant character) has been secondarily lost. We used sequences from two mitochondrial genes (cytochrome b and cytochrome oxidase 2) from 18 formicine and 4 outgroup taxa to derive a robust phylogeny, employing a search for tree islands using 10000 randomly constructed trees as starting points and deriving a maximum likelihood consensus tree from the ML tree and those not significantly different from it. Non-parametric bootstrapping showed that the ML consensus tree fit the data significantly better than three scenarios based on morphology, with that of Bolton (Identification Guide to the Ant Genera of the World, Harvard University Press, Cambridge, MA) being the best among these alternative trees. Trait mapping showed that weaving had arisen at least four times and possibly been lost once. A maximum likelihood analysis showed that loss of the metapleural gland is significantly associated with the weaver life-pattern. The graph of the frequencies with which trees were discovered versus their likelihood indicates that trees with high likelihoods have much larger basins of attraction than those with lower likelihoods. While this result indicates that single searches are more likely to find high- than low-likelihood tree islands, it also indicates that searching only for the single best tree may lose important information.

  16. Optimal allocation of resources among threatened species: a project prioritization protocol.

    PubMed

    Joseph, Liana N; Maloney, Richard F; Possingham, Hugh P

    2009-04-01

    Conservation funds are grossly inadequate to address the plight of threatened species. Government and conservation organizations faced with the task of conserving threatened species desperately need simple strategies for allocating limited resources. The academic literature dedicated to systematic priority setting usually recommends ranking species on several criteria, including level of endangerment and metrics of species value such as evolutionary distinctiveness, ecological importance, and social significance. These approaches ignore 2 crucial factors: the cost of management and the likelihood that the management will succeed. These oversights will result in misallocation of scarce conservation resources and possibly unnecessary losses. We devised a project prioritization protocol (PPP) to optimize resource allocation among New Zealand's threatened-species projects, where costs, benefits (including species values), and the likelihood of management success were considered simultaneously. We compared the number of species managed and the expected benefits gained with 5 prioritization criteria: PPP with weightings based on species value; PPP with species weighted equally; management costs; species value; and threat status. We found that the rational use of cost and success information substantially increased the number of species managed, and prioritizing management projects according to species value or threat status in isolation was inefficient and resulted in fewer species managed. In addition, we found a clear trade-off between funding management of a greater number of the most cost-efficient and least risky projects and funding fewer projects to manage the species of higher value. Specifically, 11 of 32 species projects could be funded if projects were weighted by species value compared with 16 projects if projects were not weighted. This highlights the value of a transparent decision-making process, which enables a careful consideration of trade-offs. The use of PPP can substantially improve conservation outcomes for threatened species by increasing efficiency and ensuring transparency of management decisions.
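
    The core of the protocol can be sketched as a ranking rule: each project's species weighting, expected benefit, and probability of management success are multiplied together and divided by its cost, and projects are funded in descending order of that ratio until the budget is exhausted. The field names and numbers below are illustrative assumptions, not New Zealand data.

      def prioritize(projects, budget):
          """Rank projects by cost-efficiency = value * benefit * success / cost and
          fund them greedily until the budget runs out. Each project is a dict with
          keys: name, value, benefit, success, cost."""
          ranked = sorted(projects,
                          key=lambda p: p["value"] * p["benefit"] * p["success"] / p["cost"],
                          reverse=True)
          funded, spent = [], 0.0
          for p in ranked:
              if spent + p["cost"] <= budget:
                  funded.append(p["name"])
                  spent += p["cost"]
          return funded

      # Illustrative projects only.
      projects = [
          {"name": "project A", "value": 2.0, "benefit": 0.6, "success": 0.7, "cost": 5.0},
          {"name": "project B", "value": 1.0, "benefit": 0.8, "success": 0.9, "cost": 1.0},
          {"name": "project C", "value": 1.0, "benefit": 0.5, "success": 0.6, "cost": 0.5},
      ]
      print(prioritize(projects, budget=6.0))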

  17. Optimal likelihood-based matching of volcanic sources and deposits in the Auckland Volcanic Field

    NASA Astrophysics Data System (ADS)

    Kawabata, Emily; Bebbington, Mark S.; Cronin, Shane J.; Wang, Ting

    2016-09-01

    In monogenetic volcanic fields, where each eruption forms a new volcano, focusing and migration of activity over time is a very real possibility. In order for hazard estimates to reflect future, rather than past, behavior, it is vital to assemble as much reliable age data as possible on past eruptions. Multiple swamp/lake records have been extracted from the Auckland Volcanic Field, underlying the 1.4 million-population city of Auckland. We examine here the problem of matching these dated deposits to the volcanoes that produced them. The simplest issue is separation in time, which is handled by simulating prior volcano age sequences from direct dates where known, thinned via ordering constraints between the volcanoes. The subproblem of varying deposition thicknesses (which may be zero) at five locations of known distance and azimuth is quantified using a statistical attenuation model for the volcanic ash thickness. These elements are combined with other constraints, from widespread fingerprinted ash layers that separate eruptions and time-censoring of the records, into a likelihood that was optimized via linear programming. A second linear program was used to optimize over the Monte-Carlo simulated set of prior age profiles to determine the best overall match and consequent volcano age assignments. Considering all 20 matches, and the multiple factors of age, direction, and size/distance simultaneously, results in some non-intuitive assignments which would not be produced by single factor analyses. Compared with earlier work, the results provide better age control on a number of smaller centers such as Little Rangitoto, Otuataua, Taylors Hill, Wiri Mountain, Green Hill, Otara Hill, Hampton Park and Mt Cambria. Spatio-temporal hazard estimates are updated on the basis of the new ordering, which suggest that the scale of the 'flare-up' around 30 ka, while still highly significant, was less than previously thought.
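
    At its simplest, the matching step can be viewed as an assignment problem: build a matrix of negative log-likelihoods for pairing each dated deposit with each candidate volcano (combining the age, azimuth, and thickness-attenuation terms) and find the assignment with the highest total likelihood. The sketch below uses the Hungarian algorithm from SciPy as a stand-in for the linear programming formulation used in the study, and the cost matrix is a random placeholder.

      import numpy as np
      from scipy.optimize import linear_sum_assignment

      # neg_loglik[i, j] = -log likelihood that deposit i was produced by volcano j,
      # combining age separation, azimuth, and modelled ash-thickness terms.
      # Illustrative random placeholder for 5 deposits and 7 candidate volcanoes.
      rng = np.random.default_rng(0)
      neg_loglik = rng.random((5, 7))

      rows, cols = linear_sum_assignment(neg_loglik)      # minimizes total cost
      best_match = list(zip(rows, cols))                  # deposit -> volcano pairs
      total_loglik = -neg_loglik[rows, cols].sum()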

  18. Discerning the clinical relevance of biomarkers in early stage breast cancer.

    PubMed

    Ballinger, Tarah J; Kassem, Nawal; Shen, Fei; Jiang, Guanglong; Smith, Mary Lou; Railey, Elda; Howell, John; White, Carol B; Schneider, Bryan P

    2017-07-01

    Prior data suggest that breast cancer patients accept significant toxicity for small benefit. It is unclear whether personalized estimations of risk or benefit likelihood that could be provided by biomarkers alter treatment decisions in the curative setting. A choice-based conjoint (CBC) survey was conducted in 417 HER2-negative breast cancer patients who received chemotherapy in the curative setting. The survey presented pairs of treatment choices derived from common taxane- and anthracycline-based regimens, varying in degree of benefit by risk of recurrence and in toxicity profile, including peripheral neuropathy (PN) and congestive heart failure (CHF). Hypothetical biomarkers shifting benefit and toxicity risk were modeled to determine whether this knowledge alters choice. Previously identified biomarkers were evaluated using this model. Based on CBC analysis, a non-anthracycline regimen was the most preferred. Patients with prior PN had a similar preference for a taxane regimen as those who were PN naïve, but more dramatically shifted preference away from taxanes when PN was described as severe/irreversible. When modeled after hypothetical biomarkers, as the likelihood of PN increased, the preference for taxane-containing regimens decreased; similarly, as the likelihood of CHF increased, the preference for anthracycline regimens decreased. When evaluating validated biomarkers for PN and CHF, this knowledge did alter regimen preference. Patients faced with multi-faceted decisions consider personal experience and perceived risk of recurrent disease. Biomarkers providing information on likelihood of toxicity risk do influence treatment choices, and patients may accept reduced benefit when faced with higher risk of toxicity in the curative setting.

  19. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
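
    Because the semi-nonparametric model nests the standard model, the final step reduces to an ordinary likelihood ratio test between the restricted and the expanded specification. A minimal sketch of that step is given below; the log-likelihood values and the number of extra parameters are illustrative placeholders.

      from scipy import stats

      def lr_test(loglik_restricted, loglik_expanded, extra_params):
          """Likelihood ratio test of the restricted model (e.g. a standard Gumbel MNL)
          against the expanded model that relaxes the distributional assumption."""
          stat = 2.0 * (loglik_expanded - loglik_restricted)
          p_value = stats.chi2.sf(stat, df=extra_params)
          return stat, p_value

      # Illustrative values only.
      stat, p = lr_test(loglik_restricted=-1520.4, loglik_expanded=-1512.9, extra_params=4)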

  20. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  1. Three regularities of recognition memory: the role of bias.

    PubMed

    Hilford, Andrew; Maloney, Laurence T; Glanzer, Murray; Kim, Kisok

    2015-12-01

    A basic assumption of Signal Detection Theory is that decisions are made on the basis of likelihood ratios. In a preceding paper, Glanzer, Hilford, and Maloney (Psychonomic Bulletin & Review, 16, 431-455, 2009) showed that the likelihood ratio assumption implies that three regularities will occur in recognition memory: (1) the Mirror Effect, (2) the Variance Effect, (3) the normalized Receiver Operating Characteristic (z-ROC) Length Effect. The paper offered formal proofs and computational demonstrations that decisions based on likelihood ratios produce the three regularities. A survey of data based on group ROCs from 36 studies validated the likelihood ratio assumption by showing that its three implied regularities are ubiquitous. The study noted, however, that bias, another basic factor in Signal Detection Theory, can obscure the Mirror Effect. In this paper we examine how bias affects the regularities at the theoretical level. The theoretical analysis shows: (1) how bias obscures the Mirror Effect, not the other two regularities, and (2) four ways to counter that obscuring. We then report the results of five experiments that support the theoretical analysis. The analyses and the experimental results also demonstrate: (1) that the three regularities govern individual, as well as group, performance, (2) alternative explanations of the regularities are ruled out, and (3) that Signal Detection Theory, correctly applied, gives a simple and unified explanation of recognition memory data.
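
    A minimal illustration of decisions based on likelihood ratios in a Gaussian signal detection model is given below (an equal-variance simplification): for a memory-strength value x, the decision variable is the ratio of the old-item density to the new-item density, and responses are "old" when that ratio exceeds a bias criterion. The parameter values are illustrative assumptions.

      from scipy import stats

      def recognition_lr(x, d_prime, sigma_old=1.0):
          """Likelihood ratio of 'old' versus 'new' for memory strength x, with the
          new-item distribution N(0, 1) and the old-item distribution N(d_prime, sigma_old)."""
          return stats.norm.pdf(x, d_prime, sigma_old) / stats.norm.pdf(x, 0.0, 1.0)

      def respond_old(x, d_prime, bias=1.0):
          """Respond 'old' when the likelihood ratio exceeds the bias criterion
          (bias > 1 is conservative, bias < 1 is liberal)."""
          return recognition_lr(x, d_prime) > bias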

  2. Validation of DNA-based identification software by computation of pedigree likelihood ratios.

    PubMed

    Slooten, K

    2011-08-01

    Disaster victim identification (DVI) can be aided by DNA-evidence, by comparing the DNA-profiles of unidentified individuals with those of surviving relatives. The DNA-evidence is used optimally when such a comparison is done by calculating the appropriate likelihood ratios. Though conceptually simple, the calculations can be quite involved, especially with large pedigrees, precise mutation models etc. In this article we describe a series of test cases designed to check if software designed to calculate such likelihood ratios computes them correctly. The cases include both simple and more complicated pedigrees, among which inbred ones. We show how to calculate the likelihood ratio numerically and algebraically, including a general mutation model and possibility of allelic dropout. In Appendix A we show how to derive such algebraic expressions mathematically. We have set up these cases to validate new software, called Bonaparte, which performs pedigree likelihood ratio calculations in a DVI context. Bonaparte has been developed by SNN Nijmegen (The Netherlands) for the Netherlands Forensic Institute (NFI). It is available free of charge for non-commercial purposes (see www.dnadvi.nl for details). Commercial licenses can also be obtained. The software uses Bayesian networks and the junction tree algorithm to perform its calculations. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  3. Extending the BEAGLE library to a multi-FPGA platform

    PubMed Central

    2013-01-01

    Background Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein’s pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein’s pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. Results The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform’s peak memory bandwidth and the implementation’s memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE’s CPU implementation on a dual Xeon 5520 and 3X speedup versus BEAGLE’s GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. Conclusions The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints to match the target platform. We have found that using low-latency floating-point operators can significantly reduce FPGA area and still meet timing requirement on the target platform. We found that this design methodology can achieve performance that exceeds that of a GPU-based coprocessor. PMID:23331707
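    The quoted throughput figure follows directly from the roofline-style arithmetic stated above; the snippet below simply reproduces that calculation.

    ```python
    # Reproduce the back-of-envelope throughput estimate quoted in the abstract:
    # throughput ≈ arithmetic intensity × peak memory bandwidth × memory efficiency.
    ops_per_byte = 130 / 64          # ≈ 2.03 floating-point ops per byte of I/O
    peak_bandwidth_gb_s = 76.8       # Convey HC-1 peak memory bandwidth
    memory_efficiency = 0.5          # ≈ 50% achieved efficiency

    throughput_gflops = ops_per_byte * peak_bandwidth_gb_s * memory_efficiency
    print(f"{throughput_gflops:.0f} Gflops")  # ≈ 78 Gflops
    ```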

  4. Single-molecule detection by two-photon excitation of fluorescence

    NASA Astrophysics Data System (ADS)

    Zander, Christoph; Brand, Leif; Eggeling, C.; Drexhage, Karl-Heinz; Seidel, Claus A. M.

    1997-05-01

    Using a mode-locked titanium:sapphire laser at 700 nm for two-photon excitation we studied fluorescence bursts from individual coumarin 120 molecules in water and triacetin. Fluorescence lifetimes and multichannel scaler traces have been measured simultaneously. Because scattered excitation light as well as Raman scattered photons can be suppressed by a short-pass filter, a very low background level was achieved. To identify the fluorophore by its characteristic fluorescence lifetime, the time-resolved fluorescence signals were analyzed by a maximum likelihood estimator. The obtained average fluorescence lifetimes, τav = 4.8 ± 1.2 ns for coumarin 120 in water and τav = 3.3 ± 0.6 ns for coumarin 120 in triacetin, are in good agreement with results obtained from separate measurements at higher concentrations.
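    For a mono-exponential decay with negligible background and an effectively unbounded counting window, the maximum likelihood estimate of the lifetime reduces to the mean photon delay. The sketch below uses that simplification with simulated delays; the estimator used in the study additionally has to cope with background counts and the finite observation window.

    ```python
    # Simplified sketch: for photon delay times drawn from a mono-exponential decay
    # with negligible background and an unbounded window, the maximum likelihood
    # estimate of the lifetime tau is simply the sample mean of the delays.
    import numpy as np

    rng = np.random.default_rng(0)
    true_tau_ns = 4.8                                  # e.g., coumarin 120 in water
    delays = rng.exponential(true_tau_ns, size=200)    # photon delays for one burst

    tau_mle = delays.mean()
    tau_se = delays.std(ddof=1) / np.sqrt(delays.size)
    print(f"tau_hat = {tau_mle:.2f} +/- {tau_se:.2f} ns")
    ```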

  5. The biology of Mur ligases as an antibacterial target.

    PubMed

    Kouidmi, Imène; Levesque, Roger C; Paradis-Bleau, Catherine

    2014-10-01

    With antibiotic resistance mechanisms increasing in diversity and spreading among bacterial pathogens, the development of new classes of antibacterial agents against judiciously chosen targets is a high-priority task. The biochemical pathway for peptidoglycan biosynthesis is one of the best sources of antibacterial targets. Within this pathway are the Mur ligases, described in this review as highly suitable targets for the development of new classes of antibacterial agents. The amide ligases MurC, MurD, MurE and MurF function with the same catalytic mechanism and share conserved amino acid regions and structural features that can conceivably be exploited for the design of inhibitors that simultaneously target more than one enzyme. This would provide multi-target antibacterial weapons with minimized likelihood of target-mediated resistance development. © 2014 John Wiley & Sons Ltd.

  6. Generating Scenarios When Data Are Missing

    NASA Technical Reports Server (NTRS)

    Mackey, Ryan

    2007-01-01

    The Hypothetical Scenario Generator (HSG) is being developed in conjunction with other components of artificial-intelligence systems for automated diagnosis and prognosis of faults in spacecraft, aircraft, and other complex engineering systems. The HSG accepts, as input, possibly incomplete data on the current state of a system. The HSG models a potential fault scenario as an ordered disjunctive tree of conjunctive consequences, wherein the ordering is based upon the likelihood that a particular conjunctive path will be taken for the given set of inputs. The computation of likelihood is based partly on a numerical ranking of the degree of completeness of data with respect to satisfaction of the antecedent conditions of prognostic rules. The results from the HSG are then used by a model-based artificial-intelligence subsystem to predict realistic scenarios and states.

  7. A Model-Based Diagnosis Framework for Distributed Systems

    DTIC Science & Technology

    2002-05-04

    ... of centralized compilation techniques as applied to several areas, of which diagnosis is one. ... tree-structured systems. For simplicity of notation, ... our diagnosis synthesis algorithm ... diagnoses using a likelihood weight ri assigned to each assumable Ai, i = 1, ..., m. Using the likelihood algebra ...

  8. Vector Antenna and Maximum Likelihood Imaging for Radio Astronomy

    DTIC Science & Technology

    Knapp, Mary; Robey, Frank; Volz, Ryan; Lind, Frank; Fenn, Alan; Morris, Alex; Silver, Mark; Klein, Sarah

    2016-03-05

    Radio astronomy using frequencies less than ~100 MHz provides a window into non-thermal processes in objects ranging from planets ... observational astronomy. Ground-based observatories including LOFAR [1], LWA [2], [3], MWA [4], and the proposed SKA-Low [5], [6] are improving access to ...

  9. A novel retinal vessel extraction algorithm based on matched filtering and gradient vector flow

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Xia, Mingliang; Xuan, Li

    2013-10-01

    The microvasculature network of retina plays an important role in the study and diagnosis of retinal diseases (age-related macular degeneration and diabetic retinopathy for example). Although it is possible to noninvasively acquire high-resolution retinal images with modern retinal imaging technologies, non-uniform illumination, the low contrast of thin vessels and the background noises all make it difficult for diagnosis. In this paper, we introduce a novel retinal vessel extraction algorithm based on gradient vector flow and matched filtering to segment retinal vessels with different likelihood. Firstly, we use isotropic Gaussian kernel and adaptive histogram equalization to smooth and enhance the retinal images respectively. Secondly, a multi-scale matched filtering method is adopted to extract the retinal vessels. Then, the gradient vector flow algorithm is introduced to locate the edge of the retinal vessels. Finally, we combine the results of matched filtering method and gradient vector flow algorithm to extract the vessels at different likelihood levels. The experiments demonstrate that our algorithm is efficient and the intensities of vessel images exactly represent the likelihood of the vessels.

  10. Uncued Low SNR Detection with Likelihood from Image Multi Bernoulli Filter

    NASA Astrophysics Data System (ADS)

    Murphy, T.; Holzinger, M.

    2016-09-01

    Both SSA and SDA necessitate uncued, partially informed detection and orbit determination efforts for small space objects which often produce only low strength electro-optical signatures. General frame to frame detection and tracking of objects includes methods such as moving target indicator, multiple hypothesis testing, direct track-before-detect methods, and random finite set based multiobject tracking. This paper will apply the multi-Bernoulli filter to low signal-to-noise ratio (SNR), uncued detection of space objects for space domain awareness applications. The primary novel innovation in this paper is a detailed analysis of the existing state-of-the-art likelihood functions and a likelihood function, based on a binary hypothesis, previously proposed by the authors. The algorithm is tested on electro-optical imagery obtained from a variety of sensors at Georgia Tech, including the GT-SORT 0.5m Raven-class telescope, and a twenty degree field of view high frame rate CMOS sensor. In particular, a data set of an extended pass of the Hitomi Astro-H satellite approximately 3 days after loss of communication and potential break up is examined.

  11. A New Monte Carlo Method for Estimating Marginal Likelihoods.

    PubMed

    Wang, Yu-Bo; Chen, Ming-Hui; Kuo, Lynn; Lewis, Paul O

    2018-06-01

    Evaluating the marginal likelihood in Bayesian analysis is essential for model selection. Estimators based on a single Markov chain Monte Carlo sample from the posterior distribution include the harmonic mean estimator and the inflated density ratio estimator. We propose a new class of Monte Carlo estimators based on this single Markov chain Monte Carlo sample. This class can be thought of as a generalization of the harmonic mean and inflated density ratio estimators using a partition weighted kernel (likelihood times prior). We show that our estimator is consistent and has better theoretical properties than the harmonic mean and inflated density ratio estimators. In addition, we provide guidelines on choosing optimal weights. Simulation studies were conducted to examine the empirical performance of the proposed estimator. We further demonstrate the desirable features of the proposed estimator with two real data sets: one is from a prostate cancer study using an ordinal probit regression model with latent variables; the other is for the power prior construction from two Eastern Cooperative Oncology Group phase III clinical trials using the cure rate survival model with similar objectives.
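    For orientation, the sketch below computes the classical harmonic mean estimator, one of the single-chain estimators that the proposed partition-weighted-kernel class generalizes, on a conjugate normal toy model; the data and model are invented for illustration and the paper's new estimator is not reproduced here.

    ```python
    # Harmonic mean estimator of the marginal likelihood from a single posterior
    # sample, computed in log space for numerical stability. Conjugate normal toy
    # model with known variance; the posterior is available in closed form, so
    # i.i.d. draws stand in for MCMC output.
    import numpy as np
    from scipy.special import logsumexp
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    y = rng.normal(0.5, 1.0, size=50)          # data, known unit variance
    n = y.size

    post_var = 1.0 / (n + 1.0 / 100.0)         # N(0, 10^2) prior on the mean
    post_mean = post_var * y.sum()
    theta = rng.normal(post_mean, np.sqrt(post_var), size=5000)

    loglik = np.array([norm.logpdf(y, th, 1.0).sum() for th in theta])
    log_marginal_hm = -(logsumexp(-loglik) - np.log(theta.size))
    print(f"harmonic mean estimate of log m(y): {log_marginal_hm:.2f}")
    ```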

  12. Intimate partner violence trends in Brazil: data from two waves of the Brazilian National Alcohol and Drugs Survey.

    PubMed

    Ally, Elizabeth Z; Laranjeira, Ronaldo; Viana, Maria C; Pinsky, Ilana; Caetano, Raul; Mitsuhiro, Sandro; Madruga, Clarice S

    2016-01-01

    To compare intimate partner violence (IPV) prevalence rates in 2006 and 2012 in a nationally representative household sample in Brazil. The associations between IPV and substance use were also investigated. IPV was assessed using the Conflict Tactic Scale-R in two waves (2006/2012) of the Brazilian Alcohol and Drugs Survey. Weighted prevalence rates and adjusted logistic regression models were calculated. Prevalence rates of IPV victimization decreased significantly, especially among women (8.8 to 6.3%). The rates of IPV perpetration also decreased significantly (10.6 to 8.4% for the overall sample and 9.2 to 6.1% in men), as well as the rates of bidirectional violence (by individuals who were simultaneously victims and perpetrators of violence) (3.2 to 2.4% for the overall sample). Alcohol increased the likelihood of being a victim (odds ratio [OR] = 1.6) and perpetrator (OR = 2.4) of IPV. Use of illicit drugs increased up to 4.5 times the likelihood of being a perpetrator. In spite of the significant reduction in most types of IPV between 2006 and 2012, violence perpetrated by women was not significantly reduced, and the current national rates are still high. Further, this study suggests that use of alcohol and other psychoactive drugs plays a major role in IPV. Prevention initiatives must take drug misuse into consideration.

  13. Linking urbanization to the Biological Condition Gradient (BCG) for stream ecosystems in the Northeastern United States using a Bayesian network approach

    USGS Publications Warehouse

    Kashuba, Roxolana; McMahon, Gerard; Cuffney, Thomas F.; Qian, Song; Reckhow, Kenneth; Gerritsen, Jeroen; Davies, Susan

    2012-01-01

    In realization of the aforementioned advantages, a Bayesian network model was constructed to characterize the effect of urban development on aquatic macroinvertebrate stream communities through three simultaneous, interacting ecological pathways affecting stream hydrology, habitat, and water quality across watersheds in the Northeastern United States. This model incorporates both empirical data and expert knowledge to calculate the probabilities of attaining desired aquatic ecosystem conditions under different urban stress levels, environmental conditions, and management options. Ecosystem conditions are characterized in terms of standardized Biological Condition Gradient (BCG) management endpoints. This approach to evaluating urban development-induced perturbations in watersheds integrates statistical and mechanistic perspectives, different information sources, and several ecological processes into a comprehensive description of the system that can be used to support decision making. The completed model can be used to infer which management actions would lead to the highest likelihood of desired BCG tier achievement. For example, if best management practices (BMP) were implemented in a highly urbanized watershed to reduce flashiness to medium levels and specific conductance to low levels, the stream would have a 70-percent chance of achieving BCG Tier 3 or better, relative to a 24-percent achievement likelihood for unmanaged high urban land cover. Results are reported probabilistically to account for modeling uncertainty that is inherent in sources such as natural variability and model simplification error.

  14. Acute multi-sgRNA knockdown of KEOPS complex genes reproduces the microcephaly phenotype of the stable knockout zebrafish model

    PubMed Central

    Schneider, Ronen; Hoogstraten, Charlotte A.; Schapiro, David; Majmundar, Amar J.; Kolb, Amy; Eddy, Kaitlyn; Shril, Shirlee; Braun, Daniela A.; Poduri, Annapurna

    2018-01-01

    Until recently, morpholino oligonucleotides have been widely employed in zebrafish as an acute and efficient loss-of-function assay. However, off-target effects and reproducibility issues when compared to stable knockout lines have compromised their further use. Here we employed an acute CRISPR/Cas approach using multiple single guide RNAs targeting simultaneously different positions in two exemplar genes (osgep or tprkb) to increase the likelihood of generating mutations on both alleles in the injected F0 generation and to achieve a similar effect as morpholinos but with the reproducibility of stable lines. This multi single guide RNA approach resulted in median likelihoods for at least one mutation on each allele of >99% and sgRNA specific insertion/deletion profiles as revealed by deep-sequencing. Immunoblot showed a significant reduction for Osgep and Tprkb proteins. For both genes, the acute multi-sgRNA knockout recapitulated the microcephaly phenotype and reduction in survival that we observed previously in stable knockout lines, though milder in the acute multi-sgRNA knockout. Finally, we quantify the degree of mutagenesis by deep sequencing, and provide a mathematical model to quantitate the chance for a biallelic loss-of-function mutation. Our findings can be generalized to acute and stable CRISPR/Cas targeting for any zebrafish gene of interest. PMID:29346415
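    A simplified independence model conveys the intuition behind the quoted >99% figure: if each guide mutates a given allele independently, the per-allele hit probability compounds across guides and the biallelic probability is its square. The per-guide efficiencies below are hypothetical, and the paper's actual model may differ in detail.

    ```python
    # Simplified sketch: if each injected sgRNA independently produces a disruptive
    # mutation on a given allele with probability p_i, then
    #   P(allele hit)   = 1 - prod(1 - p_i)
    #   P(both alleles) = P(allele hit) ** 2      (assuming alleles are independent)
    # The per-guide efficiencies below are hypothetical placeholders.
    import numpy as np

    def biallelic_probability(per_guide_efficiency):
        p_allele = 1.0 - np.prod(1.0 - np.asarray(per_guide_efficiency))
        return p_allele ** 2

    print(f"{biallelic_probability([0.6, 0.5, 0.7, 0.4]):.3f}")  # chance of a biallelic hit
    ```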

  15. Socioeconomic implications of tobacco use in Ghana.

    PubMed

    John, Rijo M; Mamudu, Hadii M; Liber, Alex C

    2012-10-01

    Country-level evidence from Africa on the prevalence of tobacco use and the role played by both demographic and socioeconomic factors, as influences on the use of tobacco products, is sparse. This paper analyzes the determinants of tobacco use in Ghana and explores the association between tobacco use and poverty in the country. Data from the 2008 Ghana Demographic and Health Survey, a nationally representative survey of households (n = 12,323), were used to generate descriptive statistics and characterize tobacco use in the country. A logistic regression model was used to evaluate the relationships between tobacco use and age, place of residence, region, education status, wealth, marital status, alcohol use, and whether the person has children. Unadjusted and adjusted odds ratios were calculated for tobacco users and nonusers on the likelihood of their purchase of selected commodities indicative of living standards. Tobacco use was significantly higher among those living in poverty-stricken regions, those with less education, lower levels of wealth, parents, and alcohol users. Tobacco use was significantly higher among men (7%) than women (0.4%), and it increased with age, peaking at 41.4 years before declining. Using tobacco was also associated with a lower likelihood of purchasing health insurance. Tobacco use is inextricably related to poverty in Ghana. Policies should be formulated to target populations and regions with higher tobacco prevalence to combat both poverty and tobacco use simultaneously.

  16. Robust analysis of semiparametric renewal process models

    PubMed Central

    Lin, Feng-Chang; Truong, Young K.; Fine, Jason P.

    2013-01-01

    Summary A rate model is proposed for a modulated renewal process comprising a single long sequence, where the covariate process may not capture the dependencies in the sequence as in standard intensity models. We consider partial likelihood-based inferences under a semiparametric multiplicative rate model, which has been widely studied in the context of independent and identical data. Under an intensity model, gap times in a single long sequence may be used naively in the partial likelihood with variance estimation utilizing the observed information matrix. Under a rate model, the gap times cannot be treated as independent and studying the partial likelihood is much more challenging. We employ a mixing condition in the application of limit theory for stationary sequences to obtain consistency and asymptotic normality. The estimator's variance is quite complicated owing to the unknown gap times dependence structure. We adapt block bootstrapping and cluster variance estimators to the partial likelihood. Simulation studies and an analysis of a semiparametric extension of a popular model for neural spike train data demonstrate the practical utility of the rate approach in comparison with the intensity approach. PMID:24550568

  17. Approximate likelihood calculation on a phylogeny for Bayesian estimation of divergence times.

    PubMed

    dos Reis, Mario; Yang, Ziheng

    2011-07-01

    The molecular clock provides a powerful way to estimate species divergence times. If information on some species divergence times is available from the fossil or geological record, it can be used to calibrate a phylogeny and estimate divergence times for all nodes in the tree. The Bayesian method provides a natural framework to incorporate different sources of information concerning divergence times, such as information in the fossil and molecular data. Current models of sequence evolution are intractable in a Bayesian setting, and Markov chain Monte Carlo (MCMC) is used to generate the posterior distribution of divergence times and evolutionary rates. This method is computationally expensive, as it involves the repeated calculation of the likelihood function. Here, we explore the use of Taylor expansion to approximate the likelihood during MCMC iteration. The approximation is much faster than conventional likelihood calculation. However, the approximation is expected to be poor when the proposed parameters are far from the likelihood peak. We explore the use of parameter transforms (square root, logarithm, and arcsine) to improve the approximation to the likelihood curve. We found that the new methods, particularly the arcsine-based transform, provided very good approximations under relaxed clock models and also under the global clock model when the global clock is not seriously violated. The approximation is poorer for analysis under the global clock when the global clock is seriously wrong and should thus not be used. The results suggest that the approximate method may be useful for Bayesian dating analysis using large data sets.
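    The underlying idea, approximating the log-likelihood by a quadratic expansion around its maximum in a transformed parameter so that MCMC proposals avoid repeated exact likelihood evaluations, can be sketched in one dimension. The toy exponential-rate model below is illustrative only; the square-root transform is one of the transforms considered in the paper, but the likelihood here is not a phylogenetic one.

    ```python
    # Second-order Taylor approximation of a log-likelihood around its maximum in a
    # transformed parameter s = sqrt(rate). Toy exponential-waiting-time model, used
    # only to illustrate the approximation idea, not a phylogenetic likelihood.
    import numpy as np

    rng = np.random.default_rng(2)
    data = rng.exponential(scale=2.0, size=100)

    def loglik(rate):
        return data.size * np.log(rate) - rate * data.sum()

    rate_hat = data.size / data.sum()            # maximum likelihood estimate
    s_hat, eps = np.sqrt(rate_hat), 1e-3         # expand in s = sqrt(rate)
    curvature = (loglik((s_hat + eps) ** 2) - 2 * loglik(rate_hat)
                 + loglik((s_hat - eps) ** 2)) / eps ** 2

    def loglik_approx(rate):
        s = np.sqrt(rate)
        return loglik(rate_hat) + 0.5 * curvature * (s - s_hat) ** 2

    for rate in (0.8 * rate_hat, rate_hat, 1.2 * rate_hat):
        print(f"rate={rate:.3f}  exact={loglik(rate):.3f}  approx={loglik_approx(rate):.3f}")
    ```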

  18. Differing Interpretations of Report Terminology Between Primary Care Physicians and Radiologists.

    PubMed

    Gunn, Andrew J; Tuttle, Mitch C; Flores, Efren J; Mangano, Mark D; Bennett, Susan E; Sahani, Dushyant V; Choy, Garry; Boland, Giles W

    2016-12-01

    The lexicons of the radiologist and the referring physician may not be synonymous, which could cause confusion with radiology reporting. To further explore this possibility, we surveyed radiologists and primary care physicians (PCPs) regarding their respective interpretations of report terminology. A survey was distributed to radiologists and PCPs through an internal listserv. Respondents were asked to provide an interpretation of the statistical likelihood of the presence of metastatic disease based upon the terminology used within a hypothetical radiology report. Ten common modifying terms were evaluated. Potential responses for the statistical likelihoods included 0%-25%, 26%-50%, 51%-75%, 76%-99%, and 100%. Differences between the groups were evaluated using either a χ² test or Fisher exact test, as appropriate. The phrases "diagnostic for metastatic disease" and "represents metastatic disease" were selected by a high percentage of both groups as conferring a 100% likelihood of "true metastatic disease." The phrases "cannot exclude metastatic disease" and "may represent metastatic disease" were selected by a high proportion of both groups as conferring a 0% likelihood of "true metastatic disease." Radiologists assigned a higher statistical likelihood to the terms "diagnostic for metastatic disease" (P = .016), "represents metastatic disease" (P = .004), "suspicious for metastatic disease" (P = .04), "consistent with metastatic disease" (P < .0001), and "compatible with metastatic disease" (P = .003). A qualitative agreement among radiologists and PCPs exists concerning the significance of the evaluated terminology, although radiologists assigned a higher statistical likelihood than PCPs for several phrases. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  19. General Metropolis-Hastings jump diffusions for automatic target recognition in infrared scenes

    NASA Astrophysics Data System (ADS)

    Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.

    1997-04-01

    To locate and recognize ground-based targets in forward-looking IR (FLIR) images, 3D faceted models with associated pose parameters are formulated to accommodate the variability found in FLIR imagery. Taking a Bayesian approach, scenes are simulated from the emissive characteristics of the CAD models and compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. To accommodate scenes with variable numbers of targets, the posterior distribution is defined over parameter vectors of varying dimension. An inference algorithm based on Metropolis-Hastings jump-diffusion processes empirically samples from the posterior distribution, generating configurations of templates and transformations that match the collected sensor data with high probability. The jumps accommodate the addition and deletion of targets and the estimation of target identities; diffusions refine the hypotheses by drifting along the gradient of the posterior distribution with respect to the orientation and position parameters. Previous results on jump strategies analogous to the Metropolis acceptance/rejection algorithm, with proposals drawn from the prior and accepted based on the likelihood, are extended to encompass general Metropolis-Hastings proposal densities. In particular, the algorithm proposes moves by drawing from the posterior distribution over computationally tractable subsets of the parameter space. The algorithm is illustrated by an implementation on a Silicon Graphics Onyx/Reality Engine.

  20. Land Suitability Modeling using a Geographic Socio-Environmental Niche-Based Approach: A Case Study from Northeastern Thailand

    PubMed Central

    Heumann, Benjamin W.; Walsh, Stephen J.; Verdery, Ashton M.; McDaniel, Phillip M.; Rindfuss, Ronald R.

    2012-01-01

    Understanding the pattern-process relations of land use/land cover change is an important area of research that provides key insights into human-environment interactions. The suitability or likelihood of occurrence of land use such as agricultural crop types across a human-managed landscape is a central consideration. Recent advances in niche-based, geographic species distribution modeling (SDM) offer a novel approach to understanding land suitability and land use decisions. SDM links species presence-location data with geospatial information and uses machine learning algorithms to develop non-linear and discontinuous species-environment relationships. Here, we apply the MaxEnt (Maximum Entropy) model for land suitability modeling by adapting niche theory to a human-managed landscape. In this article, we use data from an agricultural district in Northeastern Thailand as a case study for examining the relationships between the natural, built, and social environments and the likelihood of crop choice for the commonly grown crops that occur in the Nang Rong District – cassava, heavy rice, and jasmine rice, as well as an emerging crop, fruit trees. Our results indicate that while the natural environment (e.g., elevation and soils) is often the dominant factor in crop likelihood, the likelihood is also influenced by household characteristics, such as household assets and conditions of the neighborhood or built environment. Furthermore, the shape of the land use-environment curves illustrates the non-continuous and non-linear nature of these relationships. This approach demonstrates a novel method of understanding non-linear relationships between land and people. The article concludes with a proposed method for integrating the niche-based rules of land use allocation into a dynamic land use model that can address both allocation and quantity of agricultural crops. PMID:24187378

  1. Pre-test probability of obstructive coronary stenosis in patients undergoing coronary CT angiography: Comparative performance of the modified Diamond-Forrester algorithm versus methods incorporating cardiovascular risk factors.

    PubMed

    Ferreira, António Miguel; Marques, Hugo; Tralhão, António; Santos, Miguel Borges; Santos, Ana Rita; Cardoso, Gonçalo; Dores, Hélder; Carvalho, Maria Salomé; Madeira, Sérgio; Machado, Francisco Pereira; Cardim, Nuno; de Araújo Gonçalves, Pedro

    2016-11-01

    Current guidelines recommend the use of the Modified Diamond-Forrester (MDF) method to assess the pre-test likelihood of obstructive coronary artery disease (CAD). We aimed to compare the performance of the MDF method with two contemporary algorithms derived from multicenter trials that additionally incorporate cardiovascular risk factors: the calculator-based 'CAD Consortium 2' method, and the integer-based CONFIRM score. We assessed 1069 consecutive patients without known CAD undergoing coronary CT angiography (CCTA) for stable chest pain. Obstructive CAD was defined as the presence of coronary stenosis ≥50% on 64-slice dual-source CT. The three methods were assessed for calibration, discrimination, net reclassification, and changes in proposed downstream testing based upon calculated pre-test likelihoods. The observed prevalence of obstructive CAD was 13.8% (n=147). Overestimations of the likelihood of obstructive CAD were 140.1%, 9.8%, and 18.8%, respectively, for the MDF, CAD Consortium 2 and CONFIRM methods. The CAD Consortium 2 showed greater discriminative power than the MDF method, with a C-statistic of 0.73 vs. 0.70 (p<0.001), while the CONFIRM score did not (C-statistic 0.71, p=0.492). Reclassification of pre-test likelihood using the 'CAD Consortium 2' or CONFIRM scores resulted in a net reclassification improvement of 0.19 and 0.18, respectively, which would change the diagnostic strategy in approximately half of the patients. Newer risk factor-encompassing models allow for a more precise estimation of pre-test probabilities of obstructive CAD than the guideline-recommended MDF method. Adoption of these scores may improve disease prediction and change the diagnostic pathway in a significant proportion of patients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. Using DNA fingerprints to infer familial relationships within NHANES III households

    PubMed Central

    Katki, Hormuzd A.; Sanders, Christopher L.; Graubard, Barry I.; Bergen, Andrew W.

    2009-01-01

    Developing, targeting, and evaluating genomic strategies for population-based disease prevention require population-based data. In response to this urgent need, genotyping has been conducted within the Third National Health and Nutrition Examination Survey (NHANES III), the nationally-representative household-interview health survey in the U.S. However, before these genetic analyses can occur, family relationships within households must be accurately ascertained. Unfortunately, reported family relationships within NHANES III households based on questionnaire data are incomplete and inconclusive with regards to actual biological relatedness of family members. We inferred family relationships within households using DNA fingerprints (Identifiler®) that contain the DNA loci used by law enforcement agencies for forensic identification of individuals. However, performance of these loci for relationship inference is not well understood. We evaluated two competing statistical methods for relationship inference on pairs of household members: an exact likelihood ratio relying on allele frequencies, and an Identical By State (IBS) likelihood ratio that only requires matching alleles. We modified these methods to account for genotyping errors and population substructure. The two methods usually agree on the rankings of the most likely relationships. However, the IBS method underestimates the likelihood ratio by not accounting for the informativeness of matching rare alleles. The likelihood ratio is sensitive to estimates of population substructure, and parent-child relationships are sensitive to the specified genotyping error rate. These loci were unable to distinguish second-degree relationships and cousins from being unrelated. The genetic data are also useful for verifying reported relationships and identifying data quality issues. An important by-product is the first explicitly nationally-representative estimates of allele frequencies at these ubiquitous forensic loci.

  3. Reading Remediation Based on Sequential and Simultaneous Processing.

    ERIC Educational Resources Information Center

    Gunnison, Judy; And Others

    1982-01-01

    The theory postulating a dichotomy between sequential and simultaneous processing is reviewed and its implications for remediating reading problems are reviewed. Research is cited on sequential-simultaneous processing for early and advanced reading. A list of remedial strategies based on the processing dichotomy addresses decoding and lexical…

  4. Likelihood-Based Random-Effect Meta-Analysis of Binary Events.

    PubMed

    Amatya, Anup; Bhaumik, Dulal K; Normand, Sharon-Lise; Greenhouse, Joel; Kaizar, Eloise; Neelon, Brian; Gibbons, Robert D

    2015-01-01

    Meta-analysis has been used extensively for evaluation of efficacy and safety of medical interventions. Its advantages and utilities are well known. However, recent studies have raised questions about the accuracy of the commonly used moment-based meta-analytic methods in general and for rare binary outcomes in particular. The issue is further complicated for studies with heterogeneous effect sizes. Likelihood-based mixed-effects modeling provides an alternative to moment-based methods such as inverse-variance weighted fixed- and random-effects estimators. In this article, we compare and contrast different mixed-effect modeling strategies in the context of meta-analysis. Their performance in estimation and testing of overall effect and heterogeneity are evaluated when combining results from studies with a binary outcome. Models that allow heterogeneity in both baseline rate and treatment effect across studies have low type I and type II error rates, and their estimates are the least biased among the models considered.

  5. Simultaneous maximum a posteriori longitudinal PET image reconstruction

    NASA Astrophysics Data System (ADS)

    Ellis, Sam; Reader, Andrew J.

    2017-09-01

    Positron emission tomography (PET) is frequently used to monitor functional changes that occur over extended time scales, for example in longitudinal oncology PET protocols that include routine clinical follow-up scans to assess the efficacy of a course of treatment. In these contexts PET datasets are currently reconstructed into images using single-dataset reconstruction methods. Inspired by recently proposed joint PET-MR reconstruction methods, we propose to reconstruct longitudinal datasets simultaneously by using a joint penalty term in order to exploit the high degree of similarity between longitudinal images. We achieved this by penalising voxel-wise differences between pairs of longitudinal PET images in a one-step-late maximum a posteriori (MAP) fashion, resulting in the MAP simultaneous longitudinal reconstruction (SLR) method. The proposed method reduced reconstruction errors and visually improved images relative to standard maximum likelihood expectation-maximisation (ML-EM) in simulated 2D longitudinal brain tumour scans. In reconstructions of split real 3D data with inserted simulated tumours, noise across images reconstructed with MAP-SLR was reduced to levels equivalent to doubling the number of detected counts when using ML-EM. Furthermore, quantification of tumour activities was largely preserved over a variety of longitudinal tumour changes, including changes in size and activity, with larger changes inducing larger biases relative to standard ML-EM reconstructions. Similar improvements were observed for a range of counts levels, demonstrating the robustness of the method when used with a single penalty strength. The results suggest that longitudinal regularisation is a simple but effective method of improving reconstructed PET images without using resolution degrading priors.
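    For context, the baseline that MAP-SLR regularises is the multiplicative ML-EM update. The sketch below shows that update on a toy emission system with a random system matrix; the longitudinal coupling penalty itself is not reproduced here.

    ```python
    # Minimal ML-EM sketch for emission tomography on a toy system. MAP-SLR adds a
    # one-step-late penalty coupling longitudinal images; only the baseline ML-EM
    # update is shown. The system matrix and counts are random placeholders.
    import numpy as np

    rng = np.random.default_rng(3)
    n_bins, n_voxels = 60, 30
    A = rng.random((n_bins, n_voxels))            # system (projection) matrix
    x_true = rng.random(n_voxels) * 10.0
    y = rng.poisson(A @ x_true)                   # measured counts

    x = np.ones(n_voxels)                         # initial estimate
    sens = A.sum(axis=0)                          # sensitivity image A^T 1
    for _ in range(50):
        ratio = y / np.clip(A @ x, 1e-12, None)   # measured / expected counts
        x *= (A.T @ ratio) / sens                 # multiplicative ML-EM update

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```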

  6. Exact nonparametric confidence bands for the survivor function.

    PubMed

    Matthews, David

    2013-10-12

    A method to produce exact simultaneous confidence bands for the empirical cumulative distribution function that was first described by Owen, and subsequently corrected by Jager and Wellner, is the starting point for deriving exact nonparametric confidence bands for the survivor function of any positive random variable. We invert a nonparametric likelihood test of uniformity, constructed from the Kaplan-Meier estimator of the survivor function, to obtain simultaneous lower and upper bands for the function of interest with specified global confidence level. The method involves calculating a null distribution and associated critical value for each observed sample configuration. However, Noe recursions and the Van Wijngaarden-Decker-Brent root-finding algorithm provide the necessary tools for efficient computation of these exact bounds. Various aspects of the effect of right censoring on these exact bands are investigated, using as illustrations two observational studies of survival experience among non-Hodgkin's lymphoma patients and a much larger group of subjects with advanced lung cancer enrolled in trials within the North Central Cancer Treatment Group. Monte Carlo simulations confirm the merits of the proposed method of deriving simultaneous interval estimates of the survivor function across the entire range of the observed sample. This research was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada. It was begun while the author was visiting the Department of Statistics, University of Auckland, and completed during a subsequent sojourn at the Medical Research Council Biostatistics Unit in Cambridge. The support of both institutions, in addition to that of NSERC and the University of Waterloo, is greatly appreciated.

  7. Random Photon Absorption Model Elucidates How Early Gain Control in Fly Photoreceptors Arises from Quantal Sampling

    PubMed Central

    Song, Zhuoyi; Zhou, Yu; Juusola, Mikko

    2016-01-01

    Many diurnal photoreceptors encode vast real-world light changes effectively, but how this performance originates from photon sampling is unclear. A 4-module biophysically-realistic fly photoreceptor model, in which information capture is limited by the number of its sampling units (microvilli) and their photon-hit recovery time (refractoriness), can accurately simulate real recordings and their information content. However, sublinear summation in quantum bump production (quantum-gain-nonlinearity) may also cause adaptation by reducing the bump/photon gain when multiple photons hit the same microvillus simultaneously. Here, we use a Random Photon Absorption Model (RandPAM), which is the 1st module of the 4-module fly photoreceptor model, to quantify the contribution of quantum-gain-nonlinearity in light adaptation. We show how quantum-gain-nonlinearity already results from photon sampling alone. In the extreme case, when two or more simultaneous photon-hits reduce to a single sublinear value, quantum-gain-nonlinearity is preset before the phototransduction reactions adapt the quantum bump waveform. However, the contribution of quantum-gain-nonlinearity in light adaptation depends upon the likelihood of multi-photon-hits, which is strictly determined by the number of microvilli and light intensity. Specifically, its contribution to light-adaptation is marginal (≤ 1%) in fly photoreceptors with many thousands of microvilli, because the probability of simultaneous multi-photon-hits on any one microvillus is low even during daylight conditions. However, in cells with fewer sampling units, the impact of quantum-gain-nonlinearity increases with brightening light. PMID:27445779
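    The dependence of multi-photon-hit likelihood on the number of sampling units can be seen with a simple Poisson approximation: if photons land uniformly at random on the microvilli, the fraction of microvilli receiving two or more photons in a window depends only on the mean number of photons per microvillus. The counts below are illustrative, not values from the model.

    ```python
    # If, within one integration window, k photons land uniformly at random on m
    # microvilli, the fraction of microvilli receiving two or more photons is well
    # approximated by a Poisson law with mean k/m. Numbers are illustrative.
    import numpy as np

    def frac_multi_hit(k_photons, m_microvilli):
        lam = k_photons / m_microvilli
        return 1.0 - np.exp(-lam) * (1.0 + lam)   # P(Poisson(lam) >= 2)

    print(f"{frac_multi_hit(5_000, 30_000):.2%}")  # many sampling units: multi-hits are rare
    print(f"{frac_multi_hit(5_000, 1_000):.2%}")   # few sampling units: multi-hits dominate
    ```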

  8. Rational selection of experimental readout and intervention sites for reducing uncertainties in computational model predictions.

    PubMed

    Flassig, Robert J; Migal, Iryna; der Zalm, Esther van; Rihko-Struckmann, Liisa; Sundmacher, Kai

    2015-01-16

    Understanding the dynamics of biological processes can substantially be supported by computational models in the form of nonlinear ordinary differential equations (ODE). Typically, this model class contains many unknown parameters, which are estimated from inadequate and noisy data. Depending on the ODE structure, predictions based on unmeasured states and associated parameters are highly uncertain, even undetermined. For given data, profile likelihood analysis has been proven to be one of the most practically relevant approaches for analyzing the identifiability of an ODE structure, and thus model predictions. In case of highly uncertain or non-identifiable parameters, rational experimental design based on various approaches has shown to significantly reduce parameter uncertainties with minimal amount of effort. In this work we illustrate how to use profile likelihood samples for quantifying the individual contribution of parameter uncertainty to prediction uncertainty. For the uncertainty quantification we introduce the profile likelihood sensitivity (PLS) index. Additionally, for the case of several uncertain parameters, we introduce the PLS entropy to quantify individual contributions to the overall prediction uncertainty. We show how to use these two criteria as an experimental design objective for selecting new, informative readouts in combination with intervention site identification. The characteristics of the proposed multi-criterion objective are illustrated with an in silico example. We further illustrate how an existing practically non-identifiable model for the chlorophyll fluorescence induction in a photosynthetic organism, D. salina, can be rendered identifiable by additional experiments with new readouts. Having data and profile likelihood samples at hand, the here proposed uncertainty quantification based on prediction samples from the profile likelihood provides a simple way for determining individual contributions of parameter uncertainties to uncertainties in model predictions. The uncertainty quantification of specific model predictions allows identifying regions, where model predictions have to be considered with care. Such uncertain regions can be used for a rational experimental design to render initially highly uncertain model predictions into certainty. Finally, our uncertainty quantification directly accounts for parameter interdependencies and parameter sensitivities of the specific prediction.

  9. Feature and Score Fusion Based Multiple Classifier Selection for Iris Recognition

    PubMed Central

    Islam, Md. Rabiul

    2014-01-01

    The aim of this work is to propose a new feature and score fusion based iris recognition approach where voting method on Multiple Classifier Selection technique has been applied. Four Discrete Hidden Markov Model classifiers output, that is, left iris based unimodal system, right iris based unimodal system, left-right iris feature fusion based multimodal system, and left-right iris likelihood ratio score fusion based multimodal system, is combined using voting method to achieve the final recognition result. CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, recognition accuracy of the proposed system has been compared with existing N hamming distance score fusion approach proposed by Ma et al., log-likelihood ratio score fusion approach proposed by Schmid et al., and single level feature fusion approach proposed by Hollingsworth et al. PMID:25114676

  10. Feature and score fusion based multiple classifier selection for iris recognition.

    PubMed

    Islam, Md Rabiul

    2014-01-01

    The aim of this work is to propose a new feature and score fusion based iris recognition approach where voting method on Multiple Classifier Selection technique has been applied. Four Discrete Hidden Markov Model classifiers output, that is, left iris based unimodal system, right iris based unimodal system, left-right iris feature fusion based multimodal system, and left-right iris likelihood ratio score fusion based multimodal system, is combined using voting method to achieve the final recognition result. CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, recognition accuracy of the proposed system has been compared with existing N hamming distance score fusion approach proposed by Ma et al., log-likelihood ratio score fusion approach proposed by Schmid et al., and single level feature fusion approach proposed by Hollingsworth et al.

  11. Richardson-Lucy/maximum likelihood image restoration algorithm for fluorescence microscopy: further testing.

    PubMed

    Holmes, T J; Liu, Y H

    1989-11-15

    A maximum likelihood based iterative algorithm adapted from nuclear medicine imaging for noncoherent optical imaging was presented in a previous publication with some initial computer-simulation testing. This algorithm is identical in form to that previously derived in a different way by W. H. Richardson "Bayesian-Based Iterative Method of Image Restoration," J. Opt. Soc. Am. 62, 55-59 (1972) and L. B. Lucy "An Iterative Technique for the Rectification of Observed Distributions," Astron. J. 79, 745-765 (1974). Foreseen applications include superresolution and 3-D fluorescence microscopy. This paper presents further simulation testing of this algorithm and a preliminary experiment with a defocused camera. The simulations show quantified resolution improvement as a function of iteration number, and they show qualitatively the trend in limitations on restored resolution when noise is present in the data. Also shown are results of a simulation in restoring missing-cone information for 3-D imaging. Conclusions are in support of the feasibility of using these methods with real systems, while computational cost and timing estimates indicate that it should be realistic to implement these methods. It is suggested in the Appendix that future extensions to the maximum likelihood based derivation of this algorithm will address some of the limitations that are experienced with the nonextended form of the algorithm presented here.
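    The algorithm's multiplicative update is compact enough to sketch directly. The 1-D deconvolution below, with an invented blur kernel and Poisson noise, shows the Richardson-Lucy/ML-EM iteration; real microscopy applications involve 2-D or 3-D point-spread functions and the resolution and noise trade-offs discussed above.

    ```python
    # Compact 1-D Richardson-Lucy (maximum likelihood EM) deconvolution sketch with
    # a known, normalised blur kernel and Poisson-noisy data. Illustrates only the
    # multiplicative update itself.
    import numpy as np

    rng = np.random.default_rng(4)
    psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])        # known, normalised blur
    truth = np.zeros(64); truth[[20, 22, 45]] = [50, 80, 60]
    blurred = np.convolve(truth, psf, mode="same")
    observed = rng.poisson(blurred).astype(float)

    estimate = np.full_like(observed, observed.mean())
    for _ in range(100):
        expected = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.clip(expected, 1e-12, None)
        estimate *= np.convolve(ratio, psf[::-1], mode="same")  # correlate with the PSF

    print("three largest bins:", sorted(np.argsort(estimate)[-3:]))
    ```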

  12. Ligand design by a combinatorial approach based on modeling and experiment: application to HLA-DR4

    NASA Astrophysics Data System (ADS)

    Evensen, Erik; Joseph-McCarthy, Diane; Weiss, Gregory A.; Schreiber, Stuart L.; Karplus, Martin

    2007-07-01

    Combinatorial synthesis and large scale screening methods are being used increasingly in drug discovery, particularly for finding novel lead compounds. Although these "random" methods sample larger areas of chemical space than traditional synthetic approaches, only a relatively small percentage of all possible compounds are practically accessible. It is therefore helpful to select regions of chemical space that have greater likelihood of yielding useful leads. When three-dimensional structural data are available for the target molecule this can be achieved by applying structure-based computational design methods to focus the combinatorial library. This is advantageous over the standard usage of computational methods to design a small number of specific novel ligands, because here computation is employed as part of the combinatorial design process and so is required only to determine a propensity for binding of certain chemical moieties in regions of the target molecule. This paper describes the application of the Multiple Copy Simultaneous Search (MCSS) method, an active site mapping and de novo structure-based design tool, to design a focused combinatorial library for the class II MHC protein HLA-DR4. Methods for synthesizing and screening the computationally designed library are presented; evidence is provided to show that binding was achieved. Although the structure of the protein-ligand complex could not be determined, experimental results, including cross-exclusion of a known HLA-DR4 peptide ligand (HA) by a compound from the library, and computational model building suggest that at least one of the ligands designed and identified by the methods described binds in a mode similar to that of native peptides.

  13. Depth of interaction decoding of a continuous crystal detector module.

    PubMed

    Ling, T; Lewellen, T K; Miyaoka, R S

    2007-04-21

    We present a clustering method to extract the depth of interaction (DOI) information from an 8 mm thick crystal version of our continuous miniature crystal element (cMiCE) small animal PET detector. This clustering method, based on the maximum-likelihood (ML) method, can effectively build look-up tables (LUT) for different DOI regions. Combined with our statistics-based positioning (SBP) method, which uses a LUT searching algorithm based on the ML method and two-dimensional mean-variance LUTs of light responses from each photomultiplier channel with respect to different gamma ray interaction positions, the position of interaction and DOI can be estimated simultaneously. Data simulated using DETECT2000 were used to help validate our approach. An experiment using our cMiCE detector was designed to evaluate the performance. Two and four DOI region clustering were applied to the simulated data. Two DOI regions were used for the experimental data. The misclassification rate for simulated data is about 3.5% for two DOI regions and 10.2% for four DOI regions. For the experimental data, the rate is estimated to be approximately 25%. By using multi-DOI LUTs, we also observed improvement of the detector spatial resolution, especially for the corner region of the crystal. These results show that our ML clustering method is a consistent and reliable way to characterize DOI in a continuous crystal detector without requiring any modifications to the crystal or detector front end electronics. The ability to characterize the depth-dependent light response function from measured data is a major step forward in developing practical detectors with DOI positioning capability.

  14. Activity Levels and Exercise Motivation in Patients With COPD and Their Resident Loved Ones.

    PubMed

    Mesquita, Rafael; Nakken, Nienke; Janssen, Daisy J A; van den Bogaart, Esther H A; Delbressine, Jeannet M L; Essers, Johannes M N; Meijer, Kenneth; van Vliet, Monique; de Vries, Geeuwke J; Muris, Jean W M; Pitta, Fabio; Wouters, Emiel F M; Spruit, Martijn A

    2017-05-01

    Resident loved ones of patients with COPD can play an important role in helping these patients engage in physical activity. We aimed to compare activity levels and exercise motivation between patients with COPD and their resident loved ones; to compare the same outcome measures in patients after stratification for the physical activity level of the loved ones; and to predict the likelihood of being physically active in patients with a physically active resident loved one. One hundred twenty-five patient/loved one dyads were cross-sectionally and simultaneously assessed. Sedentary behavior, light activities, and moderate to vigorous physical activity (MVPA) were measured with a triaxial accelerometer during free-living conditions for at least 5 days. Five exercise-motivation constructs were investigated: amotivation, external regulation, introjected regulation, identified regulation, and intrinsic regulation. Patients spent more time in sedentary behavior and less time in physical activity than their loved ones (P < .0001). More intrinsic regulation was observed in loved ones compared with patients (P = .003), with no differences in other constructs. Despite similar exercise motivation, patients with an active loved one spent more time in MVPA (mean 31 min/d; 95% CI, 24-38 min/d vs mean, 18 min/d; 95% CI, 14-22 min/d; P = .002) and had a higher likelihood of being active (OR, 4.36; 95% CI, 1.41-13.30; P = .01) than did patients with an inactive loved one after controlling for age, BMI, and degree of airflow limitation. Patients with COPD are more physically inactive and sedentary than their loved ones, despite relatively similar exercise motivation. Nevertheless, patients with an active loved one are more active themselves and have a higher likelihood of being active. Dutch Trial Register (NTR3941). Copyright © 2017 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.

  15. Fast registration and reconstruction of aliased low-resolution frames by use of a modified maximum-likelihood approach.

    PubMed

    Alam, M S; Bognar, J G; Cain, S; Yasuda, B J

    1998-03-10

    During the process of microscanning a controlled vibrating mirror typically is used to produce subpixel shifts in a sequence of forward-looking infrared (FLIR) images. If the FLIR is mounted on a moving platform, such as an aircraft, uncontrolled random vibrations associated with the platform can be used to generate the shifts. Iterative techniques such as the expectation-maximization (EM) approach by means of the maximum-likelihood algorithm can be used to generate high-resolution images from multiple randomly shifted aliased frames. In the maximum-likelihood approach the data are considered to be Poisson random variables and an EM algorithm is developed that iteratively estimates an unaliased image that is compensated for known imager-system blur while it simultaneously estimates the translational shifts. Although this algorithm yields high-resolution images from a sequence of randomly shifted frames, it requires significant computation time and cannot be implemented for real-time applications that use the currently available high-performance processors. The new image shifts are iteratively calculated by evaluation of a cost function that compares the shifted and interlaced data frames with the corresponding values in the algorithm's latest estimate of the high-resolution image. We present a registration algorithm that estimates the shifts in one step. The shift parameters provided by the new algorithm are accurate enough to eliminate the need for iterative recalculation of translational shifts. Using this shift information, we apply a simplified version of the EM algorithm to estimate a high-resolution image from a given sequence of video frames. The proposed modified EM algorithm has been found to reduce significantly the computational burden when compared with the original EM algorithm, thus making it more attractive for practical implementation. Both simulation and experimental results are presented to verify the effectiveness of the proposed technique.
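    A common way to estimate integer-pixel translational shifts in a single step is to take the peak of the FFT-based cross-correlation between frames. The sketch below illustrates that idea on synthetic frames; it stands in for, and does not reproduce, the specific cost-function-based registration proposed in the paper.

    ```python
    # One-step estimation of a translational shift between two frames from the peak
    # of the FFT-based cross-correlation. Synthetic frames; the paper's specific
    # registration cost function is not reproduced here.
    import numpy as np

    rng = np.random.default_rng(5)
    frame = rng.random((64, 64))
    true_shift = (3, -5)                                   # rows, cols
    shifted = np.roll(frame, true_shift, axis=(0, 1))

    xcorr = np.fft.ifft2(np.conj(np.fft.fft2(frame)) * np.fft.fft2(shifted)).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    est = [p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape)]
    print("estimated shift:", est)                         # ≈ [3, -5]
    ```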

  16. Climate reconstruction analysis using coexistence likelihood estimation (CRACLE): a method for the estimation of climate using vegetation.

    PubMed

    Harbert, Robert S; Nixon, Kevin C

    2015-08-01

    Plant distributions have long been understood to be correlated with the environmental conditions to which species are adapted. Climate is one of the major components driving species distributions. Therefore, it is expected that the plants coexisting in a community are reflective of the local environment, particularly climate. Presented here is a method for the estimation of climate from local plant species coexistence data. The method, Climate Reconstruction Analysis using Coexistence Likelihood Estimation (CRACLE), is a likelihood-based method that employs specimen collection data at a global scale for the inference of species climate tolerance. CRACLE calculates the maximum joint likelihood of coexistence given individual species climate tolerance characterization to estimate the expected climate. Plant distribution data for more than 4000 species were used to show that this method accurately infers expected climate profiles for 165 sites with diverse climatic conditions. Estimates differ from the WorldClim global climate model by less than 1.5°C on average for mean annual temperature and less than ∼250 mm for mean annual precipitation. This is a significant improvement upon other plant-based climate-proxy methods. CRACLE validates long hypothesized interactions between climate and local associations of plant species. Furthermore, CRACLE successfully estimates climate that is consistent with the widely used WorldClim model and therefore may be applied to the quantitative estimation of paleoclimate in future studies. © 2015 Botanical Society of America, Inc.
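    The core computation, combining per-species climate tolerances into a joint likelihood and taking the maximising climate as the estimate, can be sketched with Gaussian tolerance profiles. The species and tolerance values below are invented and do not reflect CRACLE's actual specimen-based tolerance characterisation.

    ```python
    # Toy version of coexistence-based climate estimation: characterise each
    # co-occurring species' tolerance for mean annual temperature (MAT) as a
    # Gaussian and take the temperature maximising the joint (product) likelihood.
    # The tolerance values below are invented for illustration.
    import numpy as np
    from scipy.stats import norm

    species_tolerance = {          # (mean MAT in °C, sd) -- hypothetical values
        "species_a": (12.0, 4.0),
        "species_b": (16.0, 5.0),
        "species_c": (14.0, 3.0),
    }

    mat_grid = np.linspace(-5, 35, 801)
    joint_loglik = sum(norm.logpdf(mat_grid, mu, sd)
                       for mu, sd in species_tolerance.values())
    print(f"estimated MAT: {mat_grid[np.argmax(joint_loglik)]:.1f} °C")
    ```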

  17. Supervisor Autonomy and Considerate Leadership Style are Associated with Supervisors' Likelihood to Accommodate Back Injured Workers.

    PubMed

    McGuire, Connor; Kristman, Vicki L; Shaw, William; Williams-Whitt, Kelly; Reguly, Paula; Soklaridis, Sophie

    2015-09-01

    To determine the association between supervisors' leadership style and autonomy and supervisors' likelihood of supporting job accommodations for back-injured workers. A cross-sectional study of supervisors from Canadian and US employers was conducted using a web-based, self-report questionnaire that included a case vignette of a back-injured worker. Autonomy and two dimensions of leadership style (considerate and initiating structure) were included as exposures. The outcome, supervisors' likeliness to support job accommodation, was measured with the Job Accommodation Scale (JAS). We conducted univariate analyses of all variables and bivariate analyses of the JAS score with each exposure and potential confounding factor. We used multivariable generalized linear models to control for confounding factors. A total of 796 supervisors participated. Considerate leadership style (β = .012; 95% CI .009-.016) and autonomy (β = .066; 95% CI .025-.11) were positively associated with supervisors' likelihood to accommodate after adjusting for appropriate confounding factors. An initiating structure leadership style was not significantly associated with supervisors' likelihood to accommodate (β = .0018; 95% CI -.0026 to .0061) after adjusting for appropriate confounders. Autonomy and a considerate leadership style were positively associated with supervisors' likelihood to accommodate a back-injured worker. Providing supervisors with more autonomy over decisions of accommodation and developing their considerate leadership style may aid in increasing work accommodation for back-injured workers and preventing prolonged work disability.

  18. Likelihood analysis of spatial capture-recapture models for stratified or class structured populations

    USGS Publications Warehouse

    Royle, J. Andrew; Sutherland, Christopher S.; Fuller, Angela K.; Sun, Catherine C.

    2015-01-01

    We develop a likelihood analysis framework for fitting spatial capture-recapture (SCR) models to data collected on class structured or stratified populations. Our interest is motivated by the necessity of accommodating the problem of missing observations of individual class membership. This is particularly problematic in SCR data arising from DNA analysis of scat, hair or other material, which frequently yields individual identity but fails to identify the sex. Moreover, this can represent a large fraction of the data and, given the typically small sample sizes of many capture-recapture studies based on DNA information, utilization of the data with missing sex information is necessary. We develop the class structured likelihood for the case of missing covariate values, and then we address the scaling of the likelihood so that models with and without class structured parameters can be formally compared regardless of missing values. We apply our class structured model to black bear data collected in New York in which sex could be determined for only 62 of 169 uniquely identified individuals. The models containing sex-specificity of both the intercept of the SCR encounter probability model and the distance coefficient, and including a behavioral response are strongly favored by log-likelihood. Estimated population sex ratio is strongly influenced by sex structure in model parameters illustrating the importance of rigorous modeling of sex differences in capture-recapture models.
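
    The key device described above, marginalising the likelihood over missing class membership, can be sketched in a few lines. This is not the authors' code: the spatial part of the SCR model is collapsed to a single class-specific detection probability, purely to show how individuals of unknown sex contribute a mixture term weighted by the sex-ratio parameter.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def neg_log_lik(params, y, n_occ, sex):
    """y: detections per individual; n_occ: sampling occasions;
    sex: 1 = female, 0 = male, -1 = unknown (missing)."""
    a_f, a_m, logit_psi = params
    p_f, p_m, psi = expit(a_f), expit(a_m), expit(logit_psi)
    ll = 0.0
    for yi, si in zip(y, sex):
        lf = p_f**yi * (1 - p_f)**(n_occ - yi)   # binomial kernel, female parameters
        lm = p_m**yi * (1 - p_m)**(n_occ - yi)   # binomial kernel, male parameters
        if si == 1:
            ll += np.log(psi * lf)
        elif si == 0:
            ll += np.log((1 - psi) * lm)
        else:                                    # unknown sex: marginalise over class
            ll += np.log(psi * lf + (1 - psi) * lm)
    return -ll

# Synthetic data loosely mimicking the black bear example (62 of 169 sexed)
rng = np.random.default_rng(1)
n, n_occ = 169, 8
true_sex = rng.random(n) < 0.6
p = np.where(true_sex, 0.25, 0.15)
y = rng.binomial(n_occ, p)
sex = np.where(rng.random(n) < 62 / 169, true_sex.astype(int), -1)
fit = minimize(neg_log_lik, x0=[0.0, 0.0, 0.0], args=(y, n_occ, sex))
print(expit(fit.x))  # estimated p_female, p_male and sex ratio psi
```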

  19. Two-part models with stochastic processes for modelling longitudinal semicontinuous data: Computationally efficient inference and modelling the overall marginal mean.

    PubMed

    Yiu, Sean; Tom, Brian Dm

    2017-01-01

    Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicates model fitting. Thus, non-standard computationally intensive procedures based on simulating the marginal likelihood have so far only been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and when it is of interest to directly model the overall marginal mean. The methodology is applied on a psoriatic arthritis data set concerning functional disability.
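
    The computational device described above, rewriting the awkward integrals as a multivariate normal CDF, can be illustrated with a small sketch (not the authors' code): once the marginal likelihood takes that form, a rectangle probability of a correlated Gaussian vector is all that needs to be evaluated, and SciPy provides an efficient routine for it.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
d = 10                                   # dimension of the latent process at the observation times
A = rng.normal(size=(d, d))
cov = A @ A.T + d * np.eye(d)            # an arbitrary positive-definite covariance (illustrative)
upper = rng.normal(scale=2.0, size=d)    # assumed rectangle upper limits (thresholds)

# P(Z_1 <= upper_1, ..., Z_d <= upper_d) for Z ~ N(0, cov); this quantity would
# enter the transformed (log-)marginal likelihood in place of a d-fold integral.
prob = multivariate_normal(mean=np.zeros(d), cov=cov).cdf(upper)
print(prob)
```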

  20. Predictors of Extra-Marital Partnerships among Women Married to Fishermen along Lake Victoria in Kisumu County, Kenya

    PubMed Central

    Kwena, Zachary; Mwanzo, Isaac; Shisanya, Chris; Camlin, Carol; Turan, Janet; Achiro, Lilian; Bukusi, Elizabeth

    2014-01-01

    Background The vulnerability of women to HIV infection makes establishing predictors of women's involvement in extra-marital partnerships critical. We investigated the predictors of extra-marital partnerships among women married to fishermen. Methods The current analyses are part of a mixed methods cross-sectional survey of 1090 gender-matched interviews with 545 couples and 12 focus group discussions (FGDs) with 59 couples. Using a proportional to size simple random sample of fishermen as our index participants, we asked them to enrol in the study with their spouses. The consenting couples were interviewed simultaneously in separate private rooms. In addition to socio-economic and demographic data, we collected information on sexual behaviour including extra-marital sexual partnerships. We analysed these data using descriptive statistics and multivariate logistic regression. For FGDs, couples willing to participate were invited, consented and separated for simultaneous FGDs by gender-matched moderators. The resultant audio files were transcribed verbatim and translated into English for coding and thematic content analysis using NVivo 9. Results The prevalence of extra-marital partnerships among women was 6.2% within a reference time of six months. Factors that were independently associated with increased likelihood of extra-marital partnerships were domestic violence (aOR, 1.45; 95% CI 1.09–1.92), women reporting being denied a preferred sex position (aOR, 3.34; 95% CI 1.26–8.84) and a spouse with a longer erect penis (aOR, 1.34; 95% CI 1.00–1.78). Conversely, women's age above 24 years (aOR, 0.33; 95% CI 0.14–0.78) and women's increased sexual satisfaction (aOR, 0.92; 95% CI 0.87–0.96) were associated with reduced likelihood of extra-marital partnerships. Conclusion Domestic violence, denial of a preferred sex position, a spouse's longer erect penis, younger age and increased sexual satisfaction were the main predictors of women's involvement in extra-marital partnerships. Integration of sex education, counselling and life skills training in couple HIV prevention programs might help in risk reduction. PMID:24747951

  1. The social value of candidate HIV cures: actualism versus possibilism

    PubMed Central

    Brown, Regina; Evans, Nicholas Greig

    2017-01-01

    A sterilising or functional cure for HIV is a serious scientific challenge but presents a viable pathway to the eradication of HIV. Such an event would be extremely valuable in terms of relieving the burden of a terrible disease; however, a coordinated commitment to implement healthcare interventions, particularly in regions that bear the brunt of the HIV epidemic, is lacking. In this paper, we examine two strategies for evaluating candidate HIV cures, based on our beliefs about the likelihood of global implementation. We reject possibilist interpretations of social value that do not account for the likelihood that a plan to cure HIV will be followed through. We argue, instead, for an actualist ranking of options for action, which accounts for the likelihood that a cure will be low cost, scalable and easy to administer worldwide. PMID:27402887

  2. Simultaneous interaction with base and phosphate moieties modulates the phosphodiester cleavage of dinucleoside 3',5'-monophosphates by dinuclear Zn2+ complexes of di(azacrown) ligands.

    PubMed

    Wang, Qi; Lönnberg, Harri

    2006-08-23

    Five dinucleating ligands (1-5) and one trinucleating ligand (6) incorporating 1,5,9-triazacyclododecan-3-yloxy groups attached to an aromatic scaffold have been synthesized. The ability of the Zn(2+) complexes of these ligands to promote the transesterification of dinucleoside 3',5'-monophosphates to a 2',3'-cyclic phosphate derived from the 3'-linked nucleoside by release of the 5'-linked nucleoside has been studied over a narrow pH range, from pH 5.8 to 7.2, at 90 degrees C. The dinuclear complexes show marked base moiety selectivity. Among the four dinucleotide 3',5'-phosphates studied, viz. adenylyl-3',5'-adenosine (ApA), adenylyl-3',5'-uridine (ApU), uridylyl-3',5'-adenosine (UpA), and uridylyl-3',5'-uridine (UpU), the dimers containing one uracil base (ApU and UpA) are cleaved up to 2 orders of magnitude more readily than those containing either two uracil bases (UpU) or two adenine bases (ApA). The trinuclear complex (6), however, cleaves UpU as readily as ApU and UpA, while the cleavage of ApA remains slow. UV spectrophotometric and (1)H NMR spectroscopic studies with one of the dinucleating ligands (3) verify binding to the bases of UpU and ApU at less than millimolar concentrations, while no interaction with the base moieties of ApA is observed. With ApU and UpA, one of the Zn(2+)-azacrown moieties in all likelihood anchors the cleaving agent to the uracil base of the substrate, while the other azacrown moiety serves as a catalyst for the phosphodiester transesterification. With UpU, two azacrown moieties are engaged in the base moiety binding. The catalytic activity is, hence, lost, but it can be restored by addition of a third azacrown group on the cleaving agent.

  3. Univariate and bivariate likelihood-based meta-analysis methods performed comparably when marginal sensitivity and specificity were the targets of inference.

    PubMed

    Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H

    2017-03-01

    To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples-thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities-to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
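
    The contrast drawn above between the binomial likelihood and the normal approximation can be seen already in a single-study sketch with invented counts: the exact binomial likelihood for sensitivity peaks at the observed proportion, while the continuity-corrected normal approximation on the logit scale is pulled toward 50% for small studies.

```python
import numpy as np
from scipy.stats import binom, norm
from scipy.special import logit, expit

tp, n = 28, 30                                   # invented 2x2-table margin: 28 true positives of 30 diseased
sens_grid = np.linspace(0.5, 0.999, 500)

loglik_binom = binom.logpmf(tp, n, sens_grid)    # exact binomial within-study likelihood
y = logit((tp + 0.5) / (n + 1))                  # continuity-corrected logit estimate
se = np.sqrt(1 / (tp + 0.5) + 1 / (n - tp + 0.5))  # delta-method standard error
loglik_normal = norm.logpdf(logit(sens_grid), loc=y, scale=se)

print("binomial MLE      :", sens_grid[np.argmax(loglik_binom)])
print("normal approx peak:", expit(y))           # shrunk toward 0.5 relative to tp/n
```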

  4. A long-term earthquake rate model for the central and eastern United States from smoothed seismicity

    USGS Publications Warehouse

    Moschetti, Morgan P.

    2015-01-01

    I present a long-term earthquake rate model for the central and eastern United States from adaptive smoothed seismicity. By employing pseudoprospective likelihood testing (L-test), I examined the effects of fixed and adaptive smoothing methods and the effects of catalog duration and composition on the ability of the models to forecast the spatial distribution of recent earthquakes. To stabilize the adaptive smoothing method for regions of low seismicity, I introduced minor modifications to the way that the adaptive smoothing distances are calculated. Across all smoothed seismicity models, the use of adaptive smoothing and the use of earthquakes from the recent part of the catalog optimizes the likelihood for tests with M≥2.7 and M≥4.0 earthquake catalogs. The smoothed seismicity models optimized by likelihood testing with M≥2.7 catalogs also produce the highest likelihood values for M≥4.0 likelihood testing, thus substantiating the hypothesis that the locations of moderate-size earthquakes can be forecast by the locations of smaller earthquakes. The likelihood test does not, however, maximize the fraction of earthquakes that are better forecast than a seismicity rate model with uniform rates in all cells. In this regard, fixed smoothing models perform better than adaptive smoothing models. The preferred model of this study is the adaptive smoothed seismicity model, based on its ability to maximize the joint likelihood of predicting the locations of recent small-to-moderate-size earthquakes across eastern North America. The preferred rate model delineates 12 regions where the annual rate of M≥5 earthquakes exceeds 2×10−3. Although these seismic regions have been previously recognized, the preferred forecasts are more spatially concentrated than the rates from fixed smoothed seismicity models, with rate increases of up to a factor of 10 near clusters of high seismic activity.
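
    A minimal sketch of the L-test idea described above, using synthetic epicentres and fixed-bandwidth Gaussian smoothing only (the adaptive smoothing distances and catalogue choices of the study are not reproduced): a rate grid is built from an earlier catalogue and scored by the joint Poisson log-likelihood of a later catalogue.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(3)
nx = ny = 50                                                 # forecast grid (arbitrary units)
early = rng.normal(loc=[25, 25], scale=5, size=(400, 2))     # "training" epicentres
late = rng.normal(loc=[25, 25], scale=5, size=(60, 2))       # "testing" epicentres

def smoothed_rate(events, total_rate, sigma):
    """Gaussian-kernel smoothed expected counts per cell, normalised to total_rate."""
    xs, ys = np.meshgrid(np.arange(nx) + 0.5, np.arange(ny) + 0.5, indexing="ij")
    rate = np.zeros((nx, ny))
    for ex, ey in events:
        rate += np.exp(-((xs - ex) ** 2 + (ys - ey) ** 2) / (2 * sigma ** 2))
    return total_rate * rate / rate.sum()

def joint_log_likelihood(rate, events):
    """Poisson log-likelihood of observed cell counts given the forecast rates."""
    counts, _, _ = np.histogram2d(events[:, 0], events[:, 1],
                                  bins=[np.arange(nx + 1), np.arange(ny + 1)])
    return poisson.logpmf(counts, rate).sum()

for sigma in (1.0, 3.0, 10.0):                               # compare smoothing distances
    rate = smoothed_rate(early, total_rate=len(late), sigma=sigma)
    print(sigma, joint_log_likelihood(rate, late))
```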

  5. Digital Detection and Processing of Multiple Quadrature Harmonics for EPR Spectroscopy

    PubMed Central

    Ahmad, R.; Som, S.; Kesselring, E.; Kuppusamy, P.; Zweier, J.L.; Potter, L.C.

    2010-01-01

    A quadrature digital receiver and associated signal estimation procedure are reported for L-band electron paramagnetic resonance (EPR) spectroscopy. The approach provides simultaneous acquisition and joint processing of multiple harmonics in both in-phase and out-of-phase channels. The digital receiver, based on a high-speed dual-channel analog-to-digital converter, allows direct digital down-conversion with heterodyne processing using digital capture of the microwave reference signal. Thus, the receiver avoids noise and nonlinearity associated with analog mixers. Also, the architecture allows for low-Q anti-alias filtering and does not require the sampling frequency to be time-locked to the microwave reference. A noise model applicable for arbitrary contributions of oscillator phase noise is presented, and a corresponding maximum-likelihood estimator of unknown parameters is also reported. The signal processing is applicable for Lorentzian lineshape under nonsaturating conditions. The estimation is carried out using a convergent iterative algorithm capable of jointly processing the in-phase and out-of-phase data in the presence of phase noise and unknown microwave phase. Cramér-Rao bound analysis and simulation results demonstrate a significant reduction in linewidth estimation error using quadrature detection, for both low and high values of phase noise. EPR spectroscopic data are also reported for illustration. PMID:20971667
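
    A hedged sketch of the lineshape-estimation step only: with additive Gaussian noise, maximum-likelihood estimation of Lorentzian parameters reduces to nonlinear least squares. The reported estimator additionally models oscillator phase noise and the in-phase/out-of-phase channels jointly, which this sketch omits; the field axis and parameter values below are arbitrary.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(b, amplitude, center, linewidth):
    return amplitude * linewidth**2 / ((b - center) ** 2 + linewidth**2)

rng = np.random.default_rng(4)
field = np.linspace(-5, 5, 400)                 # sweep axis (arbitrary units)
truth = (1.0, 0.3, 0.8)                         # amplitude, center, linewidth
signal = lorentzian(field, *truth) + rng.normal(0, 0.05, field.size)

# Gaussian noise => maximum likelihood == least squares
popt, pcov = curve_fit(lorentzian, field, signal, p0=(0.5, 0.0, 1.0))
print("estimated linewidth:", popt[2], "+/-", np.sqrt(pcov[2, 2]))
```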

  6. Modelling rainfall amounts using mixed-gamma model for Kuantan district

    NASA Astrophysics Data System (ADS)

    Zakaria, Roslinazairimah; Moslim, Nor Hafizah

    2017-05-01

    An efficient design of flood mitigation and construction of crop growth models depend upon good understanding of the rainfall process and characteristics. Gamma distribution is usually used to model nonzero rainfall amounts. In this study, the mixed-gamma model is applied to accommodate both zero and nonzero rainfall amounts. The mixed-gamma model presented is for the independent case. The formulae of mean and variance are derived for the sum of two and three independent mixed-gamma variables, respectively. Firstly, the gamma distribution is used to model the nonzero rainfall amounts and the parameters of the distribution (shape and scale) are estimated using the maximum likelihood estimation method. Then, the mixed-gamma model is defined for both zero and nonzero rainfall amounts simultaneously. The formulae of mean and variance for the sum of two and three independent mixed-gamma variables derived are tested using the monthly rainfall amounts from rainfall stations within Kuantan district in Pahang, Malaysia. Based on the Kolmogorov-Smirnov goodness of fit test, the results demonstrate that the distribution of the observed sums of rainfall amounts is not significantly different at the 5% significance level from that of the generated sums of independent mixed-gamma variables. The methodology and formulae demonstrated can be applied to find the sum of more than three independent mixed-gamma variables.
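
    A sketch of the fitting and moment calculations described above, on synthetic monthly rainfall (not the Kuantan data): the wet-month probability gives the mixing weight, the gamma part is fitted by maximum likelihood to the nonzero amounts, and the mean and variance of a sum of independent mixed-gamma variables follow by adding the per-variable moments.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(5)
# Synthetic record: ~80% wet months, gamma-distributed nonzero amounts (mm)
rain = np.where(rng.random(600) < 0.8, rng.gamma(shape=2.0, scale=40.0, size=600), 0.0)

p_wet = np.mean(rain > 0)                               # mixing weight of the mixed-gamma model
shape, _, scale = gamma.fit(rain[rain > 0], floc=0)     # MLE of shape and scale for the gamma part

mean_mixed = p_wet * shape * scale                      # E[X] for one mixed-gamma variable
var_mixed = p_wet * shape * scale**2 * (1 + shape) - mean_mixed**2

# Sum of three independent, identically distributed mixed-gamma variables:
print("mean of sum:", 3 * mean_mixed, " variance of sum:", 3 * var_mixed)
```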

  7. Digital detection and processing of multiple quadrature harmonics for EPR spectroscopy.

    PubMed

    Ahmad, R; Som, S; Kesselring, E; Kuppusamy, P; Zweier, J L; Potter, L C

    2010-12-01

    A quadrature digital receiver and associated signal estimation procedure are reported for L-band electron paramagnetic resonance (EPR) spectroscopy. The approach provides simultaneous acquisition and joint processing of multiple harmonics in both in-phase and out-of-phase channels. The digital receiver, based on a high-speed dual-channel analog-to-digital converter, allows direct digital down-conversion with heterodyne processing using digital capture of the microwave reference signal. Thus, the receiver avoids noise and nonlinearity associated with analog mixers. Also, the architecture allows for low-Q anti-alias filtering and does not require the sampling frequency to be time-locked to the microwave reference. A noise model applicable for arbitrary contributions of oscillator phase noise is presented, and a corresponding maximum-likelihood estimator of unknown parameters is also reported. The signal processing is applicable for Lorentzian lineshape under nonsaturating conditions. The estimation is carried out using a convergent iterative algorithm capable of jointly processing the in-phase and out-of-phase data in the presence of phase noise and unknown microwave phase. Cramér-Rao bound analysis and simulation results demonstrate a significant reduction in linewidth estimation error using quadrature detection, for both low and high values of phase noise. EPR spectroscopic data are also reported for illustration. Copyright © 2010 Elsevier Inc. All rights reserved.

  8. Hybrid fs/ps CARS for Sooting and Particle-laden Flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffmeister, Kathryn N. Gabet; Guildenbecher, Daniel Robert; Kearney, Sean P.

    2015-12-01

    We report the application of ultrafast rotational coherent anti-Stokes Raman scattering (CARS) for temperature and relative oxygen concentration measurements in the plume emanating from a burning aluminized ammonium perchlorate propellant strand. Combustion of these metal-based propellants is a particularly hostile environment for laser-based diagnostics, with intense background luminosity, scattering and beam obstruction from hot metal particles that can be as large as several hundred microns in diameter. CARS spectra that were previously obtained using nanosecond pulsed lasers in an aluminum-particle-seeded flame are examined and are determined to be severely impacted by nonresonant background, presumably as a result of the plasma formed by particulate-enhanced laser-induced breakdown. Introduction of fs/ps laser pulses enables CARS detection at reduced pulse energies, decreasing the likelihood of breakdown, while simultaneously providing time-gated elimination of any nonresonant background interference. Temperature probability densities and temperature/oxygen correlations were constructed from ensembles of several thousand single-laser-shot measurements from the fs/ps rotational CARS measurement volume positioned within 3 mm or less of the burning propellant surface. Preliminary results in canonical flames are presented using a hybrid fs/ps vibrational CARS system to demonstrate our progress towards acquiring vibrational CARS measurements for more accurate temperatures in the very high temperature propellant burns.

  9. Context-aware adaptive spelling in motor imagery BCI

    NASA Astrophysics Data System (ADS)

    Perdikis, S.; Leeb, R.; Millán, J. d. R.

    2016-06-01

    Objective. This work presents a first motor imagery-based, adaptive brain-computer interface (BCI) speller, which is able to exploit application-derived context for improved, simultaneous classifier adaptation and spelling. Online spelling experiments with ten able-bodied users evaluate the ability of our scheme, first, to alleviate non-stationarity of brain signals for restoring the subject’s performances, second, to guide naive users into BCI control avoiding initial offline BCI calibration and, third, to outperform regular unsupervised adaptation. Approach. Our co-adaptive framework combines the BrainTree speller with smooth-batch linear discriminant analysis adaptation. The latter enjoys contextual assistance through BrainTree’s language model to improve online expectation-maximization maximum-likelihood estimation. Main results. Our results verify the possibility to restore single-sample classification and BCI command accuracy, as well as spelling speed for expert users. Most importantly, context-aware adaptation performs significantly better than its unsupervised equivalent and similar to the supervised one. Although no significant differences are found with respect to the state-of-the-art PMean approach, the proposed algorithm is shown to be advantageous for 30% of the users. Significance. We demonstrate the possibility to circumvent supervised BCI recalibration, saving time without compromising the adaptation quality. On the other hand, we show that this type of classifier adaptation is not as efficient for BCI training purposes.

  10. Context-aware adaptive spelling in motor imagery BCI.

    PubMed

    Perdikis, S; Leeb, R; Millán, J D R

    2016-06-01

    This work presents a first motor imagery-based, adaptive brain-computer interface (BCI) speller, which is able to exploit application-derived context for improved, simultaneous classifier adaptation and spelling. Online spelling experiments with ten able-bodied users evaluate the ability of our scheme, first, to alleviate non-stationarity of brain signals for restoring the subject's performances, second, to guide naive users into BCI control avoiding initial offline BCI calibration and, third, to outperform regular unsupervised adaptation. Our co-adaptive framework combines the BrainTree speller with smooth-batch linear discriminant analysis adaptation. The latter enjoys contextual assistance through BrainTree's language model to improve online expectation-maximization maximum-likelihood estimation. Our results verify the possibility to restore single-sample classification and BCI command accuracy, as well as spelling speed for expert users. Most importantly, context-aware adaptation performs significantly better than its unsupervised equivalent and similar to the supervised one. Although no significant differences are found with respect to the state-of-the-art PMean approach, the proposed algorithm is shown to be advantageous for 30% of the users. We demonstrate the possibility to circumvent supervised BCI recalibration, saving time without compromising the adaptation quality. On the other hand, we show that this type of classifier adaptation is not as efficient for BCI training purposes.

  11. Robust Visual Tracking via Online Discriminative and Low-Rank Dictionary Learning.

    PubMed

    Zhou, Tao; Liu, Fanghui; Bhaskar, Harish; Yang, Jie

    2017-09-12

    In this paper, we propose a novel and robust tracking framework based on online discriminative and low-rank dictionary learning. The primary aim of this paper is to obtain compact and low-rank dictionaries that can provide good discriminative representations of both target and background. We accomplish this by exploiting the recovery ability of low-rank matrices. That is if we assume that the data from the same class are linearly correlated, then the corresponding basis vectors learned from the training set of each class shall render the dictionary to become approximately low-rank. The proposed dictionary learning technique incorporates a reconstruction error that improves the reliability of classification. Also, a multiconstraint objective function is designed to enable active learning of a discriminative and robust dictionary. Further, an optimal solution is obtained by iteratively computing the dictionary, coefficients, and by simultaneously learning the classifier parameters. Finally, a simple yet effective likelihood function is implemented to estimate the optimal state of the target during tracking. Moreover, to make the dictionary adaptive to the variations of the target and background during tracking, an online update criterion is employed while learning the new dictionary. Experimental results on a publicly available benchmark dataset have demonstrated that the proposed tracking algorithm performs better than other state-of-the-art trackers.

  12. Health Impact Assessment Impact Characterization Table

    EPA Pesticide Factsheets

    The potential health impacts of the proposed decision should be characterized based on the following criteria: Direction, Likelihood, Magnitude, Distribution, Severity, Permanence, Strength of Evidence.

  13. Anatomy of the Higgs fits: A first guide to statistical treatments of the theoretical uncertainties

    NASA Astrophysics Data System (ADS)

    Fichet, Sylvain; Moreau, Grégory

    2016-04-01

    The studies of the Higgs boson couplings based on the recent and upcoming LHC data open up a new window on physics beyond the Standard Model. In this paper, we propose a statistical guide to the consistent treatment of the theoretical uncertainties entering the Higgs rate fits. Both the Bayesian and frequentist approaches are systematically analysed in a unified formalism. We present analytical expressions for the marginal likelihoods, useful to implement simultaneously the experimental and theoretical uncertainties. We review the various origins of the theoretical errors (QCD, EFT, PDF, production mode contamination…). All these individual uncertainties are thoroughly combined with the help of moment-based considerations. The theoretical correlations among Higgs detection channels appear to affect the location and size of the best-fit regions in the space of Higgs couplings. We discuss the recurrent question of the shape of the prior distributions for the individual theoretical errors and find that a nearly Gaussian prior arises from the error combinations. We also develop the bias approach, which is an alternative to marginalisation providing more conservative results. The statistical framework to apply the bias principle is introduced and two realisations of the bias are proposed. Finally, depending on the statistical treatment, the Standard Model prediction for the Higgs signal strengths is found to lie within either the 68% or 95% confidence level region obtained from the latest analyses of the 7 and 8 TeV LHC datasets.
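
    The marginalisation over theoretical errors discussed above has a simple limiting case that is easy to check numerically: a Gaussian experimental likelihood for a signal strength, marginalised over a Gaussian-distributed theoretical bias, is again Gaussian with the two uncertainties combined in quadrature. The numbers below are illustrative only.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

mu_hat, sigma_exp = 1.1, 0.2        # measured signal strength and experimental error (made up)
sigma_th = 0.1                      # assumed Gaussian theoretical (e.g. QCD/PDF) uncertainty

def marginal_likelihood(mu):
    # integrate the experimental likelihood over the theoretical nuisance delta
    integrand = lambda d: norm.pdf(mu_hat, loc=mu + d, scale=sigma_exp) * \
                          norm.pdf(d, loc=0.0, scale=sigma_th)
    return quad(integrand, -5 * sigma_th, 5 * sigma_th)[0]

mu = 0.9
print(marginal_likelihood(mu))
print(norm.pdf(mu_hat, loc=mu, scale=np.hypot(sigma_exp, sigma_th)))  # analytic check
```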

  14. Real-time localization of mobile device by filtering method for sensor fusion

    NASA Astrophysics Data System (ADS)

    Fuse, Takashi; Nagara, Keita

    2017-06-01

    Most applications on mobile devices require self-localization of the device. Because GPS cannot be used in indoor environments, the position of a mobile device must be estimated autonomously, typically using an IMU. Since IMUs have low accuracy, self-localization in indoor environments remains challenging. Image-based self-localization methods have also been developed, and their accuracy is improving. This paper develops a self-localization method for indoor environments without GPS by simultaneously integrating sensors on mobile devices, such as the IMU and cameras. The proposed method consists of observation, forecasting and filtering steps. The position and velocity of the mobile device are defined as a state vector. In the self-localization, observations correspond to the observation data from the IMU and camera (observation vector), forecasting to the mobile device motion model (system model), and filtering to tracking by inertial surveying together with coplanarity conditions and an inverse depth model (observation model). Positions of the tracked mobile device are first estimated with the system model (forecasting step), which assumes linear motion. The estimated positions are then optimized with respect to the new observation data based on their likelihood (filtering step). The optimization in the filtering step corresponds to estimating the maximum a posteriori probability. A particle filter is used for the calculations in the forecasting and filtering steps. The proposed method is applied to data acquired by mobile devices in an indoor environment. The experiments confirm the high performance of the method.
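
    A one-dimensional sketch of the forecasting/filtering cycle described above, under simplifying assumptions: particles carrying position and velocity are propagated with a noisy constant-velocity (IMU-like) system model and reweighted by the likelihood of a position observation, with systematic resampling. The camera-based coplanarity/inverse-depth observation model of the paper is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)
n_particles, dt = 1000, 0.1
particles = np.zeros((n_particles, 2))            # columns: position, velocity

true_pos, true_vel = 0.0, 1.0
for step in range(50):
    true_pos += true_vel * dt
    z = true_pos + rng.normal(0, 0.3)             # noisy position observation

    # Forecasting step: constant-velocity system model with process noise
    particles[:, 0] += particles[:, 1] * dt + rng.normal(0, 0.02, n_particles)
    particles[:, 1] += rng.normal(0, 0.05, n_particles)

    # Filtering step: weight by the observation likelihood, then resample
    w = np.exp(-0.5 * ((z - particles[:, 0]) / 0.3) ** 2)
    w /= w.sum()
    idx = rng.choice(n_particles, size=n_particles, p=w)
    particles = particles[idx]

print("estimated position:", particles[:, 0].mean(), " true:", true_pos)
```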

  15. Voltage-based device tracking in a 1.5 Tesla MRI during imaging: initial validation in swine models.

    PubMed

    Schmidt, Ehud J; Tse, Zion T H; Reichlin, Tobias R; Michaud, Gregory F; Watkins, Ronald D; Butts-Pauly, Kim; Kwong, Raymond Y; Stevenson, William; Schweitzer, Jeffrey; Byrd, Israel; Dumoulin, Charles L

    2014-03-01

    Voltage-based device-tracking (VDT) systems are commonly used for tracking invasive devices in electrophysiological cardiac-arrhythmia therapy. During electrophysiological procedures, electro-anatomic mapping workstations provide guidance by integrating VDT location and intracardiac electrocardiogram information with X-ray, computerized tomography, ultrasound, and MR images. MR assists navigation, mapping, and radiofrequency ablation. Multimodality interventions require multiple patient transfers between an MRI and the X-ray/ultrasound electrophysiological suite, increasing the likelihood of patient-motion and image misregistration. An MRI-compatible VDT system may increase efficiency, as there is currently no single method to track devices both inside and outside the MRI scanner. An MRI-compatible VDT system was constructed by modifying a commercial system. Hardware was added to reduce MRI gradient-ramp and radiofrequency unblanking pulse interference. VDT patches and cables were modified to reduce heating. Five swine cardiac VDT electro-anatomic mapping interventions were performed, navigating inside and thereafter outside the MRI. Three-catheter VDT interventions were performed at >12 frames per second both inside and outside the MRI scanner with <3 mm error. Catheters were followed on VDT- and MRI-derived maps. Simultaneous VDT and imaging was possible in repetition time >32 ms sequences with <0.5 mm errors, and <5% MRI signal-to-noise ratio (SNR) loss. At shorter repetition times, only intracardiac electrocardiogram was reliable. Radiofrequency heating was <1.5°C. An MRI-compatible VDT system is feasible. Copyright © 2013 Wiley Periodicals, Inc.

  16. Voltage-based Device Tracking in a 1.5 Tesla MRI during Imaging: Initial validation in swine models

    PubMed Central

    Schmidt, Ehud J; Tse, Zion TH; Reichlin, Tobias R; Michaud, Gregory F; Watkins, Ronald D; Butts-Pauly, Kim; Kwong, Raymond Y; Stevenson, William; Schweitzer, Jeffrey; Byrd, Israel; Dumoulin, Charles L

    2013-01-01

    Purpose Voltage-based device-tracking (VDT) systems are commonly used for tracking invasive devices in electrophysiological (EP) cardiac-arrhythmia therapy. During EP procedures, electro-anatomic-mapping (EAM) workstations provide guidance by integrating VDT location and intra-cardiac-ECG information with X-ray, CT, Ultrasound, and MR images. MR assists navigation, mapping and radio-frequency-ablation. Multi-modality interventions require multiple patient transfers between an MRI and the X-ray/ultrasound EP suite, increasing the likelihood of patient-motion and image mis-registration. An MRI-compatible VDT system may increase efficiency, since there is currently no single method to track devices both inside and outside the MRI scanner. Methods An MRI-compatible VDT system was constructed by modifying a commercial system. Hardware was added to reduce MRI gradient-ramp and radio-frequency-unblanking-pulse interference. VDT patches and cables were modified to reduce heating. Five swine cardiac VDT EAM-mapping interventions were performed, navigating inside and thereafter outside the MRI. Results Three-catheter VDT interventions were performed at >12 frames-per-second both inside and outside the MRI scanner with <3mm error. Catheters were followed on VDT- and MRI-derived maps. Simultaneous VDT and imaging was possible in repetition-time (TR) >32 msec sequences with <0.5mm errors, and <5% MRI SNR loss. At shorter TRs, only intra-cardiac-ECG was reliable. RF Heating was <1.5C°. Conclusion An MRI-compatible VDT system is feasible. PMID:23580479

  17. A Procedure To Detect Test Bias Present Simultaneously in Several Items.

    ERIC Educational Resources Information Center

    Shealy, Robin; Stout, William

    A statistical procedure is presented that is designed to test for unidirectional test bias existing simultaneously in several items of an ability test, based on the assumption that test bias is incipient within the two groups' ability differences. The proposed procedure--Simultaneous Item Bias (SIB)--is based on a multidimensional item response…

  18. Detection, Identification, Location, and Remote Sensing Using SAW RFID Sensor Tags

    NASA Technical Reports Server (NTRS)

    Barton, Richard J.; Kennedy, Timothy F.; Williams, Robert M.; Fink, Patrick W.; Ngo, Phong H.

    2009-01-01

    The Electromagnetic Systems Branch (EV4) of the Avionic Systems Division at NASA Johnson Space Center in Houston, TX is studying the utility of surface acoustic wave (SAW) radiofrequency identification (RFID) tags for multiple wireless applications including detection, identification, tracking, and remote sensing of objects on the lunar surface, monitoring of environmental test facilities, structural shape and health monitoring, and nondestructive test and evaluation of assets. For all of these applications, it is anticipated that the system utilized to interrogate the SAW RFID tags may need to operate at fairly long range and in the presence of considerable multipath and multiple-access interference. Towards that end, EV4 is developing a prototype SAW RFID wireless interrogation system for use in such environments called the Passive Adaptive RFID Sensor Equipment (PARSEQ) system. The system utilizes a digitally beam-formed planar receiving antenna array to extend range and provide direction-of-arrival information coupled with an approximate maximum-likelihood signal processing algorithm to provide near-optimal estimation of both range and temperature. The system is capable of forming a large number of beams within the field of view and resolving the information from several tags within each beam. The combination of both spatial and waveform discrimination provides the capability to track and monitor telemetry from a large number of objects appearing simultaneously within the field of view of the receiving array. In this paper, we will consider the application of the PARSEQ system to the problem of simultaneous detection, identification, localization, and temperature estimation for multiple objects. We will summarize the overall design of the PARSEQ system and present a detailed description of the design and performance of the signal detection and estimation algorithms incorporated in the system. The system is currently configured only to measure temperature (jointly with range and tag ID), but future versions will be revised to measure parameters other than temperature as SAW tags capable of interfacing with external sensors become available. It is anticipated that the estimation of arbitrary parameters measured using SAW-based sensors will be based on techniques very similar to the joint range and temperature estimation techniques described in this paper.

  19. High performance volume-of-intersection projectors for 3D-PET image reconstruction based on polar symmetries and SIMD vectorisation.

    PubMed

    Scheins, J J; Vahedipour, K; Pietrzyk, U; Shah, N J

    2015-12-21

    For high-resolution, iterative 3D PET image reconstruction the efficient implementation of forward-backward projectors is essential to minimise the calculation time. Mathematically, the projectors are summarised as a system response matrix (SRM) whose elements define the contribution of image voxels to lines-of-response (LORs). In fact, the SRM easily comprises billions of non-zero matrix elements to evaluate the tremendous number of LORs as provided by state-of-the-art PET scanners. Hence, the performance of iterative algorithms, e.g. maximum-likelihood-expectation-maximisation (MLEM), suffers from severe computational problems due to the intensive memory access and huge number of floating point operations. Here, symmetries occupy a key role in terms of efficient implementation. They reduce the amount of independent SRM elements, thus allowing for a significant matrix compression according to the number of exploitable symmetries. With our previous work, the PET REconstruction Software TOolkit (PRESTO), very high compression factors (>300) are demonstrated by using specific non-Cartesian voxel patterns involving discrete polar symmetries. In this way, a pre-calculated memory-resident SRM using complex volume-of-intersection calculations can be achieved. However, our original ray-driven implementation suffers from addressing voxels, projection data and SRM elements in disfavoured memory access patterns. As a consequence, a rather limited numerical throughput is observed due to the massive waste of memory bandwidth and inefficient usage of cache respectively. In this work, an advantageous symmetry-driven evaluation of the forward-backward projectors is proposed to overcome these inefficiencies. The polar symmetries applied in PRESTO suggest a novel organisation of image data and LOR projection data in memory to enable an efficient single instruction multiple data vectorisation, i.e. simultaneous use of any SRM element for symmetric LORs. In addition, the calculation time is further reduced by using simultaneous multi-threading (SMT). A global speedup factor of 11 without SMT and above 100 with SMT has been achieved for the improved CPU-based implementation while obtaining equivalent numerical results.
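
    The MLEM update that these projectors serve can be written compactly with a sparse system response matrix. The sketch below uses a tiny random system rather than PRESTO's compressed, symmetry-exploiting SRM; it only illustrates the update x_new = x / sensitivity * A^T ( y / (A x) ) whose forward and back projections dominate the runtime.

```python
import numpy as np
from scipy.sparse import random as sparse_random

rng = np.random.default_rng(7)
n_lors, n_voxels = 2000, 400
A = sparse_random(n_lors, n_voxels, density=0.02, random_state=7, format="csr")  # toy SRM
x_true = rng.gamma(2.0, 1.0, n_voxels)
y = rng.poisson(A @ x_true)                       # simulated LOR counts

sensitivity = np.asarray(A.sum(axis=0)).ravel() + 1e-12
x = np.ones(n_voxels)
for _ in range(50):
    forward = A @ x + 1e-12                       # forward projection
    x *= (A.T @ (y / forward)) / sensitivity      # back projection and multiplicative update

print("correlation with truth:", np.corrcoef(x, x_true)[0, 1])
```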

  20. High performance volume-of-intersection projectors for 3D-PET image reconstruction based on polar symmetries and SIMD vectorisation

    NASA Astrophysics Data System (ADS)

    Scheins, J. J.; Vahedipour, K.; Pietrzyk, U.; Shah, N. J.

    2015-12-01

    For high-resolution, iterative 3D PET image reconstruction the efficient implementation of forward-backward projectors is essential to minimise the calculation time. Mathematically, the projectors are summarised as a system response matrix (SRM) whose elements define the contribution of image voxels to lines-of-response (LORs). In fact, the SRM easily comprises billions of non-zero matrix elements to evaluate the tremendous number of LORs as provided by state-of-the-art PET scanners. Hence, the performance of iterative algorithms, e.g. maximum-likelihood-expectation-maximisation (MLEM), suffers from severe computational problems due to the intensive memory access and huge number of floating point operations. Here, symmetries occupy a key role in terms of efficient implementation. They reduce the amount of independent SRM elements, thus allowing for a significant matrix compression according to the number of exploitable symmetries. With our previous work, the PET REconstruction Software TOolkit (PRESTO), very high compression factors (>300) are demonstrated by using specific non-Cartesian voxel patterns involving discrete polar symmetries. In this way, a pre-calculated memory-resident SRM using complex volume-of-intersection calculations can be achieved. However, our original ray-driven implementation suffers from addressing voxels, projection data and SRM elements in disfavoured memory access patterns. As a consequence, a rather limited numerical throughput is observed due to the massive waste of memory bandwidth and inefficient usage of cache respectively. In this work, an advantageous symmetry-driven evaluation of the forward-backward projectors is proposed to overcome these inefficiencies. The polar symmetries applied in PRESTO suggest a novel organisation of image data and LOR projection data in memory to enable an efficient single instruction multiple data vectorisation, i.e. simultaneous use of any SRM element for symmetric LORs. In addition, the calculation time is further reduced by using simultaneous multi-threading (SMT). A global speedup factor of 11 without SMT and above 100 with SMT has been achieved for the improved CPU-based implementation while obtaining equivalent numerical results.

  1. A Framework for Final Drive Simultaneous Failure Diagnosis Based on Fuzzy Entropy and Sparse Bayesian Extreme Learning Machine

    PubMed Central

    Ye, Qing; Pan, Hao; Liu, Changhua

    2015-01-01

    This research proposes a novel framework for final drive simultaneous failure diagnosis comprising feature extraction, training of paired diagnostic models, generation of a decision threshold, and recognition of simultaneous failure modes. In the feature extraction module, wavelet packet transform and fuzzy entropy are adopted to reduce noise interference and extract representative features of each failure mode. Single-failure samples are used to construct probability classifiers based on paired sparse Bayesian extreme learning machines, which are trained only on single failure modes and inherit the high generalization and sparsity of the sparse Bayesian learning approach. To generate the optimal decision threshold that converts the probability outputs of the classifiers into final simultaneous failure modes, this research uses samples containing both single and simultaneous failure modes together with a grid search method, which is superior to traditional techniques for global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probabilistic neural networks, experimental results based on the F1-measure verify that the diagnostic accuracy and efficiency of the proposed framework, which are crucial for simultaneous failure diagnosis, are superior to those of the existing approaches. PMID:25722717

  2. Cocaine-dependent adults and recreational cocaine users are more likely than controls to choose immediate unsafe sex over delayed safer sex.

    PubMed

    Koffarnus, Mikhail N; Johnson, Matthew W; Thompson-Lake, Daisy G Y; Wesley, Michael J; Lohrenz, Terry; Montague, P Read; Bickel, Warren K

    2016-08-01

    Cocaine users have a higher incidence of risky sexual behavior and HIV infection than nonusers. Our aim was to measure whether safer sex discount rates-a measure of the likelihood of having immediate unprotected sex versus waiting to have safer sex-differed between controls and cocaine users of varying severity. Of the 162 individuals included in the primary data analyses, 69 met the Diagnostic and Statistical Manual of Mental Disorders (4th ed., text rev.; DSM-IV-TR) criteria for cocaine dependence, 29 were recreational cocaine users who did not meet the dependence criteria, and 64 were controls. Participants completed the Sexual Discounting Task, which measures a person's likelihood of using a condom when one is immediately available and how that likelihood decreases as a function of delay to condom availability with regard to 4 images chosen by the participants of hypothetical sexual partners differing in perceived desirability and likelihood of having a sexually transmitted infection. When a condom was immediately available, the stated likelihood of condom use sometimes differed between cocaine users and controls, which depended on the image condition. Even after controlling for rates of condom use when one is immediately available, the cocaine-dependent and recreational users groups were more sensitive to delay to condom availability than controls. Safer sex discount rates were also related to intelligence scores. The Sexual Discounting Task identifies delay as a key variable that impacts the likelihood of using a condom among these groups and suggests that HIV prevention efforts may be differentially effective based on an individual's safer sex discount rate. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  3. Defining thresholds of specific IgE levels to grass pollen and birch pollen allergens improves clinical interpretation.

    PubMed

    Van Hoeyveld, Erna; Nickmans, Silvie; Ceuppens, Jan L; Bossuyt, Xavier

    2015-10-23

    Cut-off values and predictive values are used for the clinical interpretation of specific IgE antibody results. However, cut-off levels are not well defined, and predictive values are dependent on the prevalence of disease. The objective of this study was to document clinically relevant diagnostic accuracy of specific IgE for inhalant allergens (grass pollen and birch pollen) based on test result interval-specific likelihood ratios. Likelihood ratios are independent of the prevalence and allow diagnostic accuracy information to be provided for test result intervals. In a prospective study we included consecutive adult patients presenting at an allergy clinic with complaints of rhinitis or rhinoconjunctivitis. The standard for diagnosis was a suggestive clinical history of grass or birch pollen allergy and a positive skin test. Specific IgE was determined with the ImmunoCAP Fluorescence Enzyme Immuno-Assay. We established specific IgE test result interval related likelihood ratios for clinical allergy to inhalant allergens (grass pollen, rPhl p 1,5, birch pollen, rBet v 1). The likelihood ratios for allergy increased with increasing specific IgE antibody levels. The likelihood ratio was <0.03 for specific IgE <0.1 kU/L, between 0.1 and 1.4 for specific IgE between 0.1 kU/L and 0.35 kU/L, between 1.4 and 4.2 for specific IgE between 0.35 kU/L and 3.5 kU/L, >6.3 for specific IgE >0.7 kU/L, and very high (∞) for specific IgE >3.5 kU/L. Test result interval specific likelihood ratios provide a useful tool for the interpretation of specific IgE test results for inhalant allergens. Copyright © 2015 Elsevier B.V. All rights reserved.
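
    Interval-specific likelihood ratios of the kind reported above are simply the proportion of allergic patients with a result in a given interval divided by the corresponding proportion of non-allergic patients, and, unlike predictive values, they carry over to settings with a different prevalence (post-test odds = pre-test odds × LR). The counts below are invented for illustration, not the study's data.

```python
import numpy as np

intervals = ["<0.1", "0.1-0.35", "0.35-3.5", ">3.5"]      # specific IgE intervals, kU/L
allergic = np.array([2, 10, 60, 120])                     # hypothetical counts, allergy confirmed
non_allergic = np.array([150, 40, 25, 1])                 # hypothetical counts, allergy excluded

lr = (allergic / allergic.sum()) / (non_allergic / non_allergic.sum())
for iv, r in zip(intervals, lr):
    print(f"specific IgE {iv} kU/L: LR = {r:.2f}")
```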

  4. A Penalized Likelihood Framework For High-Dimensional Phylogenetic Comparative Methods And An Application To New-World Monkeys Brain Evolution.

    PubMed

    Clavel, Julien; Aristide, Leandro; Morlon, Hélène

    2018-06-19

    Working with high-dimensional phylogenetic comparative datasets is challenging because likelihood-based multivariate methods suffer from low statistical performances as the number of traits p approaches the number of species n and because some computational complications occur when p exceeds n. Alternative phylogenetic comparative methods have recently been proposed to deal with the large p small n scenario but their use and performances are limited. Here we develop a penalized likelihood framework to deal with high-dimensional comparative datasets. We propose various penalizations and methods for selecting the intensity of the penalties. We apply this general framework to the estimation of parameters (the evolutionary trait covariance matrix and parameters of the evolutionary model) and model comparison for the high-dimensional multivariate Brownian (BM), Early-burst (EB), Ornstein-Uhlenbeck (OU) and Pagel's lambda models. We show using simulations that our penalized likelihood approach dramatically improves the estimation of evolutionary trait covariance matrices and model parameters when p approaches n, and allows for their accurate estimation when p equals or exceeds n. In addition, we show that penalized likelihood models can be efficiently compared using the Generalized Information Criterion (GIC). We implement these methods, as well as the related estimation of ancestral states and the computation of phylogenetic PCA, in the R packages RPANDA and mvMORPH. Finally, we illustrate the utility of the newly proposed framework by evaluating evolutionary model fit, analyzing integration patterns, and reconstructing evolutionary trajectories for a high-dimensional 3-D dataset of brain shape in the New World monkeys. We find clear support for an Early-burst model suggesting an early diversification of brain morphology during the ecological radiation of the clade. Penalized likelihood offers an efficient way to deal with high-dimensional multivariate comparative data.
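
    The core idea, penalising the Gaussian log-likelihood so that the p × p trait covariance stays well conditioned when p approaches n, can be sketched in a non-phylogenetic setting. For brevity the among-species covariance is taken as the identity (i.i.d. tips), an assumption of this illustration only; the published framework handles BM, OU, EB and lambda models and is implemented in RPANDA and mvMORPH.

```python
import numpy as np

def ridge_penalised_cov(X, lam):
    """Closed-form ridge estimator R = S + lam * I, which maximises the Gaussian
    log-likelihood penalised by a term proportional to lam * trace(inv(R))
    (one standard penalisation; others exist)."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n
    return S + lam * np.eye(p)

rng = np.random.default_rng(8)
n, p = 40, 35                                   # p close to n: the sample covariance is near-singular
X = rng.multivariate_normal(np.zeros(p), 0.5 * np.eye(p) + 0.5, size=n)

S = ridge_penalised_cov(X, lam=0.0)             # unpenalised sample covariance
R = ridge_penalised_cov(X, lam=0.1)             # ridge-penalised estimate
print("condition numbers:", np.linalg.cond(S), "->", np.linalg.cond(R))
```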

  5. Educational differences in likelihood of attributing breast symptoms to cancer: a vignette-based study.

    PubMed

    Marcu, Afrodita; Lyratzopoulos, Georgios; Black, Georgia; Vedsted, Peter; Whitaker, Katriina L

    2016-10-01

    Stage at diagnosis of breast cancer varies by socio-economic status (SES), with lower SES associated with poorer survival. We investigated associations between SES (indexed by education), and the likelihood of attributing breast symptoms to breast cancer. We conducted an online survey with 961 women (47-92 years) with variable educational levels. Two vignettes depicted familiar and unfamiliar breast changes (axillary lump and nipple rash). Without making breast cancer explicit, women were asked 'What do you think this […..] could be?' After the attribution question, women were asked to indicate their level of agreement with a cancer avoidance statement ('I would not want to know if I have breast cancer'). Women were more likely to mention cancer as a possible cause of an axillary lump (64%) compared with nipple rash (30%). In multivariable analysis, low and mid education were independently associated with being less likely to attribute a nipple rash to cancer (OR 0.51, 0.36-0.73 and OR 0.55, 0.40-0.77, respectively). For axillary lump, low education was associated with lower likelihood of mentioning cancer as a possible cause (OR 0.58, 0.41-0.83). Although cancer avoidance was also associated with lower education, the association between education and lower likelihood of making a cancer attribution was independent. Lower education was associated with lower likelihood of making cancer attributions for both symptoms, also after adjustment for cancer avoidance. Lower likelihood of considering cancer may delay symptomatic presentation and contribute to educational differences in stage at diagnosis. Copyright © 2016 John Wiley & Sons, Ltd.

  6. Filtered maximum likelihood expectation maximization based global reconstruction for bioluminescence tomography.

    PubMed

    Yang, Defu; Wang, Lin; Chen, Dongmei; Yan, Chenggang; He, Xiaowei; Liang, Jimin; Chen, Xueli

    2018-05-17

    The reconstruction of bioluminescence tomography (BLT) is severely ill-posed due to insufficient measurements and the diffuse nature of light propagation. A predefined permissible source region (PSR) combined with regularization terms is one common strategy to reduce such ill-posedness. However, the PSR is usually hard to determine and can be easily affected by subjective judgement. Hence, we theoretically developed a filtered maximum likelihood expectation maximization (fMLEM) method for BLT. Our method can avoid predefining the PSR and provide a robust and accurate result for global reconstruction. In the method, the simplified spherical harmonics approximation (SPN) was applied to characterize diffuse light propagation in the medium, and the statistical estimation-based MLEM algorithm combined with a filter function was used to solve the inverse problem. We systematically demonstrated the performance of our method through regular geometry- and digital mouse-based simulations and a liver cancer-based in vivo experiment. Graphical abstract The filtered MLEM-based global reconstruction method for BLT.

  7. Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis; Gold, Dara

    2013-01-01

    We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
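
    A generic Wald SPRT sketch (not the specific collision-probability form of the likelihood ratio derived in the report): the log-likelihood ratio of sequential Gaussian observations under H1 versus H0 is accumulated and a decision is declared when it crosses the Wald thresholds A = (1 − β)/α and B = β/(1 − α). All numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

alpha, beta = 0.01, 0.01                         # desired false-alarm and missed-detection rates
log_A, log_B = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))

mu0, mu1, sigma = 0.0, 1.0, 2.0                  # H0 and H1 means, known noise level
rng = np.random.default_rng(9)

llr, decision = 0.0, None
for n, z in enumerate(rng.normal(mu1, sigma, size=10_000), start=1):  # data generated under H1
    llr += norm.logpdf(z, mu1, sigma) - norm.logpdf(z, mu0, sigma)
    if llr >= log_A:
        decision = ("accept H1", n)              # stop: evidence favours H1
        break
    if llr <= log_B:
        decision = ("accept H0", n)              # stop: evidence favours H0
        break
print(decision)
```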

  8. Comparative Outcome Analysis of Penicillin-Based Versus Fluoroquinolone-Based Antibiotic Therapy for Community-Acquired Pneumonia

    PubMed Central

    Wang, Chi-Chuan; Lin, Chia-Hui; Lin, Kuan-Yin; Chuang, Yu-Chung; Sheng, Wang-Huei

    2016-01-01

    Abstract Community-acquired pneumonia (CAP) is a common and potentially life-threatening condition, but limited information exists on the effectiveness of fluoroquinolones compared to β-lactams in outpatient settings. We aimed to compare the effectiveness and outcomes of penicillins versus respiratory fluoroquinolones for CAP at outpatient clinics. This was a claim-based retrospective cohort study. Patients aged 20 years or older with at least 1 new pneumonia treatment episode were included, and the index penicillin or respiratory fluoroquinolone therapies for a pneumonia episode were at least 5 days in duration. The 2 groups were matched by propensity scores. Cox proportional hazard models were used to compare the rates of hospitalizations/emergency service visits and 30-day mortality. A logistic model was used to compare the likelihood of treatment failure between the 2 groups. After propensity score matching, 2622 matched pairs were included in the final model. The likelihood of treatment failure of fluoroquinolone-based therapy was lower than that of penicillin-based therapy (adjusted odds ratio [AOR], 0.88; 95% confidence interval [95%CI], 0.77–0.99), but no differences were found in hospitalization/emergency service (ES) visits (adjusted hazard ratio [HR], 1.27; 95% CI, 0.92–1.74) and 30-day mortality (adjusted HR, 0.69; 95% CI, 0.30–1.62) between the 2 groups. The likelihood of treatment failure of fluoroquinolone-based therapy was lower than that of penicillin-based therapy for CAP on an outpatient clinic basis. However, this effect may be marginal. Further investigation into the comparative effectiveness of these 2 treatment options is warranted. PMID:26871827

  9. L.U.St: a tool for approximated maximum likelihood supertree reconstruction.

    PubMed

    Akanni, Wasiu A; Creevey, Christopher J; Wilkinson, Mark; Pisani, Davide

    2014-06-12

    Supertrees combine disparate, partially overlapping trees to generate a synthesis that provides a high level perspective that cannot be attained from the inspection of individual phylogenies. Supertrees can be seen as meta-analytical tools that can be used to make inferences based on results of previous scientific studies. Their meta-analytical application has increased in popularity since it was realised that the power of statistical tests for the study of evolutionary trends critically depends on the use of taxon-dense phylogenies. Further to that, supertrees have found applications in phylogenomics where they are used to combine gene trees and recover species phylogenies based on genome-scale data sets. Here, we present the L.U.St package, a Python tool for approximate maximum likelihood supertree inference, and illustrate its application using a genomic data set for the placental mammals. L.U.St allows the calculation of the approximate likelihood of a supertree, given a set of input trees, performs heuristic searches to look for the supertree of highest likelihood, and performs statistical tests of two or more supertrees. To this end, L.U.St implements a winning sites test allowing ranking of a collection of a-priori selected hypotheses, given as a collection of input supertree topologies. It also outputs a file of input-tree-wise likelihood scores that can be used as input to CONSEL for calculation of standard tests of two trees (e.g. Kishino-Hasegawa, Shimodaira-Hasegawa and Approximately Unbiased tests). This is the first fully parametric implementation of a supertree method; it has clearly understood properties and provides several advantages over currently available supertree approaches. It is easy to implement and works on any platform that has Python installed. Bitbucket page - https://afro-juju@bitbucket.org/afro-juju/l.u.st.git. Davide.Pisani@bristol.ac.uk.

  10. On Nonequivalence of Several Procedures of Structural Equation Modeling

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Chan, Wai

    2005-01-01

    The normal theory based maximum likelihood procedure is widely used in structural equation modeling. Three alternatives are: the normal theory based generalized least squares, the normal theory based iteratively reweighted least squares, and the asymptotically distribution-free procedure. When data are normally distributed and the model structure…

  11. Type Ia Supernova Intrinsic Magnitude Dispersion and the Fitting of Cosmological Parameters

    NASA Astrophysics Data System (ADS)

    Kim, A. G.

    2011-02-01

    I present an analysis for fitting cosmological parameters from a Hubble diagram of a standard candle with unknown intrinsic magnitude dispersion. The dispersion is determined from the data, simultaneously with the cosmological parameters. This contrasts with the strategies used to date. The advantages of the presented analysis are that it is done in a single fit (it is not iterative), it provides a statistically founded and unbiased estimate of the intrinsic dispersion, and its cosmological-parameter uncertainties account for the intrinsic-dispersion uncertainty. Applied to Type Ia supernovae, my strategy provides a statistical measure to test for subtypes and assess the significance of any magnitude corrections applied to the calibrated candle. Parameter bias and differences between likelihood distributions produced by the presented and currently used fitters are negligibly small for existing and projected supernova data sets.
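
    A minimal form of this kind of single-step fit (generic notation, not taken from the paper) treats the intrinsic dispersion as one more free parameter of a Gaussian likelihood over the Hubble-diagram residuals:

        -2 \ln L(\theta, \sigma_{\mathrm{int}}) = \sum_i \left[ \frac{\left( \mu_i^{\mathrm{obs}} - \mu(z_i; \theta) \right)^2}{\sigma_i^2 + \sigma_{\mathrm{int}}^2} + \ln\!\left( 2\pi \left( \sigma_i^2 + \sigma_{\mathrm{int}}^2 \right) \right) \right]

    The normalization term is what allows \sigma_{\mathrm{int}} to be estimated in the same fit as the cosmological parameters \theta, rather than being tuned iteratively until the reduced chi-square equals one, and it is also what propagates the intrinsic-dispersion uncertainty into the parameter errors.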

  12. Simultaneous occurrence of multiple aetiologies of polycythaemia: renal cell carcinoma, sleep apnoea syndrome, and relative polycythaemia in a smoker with masked polycythaemia rubra vera

    PubMed Central

    Shih, L.; Wang, M.; Fu, J.

    2000-01-01

    A 58 year old male heavy smoker presented with intracranial haemorrhage and erythrocytosis. Four aetiologies of polycythaemia—polycythaemia rubra vera (PRV), renal cell carcinoma, sleep apnoea syndrome, and relative polycythaemia—were found to be associated with the underlying causes of erythrocytosis. He did not fulfill the diagnostic criteria for PRV at initial presentation, but an erythropoietin independent erythroid progenitor assay identified the masked PRV, and the low post-phlebotomy erythropoietin concentration also suggested the likelihood of PRV evolution. This case demonstrates that a search for all the possible causes of erythrocytosis is warranted in patients who already have one aetiology of polycythaemia. Key Words: erythrocytosis • polycythaemia rubra vera • renal cell carcinoma • sleep apnoea syndrome • relative polycythaemia • endogenous erythroid colony PMID:10961184

  13. NAVIS-An UGV Indoor Positioning System Using Laser Scan Matching for Large-Area Real-Time Applications

    PubMed Central

    Tang, Jian; Chen, Yuwei; Jaakkola, Anttoni; Liu, Jinbing; Hyyppä, Juha; Hyyppä, Hannu

    2014-01-01

    Laser scan matching with grid-based maps is a promising tool for real-time indoor positioning of mobile Unmanned Ground Vehicles (UGVs). However, there are critical implementation problems, such as the need to estimate the position by sensing the unknown indoor environment with sufficient accuracy and low enough latency for stable vehicle control, so further development work is necessary. Unfortunately, most of the existing methods employ heuristics for quick positioning in which numerous accumulated errors easily lead to loss of positioning accuracy. This severely restricts their application in large areas and over lengthy periods of time. This paper introduces an efficient real-time mobile UGV indoor positioning system for large-area applications using laser scan matching with an improved probabilistically-motivated Maximum Likelihood Estimation (IMLE) algorithm, which is based on a multi-resolution patch-divided grid likelihood map. Compared with traditional methods, the improvements embodied in IMLE include: (a) Iterative Closest Point (ICP) preprocessing, which adaptively decreases the search scope; (b) an exhaustive (brute-force) search matching method on multi-resolution map layers, based on the likelihood value between the current laser scan and the grid map within the refined search scope, adopted to obtain the globally optimal position at each scan matching; and (c) a patch-divided likelihood map supporting a large indoor area. A UGV platform called NAVIS was designed, manufactured, and tested based on a low-cost robot integrating a LiDAR and an odometer sensor to verify the IMLE algorithm. A series of experiments based on simulated data and field tests with NAVIS proved that the proposed IMLE algorithm is a better way to perform local scan matching, offering a quick and stable positioning solution with high accuracy that can serve as part of a large-area localization/mapping application. The NAVIS platform reaches an update rate of 12 Hz in a feature-rich environment and 2 Hz even in a feature-poor environment, so it can be utilized in real-time applications. PMID:24999715

  14. NAVIS-An UGV indoor positioning system using laser scan matching for large-area real-time applications.

    PubMed

    Tang, Jian; Chen, Yuwei; Jaakkola, Anttoni; Liu, Jinbing; Hyyppä, Juha; Hyyppä, Hannu

    2014-07-04

    Laser scan matching with grid-based maps is a promising tool for real-time indoor positioning of mobile Unmanned Ground Vehicles (UGVs). However, there are critical implementation problems, such as the need to estimate the position by sensing the unknown indoor environment with sufficient accuracy and low enough latency for stable vehicle control, so further development work is necessary. Unfortunately, most of the existing methods employ heuristics for quick positioning in which numerous accumulated errors easily lead to loss of positioning accuracy. This severely restricts their application in large areas and over lengthy periods of time. This paper introduces an efficient real-time mobile UGV indoor positioning system for large-area applications using laser scan matching with an improved probabilistically-motivated Maximum Likelihood Estimation (IMLE) algorithm, which is based on a multi-resolution patch-divided grid likelihood map. Compared with traditional methods, the improvements embodied in IMLE include: (a) Iterative Closest Point (ICP) preprocessing, which adaptively decreases the search scope; (b) an exhaustive (brute-force) search matching method on multi-resolution map layers, based on the likelihood value between the current laser scan and the grid map within the refined search scope, adopted to obtain the globally optimal position at each scan matching; and (c) a patch-divided likelihood map supporting a large indoor area. A UGV platform called NAVIS was designed, manufactured, and tested based on a low-cost robot integrating a LiDAR and an odometer sensor to verify the IMLE algorithm. A series of experiments based on simulated data and field tests with NAVIS proved that the proposed IMLE algorithm is a better way to perform local scan matching, offering a quick and stable positioning solution with high accuracy that can serve as part of a large-area localization/mapping application. The NAVIS platform reaches an update rate of 12 Hz in a feature-rich environment and 2 Hz even in a feature-poor environment, so it can be utilized in real-time applications.
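
    The multi-resolution brute-force search over a grid likelihood map can be sketched roughly as follows (an illustrative sketch, not the NAVIS implementation; all names and window sizes are invented):

        import numpy as np

        def scan_log_likelihood(pose, scan_xy, grid, resolution, origin):
            """Sum the map log-likelihood of a 2D scan transformed by pose = (x, y, theta)."""
            x, y, theta = pose
            c, s = np.cos(theta), np.sin(theta)
            pts = scan_xy @ np.array([[c, -s], [s, c]]).T + np.array([x, y])
            idx = np.floor((pts - origin) / resolution).astype(int)
            h, w = grid.shape
            ok = (idx[:, 0] >= 0) & (idx[:, 0] < w) & (idx[:, 1] >= 0) & (idx[:, 1] < h)
            return grid[idx[ok, 1], idx[ok, 0]].sum()

        def coarse_to_fine_match(scan_xy, layers, origin, init_pose, window=(1.0, 1.0, 0.3)):
            """Exhaustive search on coarse map layers first, halving the search window at each finer layer."""
            best = np.array(init_pose, dtype=float)
            wx, wy, wt = window
            for grid, resolution in layers:                  # layers ordered coarse -> fine
                xs = best[0] + np.linspace(-wx, wx, 11)
                ys = best[1] + np.linspace(-wy, wy, 11)
                ts = best[2] + np.linspace(-wt, wt, 11)
                scored = [(scan_log_likelihood((x, y, t), scan_xy, grid, resolution, origin), (x, y, t))
                          for x in xs for y in ys for t in ts]
                best = np.array(max(scored)[1])
                wx, wy, wt = wx / 2, wy / 2, wt / 2
            return best

    An ICP-style pre-alignment, as in point (a) of the abstract, would simply tighten the initial window before this search is run.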

  15. Maximum Likelihood Estimations and EM Algorithms with Length-biased Data

    PubMed Central

    Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu

    2012-01-01

    SUMMARY Length-biased sampling has been well recognized in economics, industrial reliability, etiology applications, epidemiological, genetic and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimations and inference methods for traditional survival data are not directly applicable for length-biased right-censored data. We propose new expectation-maximization algorithms for estimations based on full likelihoods involving infinite dimensional parameters under three settings for length-biased data: estimating nonparametric distribution function, estimating nonparametric hazard function under an increasing failure rate constraint, and jointly estimating baseline hazards function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semi-parametric maximum likelihood estimators under the Cox model using modern empirical processes theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840
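
    In generic notation (not the authors'), the structure these full-likelihood EM algorithms exploit is that a length-biased observation T has density

        g(t) = \frac{t \, f(t)}{\mu}, \qquad \mu = \int_0^{\infty} u \, f(u) \, du,

    so for an uncensored sample and a discrete F with mass p_k at support point t_k the log-likelihood is \sum_i \ln t_i + \sum_i \ln p_{k(i)} - n \ln \sum_k t_k p_k; right censoring is then handled in the E-step by imputing the unobserved residual lifetimes.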

  16. Maximum Likelihood Item Easiness Models for Test Theory Without an Answer Key

    PubMed Central

    Batchelder, William H.

    2014-01-01

    Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce two extensions to the basic model in order to account for item rating easiness/difficulty. The first extension is a multiplicative model and the second is an additive model. We show how the multiplicative model is related to the Rasch model. We describe several maximum-likelihood estimation procedures for the models and discuss issues of model fit and identifiability. We describe how the CCT models could be used to give alternative consensus-based measures of reliability. We demonstrate the utility of both the basic and extended models on a set of essay rating data and give ideas for future research. PMID:29795812

  17. Evaluation of risk from acts of terrorism :the adversary/defender model using belief and fuzzy sets.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darby, John L.

    Risk from an act of terrorism is a combination of the likelihood of an attack, the likelihood of success of the attack, and the consequences of the attack. The considerable epistemic uncertainty in each of these three factors can be addressed using the belief/plausibility measure of uncertainty from the Dempster/Shafer theory of evidence. The adversary determines the likelihood of the attack. The success of the attack and the consequences of the attack are determined by the security system and mitigation measures put in place by the defender. This report documents a process for evaluating risk of terrorist acts using an adversary/defender model with belief/plausibility as the measure of uncertainty. Also, the adversary model is a linguistic model that applies belief/plausibility to fuzzy sets used in an approximate reasoning rule base.

  18. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.

    PubMed

    Theobald, Douglas L; Wuttke, Deborah S

    2006-09-01

    THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.

  19. Cosmological parameter estimation using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models, which demand a greater number of cosmological parameters than the standard model of cosmology uses, and make the problem of parameter estimation challenging. It is a common practice to employ Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we have demonstrated the application of another method inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
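
    A bare-bones PSO maximizer of a log-likelihood looks roughly like the following (a generic sketch using standard PSO update rules, not the code used in the paper; the inertia and acceleration constants are conventional default values):

        import numpy as np

        def pso_maximize(log_like, bounds, n_particles=32, n_iter=200, w=0.72, c1=1.49, c2=1.49, seed=0):
            """Particle Swarm Optimization of a log-likelihood over a box-bounded parameter space."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            x = rng.uniform(lo, hi, size=(n_particles, lo.size))       # particle positions
            v = np.zeros_like(x)                                       # particle velocities
            pbest_x, pbest_f = x.copy(), np.array([log_like(p) for p in x])
            gbest_x = pbest_x[np.argmax(pbest_f)].copy()
            for _ in range(n_iter):
                r1, r2 = rng.random((2, *x.shape))
                v = w * v + c1 * r1 * (pbest_x - x) + c2 * r2 * (gbest_x - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([log_like(p) for p in x])
                better = f > pbest_f
                pbest_x[better], pbest_f[better] = x[better], f[better]
                gbest_x = pbest_x[np.argmax(pbest_f)].copy()
            return gbest_x, pbest_f.max()

        # toy usage with a smooth two-parameter log-likelihood surface
        best, value = pso_maximize(lambda p: -((p[0] - 0.3) ** 2 + (p[1] - 70.0) ** 2 / 100.0),
                                   bounds=[(0.0, 1.0), (50.0, 90.0)])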

  20. Simulation-Based Evaluation of Hybridization Network Reconstruction Methods in the Presence of Incomplete Lineage Sorting

    PubMed Central

    Kamneva, Olga K; Rosenberg, Noah A

    2017-01-01

    Hybridization events generate reticulate species relationships, giving rise to species networks rather than species trees. We report a comparative study of consensus, maximum parsimony, and maximum likelihood methods of species network reconstruction using gene trees simulated assuming a known species history. We evaluate the role of the divergence time between species involved in a hybridization event, the relative contributions of the hybridizing species, and the error in gene tree estimation. When gene tree discordance is mostly due to hybridization and not due to incomplete lineage sorting (ILS), most of the methods can detect even highly skewed hybridization events between highly divergent species. For recent divergences between hybridizing species, when the influence of ILS is sufficiently high, likelihood methods outperform parsimony and consensus methods, which erroneously identify extra hybridizations. The more sophisticated likelihood methods, however, are affected by gene tree errors to a greater extent than are consensus and parsimony. PMID:28469378

  1. Evaluating the Risk and Attractiveness of Romantic Partners When Confronted with Contradictory Cues

    PubMed Central

    Hennessy, Michael; Fishbein, Martin; Curtis, Brenda; Barrett, Daniel W.

    2010-01-01

    Research shows that people engage in “risky” sex with “safe” partners and in “safer” sex with “riskier” partners. How is the determination of “risky” or “safe” status made? Factorial survey methodology was used to randomly construct descriptions of romantic partners based on attractive and/or risky characteristics. Respondents evaluated 20 descriptions for attractiveness, health risk, likelihood of going on a date, likelihood of unprotected sex, and likelihood of STD/HIV infection. Respondents were most attracted to and perceived the least risk from attractive descriptions and were least attracted to and perceived the most risk from the risky descriptions. The differences between the “conflicting information” descriptions are attributable to a primacy effect: descriptions that began with attractiveness information but ended with risk information were evaluated more positively than those that began with risk and ended with attractiveness information. PMID:17028997

  2. Free energy reconstruction from steered dynamics without post-processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athenes, Manuel, E-mail: Manuel.Athenes@cea.f; Condensed Matter and Materials Division, Physics and Life Sciences Directorate, LLNL, Livermore, CA 94551; Marinica, Mihai-Cosmin

    2010-09-20

    Various methods achieving importance sampling in ensembles of nonequilibrium trajectories enable one to estimate free energy differences and, by maximum-likelihood post-processing, to reconstruct free energy landscapes. Here, based on Bayes theorem, we propose a more direct method in which a posterior likelihood function is used both to construct the steered dynamics and to infer the contribution to equilibrium of all the sampled states. The method is implemented with two steering schedules. First, using non-autonomous steering, we calculate the migration barrier of the vacancy in Fe-α. Second, using an autonomous scheduling related to metadynamics and equivalent to temperature-accelerated molecular dynamics, we accurately reconstruct the two-dimensional free energy landscape of the 38-atom Lennard-Jones cluster as a function of an orientational bond-order parameter and energy, down to the solid-solid structural transition temperature of the cluster and without maximum-likelihood post-processing.

  3. Accounting for informatively missing data in logistic regression by means of reassessment sampling.

    PubMed

    Lin, Ji; Lyles, Robert H

    2015-05-20

    We explore the 'reassessment' design in a logistic regression setting, where a second wave of sampling is applied to recover a portion of the missing data on a binary exposure and/or outcome variable. We construct a joint likelihood function based on the original model of interest and a model for the missing data mechanism, with emphasis on non-ignorable missingness. The estimation is carried out by numerical maximization of the joint likelihood function with close approximation of the accompanying Hessian matrix, using sharable programs that take advantage of general optimization routines in standard software. We show how likelihood ratio tests can be used for model selection and how they facilitate direct hypothesis testing for whether missingness is at random. Examples and simulations are presented to demonstrate the performance of the proposed method. Copyright © 2015 John Wiley & Sons, Ltd.
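
    In generic selection-model notation (not necessarily that of the paper), the joint likelihood multiplies an outcome model by a missingness model; with Y the binary outcome, X the covariates and R the indicator that Y is observed, it takes the form

        L(\beta, \alpha) = \prod_{i:\, R_i = 1} f(y_i \mid x_i; \beta) \, \Pr(R_i = 1 \mid y_i, x_i; \alpha) \; \times \prod_{i:\, R_i = 0} \sum_{y \in \{0,1\}} f(y \mid x_i; \beta) \, \Pr(R_i = 0 \mid y, x_i; \alpha).

    Non-ignorable missingness corresponds to \alpha terms that let R depend on Y itself, and the likelihood ratio test for missingness at random mentioned above compares the full model against one with those terms constrained to zero.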

  4. Optimal designs based on the maximum quasi-likelihood estimator

    PubMed Central

    Shen, Gang; Hyun, Seung Won; Wong, Weng Kee

    2016-01-01

    We use optimal design theory and construct locally optimal designs based on the maximum quasi-likelihood estimator (MqLE), which is derived under less stringent conditions than those required for the MLE method. We show that the proposed locally optimal designs are asymptotically as efficient as those based on the MLE when the error distribution is from an exponential family, and they perform just as well or better than optimal designs based on any other asymptotically linear unbiased estimators such as the least square estimator (LSE). In addition, we show current algorithms for finding optimal designs can be directly used to find optimal designs based on the MqLE. As an illustrative application, we construct a variety of locally optimal designs based on the MqLE for the 4-parameter logistic (4PL) model and study their robustness properties to misspecifications in the model using asymptotic relative efficiency. The results suggest that optimal designs based on the MqLE can be easily generated and they are quite robust to mis-specification in the probability distribution of the responses. PMID:28163359

  5. Integrating Entropy-Based Naïve Bayes and GIS for Spatial Evaluation of Flood Hazard.

    PubMed

    Liu, Rui; Chen, Yun; Wu, Jianping; Gao, Lei; Barrett, Damian; Xu, Tingbao; Li, Xiaojuan; Li, Linyi; Huang, Chang; Yu, Jia

    2017-04-01

    Regional flood risk caused by intensive rainfall under extreme climate conditions has increasingly attracted global attention. Mapping and evaluation of flood hazard are vital parts in flood risk assessment. This study develops an integrated framework for estimating spatial likelihood of flood hazard by coupling weighted naïve Bayes (WNB), geographic information system, and remote sensing. The north part of Fitzroy River Basin in Queensland, Australia, was selected as a case study site. The environmental indices, including extreme rainfall, evapotranspiration, net-water index, soil water retention, elevation, slope, drainage proximity, and density, were generated from spatial data representing climate, soil, vegetation, hydrology, and topography. These indices were weighted using the statistics-based entropy method. The weighted indices were input into the WNB-based model to delineate a regional flood risk map that indicates the likelihood of flood occurrence. The resultant map was validated by the maximum inundation extent extracted from moderate resolution imaging spectroradiometer (MODIS) imagery. The evaluation results, including mapping and evaluation of the distribution of flood hazard, are helpful in guiding flood inundation disaster responses for the region. The novel approach presented consists of weighted grid data, image-based sampling and validation, cell-by-cell probability inferring and spatial mapping. It is superior to an existing spatial naive Bayes (NB) method for regional flood hazard assessment. It can also be extended to other likelihood-related environmental hazard studies. © 2016 Society for Risk Analysis.
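
    The entropy weighting and the weighted naïve Bayes score can be sketched as follows (an illustrative sketch, not the authors' code; the score simply raises each conditional likelihood to its index weight):

        import numpy as np

        def entropy_weights(X):
            """Entropy weights for the columns of a nonnegative matrix X (grid cells x environmental indices)."""
            n = X.shape[0]
            p = X / X.sum(axis=0, keepdims=True)
            with np.errstate(divide="ignore", invalid="ignore"):
                plogp = np.where(p > 0, p * np.log(p), 0.0)
            e = -plogp.sum(axis=0) / np.log(n)           # normalized entropy of each index
            d = 1.0 - e                                  # degree of diversification
            return d / d.sum()

        def weighted_nb_log_score(cond_probs, log_prior, weights):
            """log P(class | cell) up to a constant; cond_probs[j] = P(x_j | class) for index j."""
            return log_prior + float(np.sum(weights * np.log(np.asarray(cond_probs))))

    Scoring each grid cell for the "flood" and "no flood" classes and normalizing the two posteriors yields the cell-by-cell likelihood map that is validated against the MODIS inundation extent.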

  6. Weapon carrying and psychopathic-like features in a population-based sample of Finnish adolescents.

    PubMed

    Saukkonen, Suvi; Laajasalo, Taina; Jokela, Markus; Kivivuori, Janne; Salmi, Venla; Aronen, Eeva T

    2016-02-01

    We investigated the prevalence of juvenile weapon carrying and psychosocial and personality-related risk factors for carrying different types of weapons in a nationally representative, population-based sample of Finnish adolescents. Specifically, we aimed to investigate psychopathic-like personality features as a risk factor for weapon carrying. The participants were 15-16-year-old adolescents from the Finnish self-report delinquency study (n = 4855). Four different groups were formed based on self-reported weapon carrying: no weapon carrying, carrying knife, gun or other weapon. The associations between psychosocial factors, psychopathic-like features and weapon carrying were examined with multinomial logistic regression analysis. 9% of the participants had carried a weapon in the past 12 months. Adolescents with a history of delinquency, victimization and antisocial friends were more likely to carry weapons in general; however, delinquency and victimization were most strongly related to gun carrying, while perceived peer delinquency (antisocial friends) was most strongly related to carrying a knife. Better academic performance was associated with a reduced likelihood of carrying a gun and knife, while feeling secure correlated with a reduced likelihood of gun carrying only. Psychopathic-like features were related to a higher likelihood of weapon carrying, even after adjusting for other risk factors. The findings of the study suggest that adolescents carrying a weapon have a large cluster of problems in their lives, which may vary based on the type of weapon carried. Furthermore, psychopathic-like features strongly relate to a higher risk of carrying a weapon.

  7. A likelihood-based approach to identifying contaminated food products using sales data: performance and challenges.

    PubMed

    Kaufman, James; Lessler, Justin; Harry, April; Edlund, Stefan; Hu, Kun; Douglas, Judith; Thoens, Christian; Appel, Bernd; Käsbohrer, Annemarie; Filter, Matthias

    2014-07-01

    Foodborne disease outbreaks of recent years demonstrate that, due to increasingly interconnected supply chains, these types of crisis situations have the potential to affect thousands of people, leading to significant healthcare costs, loss of revenue for food companies, and--in the worst cases--death. When a disease outbreak is detected, identifying the contaminated food quickly is vital to minimize suffering and limit economic losses. Here we present a likelihood-based approach that has the potential to accelerate the time needed to identify possibly contaminated food products, which is based on exploiting food product sales data and the distribution of foodborne illness case reports. Using a real world food sales data set and artificially generated outbreak scenarios, we show that this method performs very well for contamination scenarios originating from a single "guilty" food product. As it is neither always possible nor necessary to identify the single offending product, the method has been extended such that it can be used as a binary classifier. With this extension it is possible to generate a set of potentially "guilty" products that contains the real outbreak source with very high accuracy. Furthermore we explore the patterns of food distributions that lead to "hard-to-identify" foods, the possibility of identifying these food groups a priori, and the extent to which the likelihood-based method can be used to quantify uncertainty. We find that high spatial correlation of sales data between products may be a useful indicator for "hard-to-identify" products.
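
    The core ranking computation can be sketched as follows (an illustrative sketch under the simplifying assumption that cases fall into regions in proportion to where the guilty product was sold; all names are invented):

        import numpy as np

        def product_log_likelihoods(sales, case_counts):
            """Log-likelihood of the regional case counts under each candidate product.
            sales: products x regions matrix of unit sales; case_counts: observed cases per region."""
            share = sales / sales.sum(axis=1, keepdims=True)   # spatial sales distribution of each product
            return (case_counts * np.log(share + 1e-12)).sum(axis=1)

        # ranking candidate sources from most to least likely:
        # order = np.argsort(product_log_likelihoods(sales, cases))[::-1]

    Thresholding these scores rather than keeping only the top-ranked product gives the binary-classifier extension described in the abstract.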

  8. Functional reorganisation in chronic pain and neural correlates of pain sensitisation: A coordinate based meta-analysis of 266 cutaneous pain fMRI studies.

    PubMed

    Tanasescu, Radu; Cottam, William J; Condon, Laura; Tench, Christopher R; Auer, Dorothee P

    2016-09-01

    Maladaptive mechanisms of pain processing in chronic pain conditions (CP) are poorly understood. We used coordinate based meta-analysis of 266 fMRI pain studies to study functional brain reorganisation in CP and experimental models of hyperalgesia. The pattern of nociceptive brain activation was similar in CP, hyperalgesia and normalgesia in controls. However, elevated likelihood of activation was detected in the left putamen, left frontal gyrus and right insula in CP comparing stimuli at the most painful vs. other sites. Meta-analysis of contrast maps showed no difference between CP, controls, and mood conditions. In contrast, experimental hyperalgesia induced stronger activation in the bilateral insula, left cingulate and right frontal gyrus. Activation likelihood maps support a shared neural pain signature of cutaneous nociception in CP and controls. We also present a double dissociation between neural correlates of transient and persistent pain sensitisation with general increased activation intensity but unchanged pattern in experimental hyperalgesia and, by contrast, focally increased activation likelihood, but unchanged intensity, in CP when stimulated at the most painful body part. Copyright © 2016. Published by Elsevier Ltd.

  9. Inferring Phylogenetic Networks Using PhyloNet.

    PubMed

    Wen, Dingqiao; Yu, Yun; Zhu, Jiafan; Nakhleh, Luay

    2018-07-01

    PhyloNet was released in 2008 as a software package for representing and analyzing phylogenetic networks. At the time of its release, the main functionalities in PhyloNet consisted of measures for comparing network topologies and a single heuristic for reconciling gene trees with a species tree. Since then, PhyloNet has grown significantly. The software package now includes a wide array of methods for inferring phylogenetic networks from data sets of unlinked loci while accounting for both reticulation (e.g., hybridization) and incomplete lineage sorting. In particular, PhyloNet now allows for maximum parsimony, maximum likelihood, and Bayesian inference of phylogenetic networks from gene tree estimates. Furthermore, Bayesian inference directly from sequence data (sequence alignments or biallelic markers) is implemented. Maximum parsimony is based on an extension of the "minimizing deep coalescences" criterion to phylogenetic networks, whereas maximum likelihood and Bayesian inference are based on the multispecies network coalescent. All methods allow for multiple individuals per species. As computing the likelihood of a phylogenetic network is computationally hard, PhyloNet allows for evaluation and inference of networks using a pseudolikelihood measure. PhyloNet summarizes the results of the various analyses and generates phylogenetic networks in the extended Newick format that is readily viewable by existing visualization software.

  10. Mapping Quantitative Traits in Unselected Families: Algorithms and Examples

    PubMed Central

    Dupuis, Josée; Shi, Jianxin; Manning, Alisa K.; Benjamin, Emelia J.; Meigs, James B.; Cupples, L. Adrienne; Siegmund, David

    2009-01-01

    Linkage analysis has been widely used to identify from family data genetic variants influencing quantitative traits. Common approaches have both strengths and limitations. Likelihood ratio tests typically computed in variance component analysis can accommodate large families but are highly sensitive to departure from normality assumptions. Regression-based approaches are more robust but their use has primarily been restricted to nuclear families. In this paper, we develop methods for mapping quantitative traits in moderately large pedigrees. Our methods are based on the score statistic which in contrast to the likelihood ratio statistic, can use nonparametric estimators of variability to achieve robustness of the false positive rate against departures from the hypothesized phenotypic model. Because the score statistic is easier to calculate than the likelihood ratio statistic, our basic mapping methods utilize relatively simple computer code that performs statistical analysis on output from any program that computes estimates of identity-by-descent. This simplicity also permits development and evaluation of methods to deal with multivariate and ordinal phenotypes, and with gene-gene and gene-environment interaction. We demonstrate our methods on simulated data and on fasting insulin, a quantitative trait measured in the Framingham Heart Study. PMID:19278016

  11. The use of cue familiarity during retrieval failure is affected by past versus future orientation.

    PubMed

    Cleary, Anne M

    2015-01-01

    Cue familiarity that is brought on by cue resemblance to memory representations is useful for judging the likelihood of a past occurrence with an item that fails to actually be retrieved from memory. The present study examined the extent to which this type of resemblance-based cue familiarity is used in future-oriented judgments made during retrieval failure. Cue familiarity was manipulated using a previously-established method of creating differing degrees of feature overlap between the cue and studied items in memory, and the primary interest was in how these varying degrees of cue familiarity would influence future-oriented feeling-of-knowing (FOK) judgments given in instances of cued recall failure. The present results suggest that participants do use increases in resemblance-based cue familiarity to infer an increased likelihood of future recognition of an unretrieved target, but not to the extent that they use it to infer an increased likelihood of past experience with an unretrieved target. During retrieval failure, the increase in future-oriented FOK judgments with increasing cue familiarity was significantly less than the increase in past-oriented recognition judgments with increasing cue familiarity.

  12. Determinants of HIV infection among adolescent girls and young women aged 15-24 years in South Africa: a 2012 population-based national household survey.

    PubMed

    Mabaso, Musawenkosi; Sokhela, Zinhle; Mohlabane, Neo; Chibi, Buyisile; Zuma, Khangelani; Simbayi, Leickness

    2018-01-26

    South Africa is making tremendous progress in the fight against HIV; however, adolescent girls and young women aged 15-24 years (AGYW) remain at higher risk of new HIV infections. This paper investigates socio-demographic and behavioural determinants of HIV infection among AGYW in South Africa. A secondary data analysis was undertaken based on the 2012 population-based nationally representative multi-stage stratified cluster random household sample. Multivariate stepwise backward and forward regression modelling was used to determine factors independently associated with HIV prevalence. Out of 3092 interviewed and tested AGYW, 11.4% were HIV positive. Overall HIV prevalence was significantly higher among young women (17.4%) compared to adolescent girls (5.6%). In the AGYW model, increased risk of HIV infection was associated with being a young woman aged 20-24 years (OR = 2.30, p = 0.006) and condom use at last sex (OR = 1.91, p = 0.010), and decreased likelihood was associated with other race groups (OR = 0.06, p < 0.001), a sexual partner within 5 years of age (OR = 0.53, p = 0.012), tertiary level education (OR = 0.11, p = 0.002), low risk alcohol use (OR = 0.19, p = 0.022) and having one sexual partner (OR = 0.43, p = 0.028). In the adolescent girls model, decreased risk of HIV infection was associated with other race groups (OR = 0.01, p < 0.001), being married (OR = 0.07, p = 0.016), and living in a less poor household (OR = 0.08, p = 0.002). In the young women's model, increased risk of HIV infection was associated with condom use at last sex (OR = 2.09, p = 0.013), and decreased likelihood was associated with other race groups (OR = 0.17, p < 0.001), one sexual partner (OR = 0.6, p = 0.014), low risk alcohol use (OR = 0.17, p < 0.001), having a sexual partner within 5 years of age (OR = 0.29, p = 0.022), and having tertiary education (OR = 0.29, p = 0.022). These findings support the need to design combination prevention interventions which simultaneously address socio-economic drivers of the HIV epidemic, promote education, equity and access to schooling, and target age-disparate partnerships, inconsistent condom use and risky alcohol consumption.

  13. Bayesian Framework for Water Quality Model Uncertainty Estimation and Risk Management

    EPA Science Inventory

    A formal Bayesian methodology is presented for integrated model calibration and risk-based water quality management using Bayesian Monte Carlo simulation and maximum likelihood estimation (BMCML). The primary focus is on lucid integration of model calibration with risk-based wat...

  14. New estimates of the CMB angular power spectra from the WMAP 5 year low-resolution data

    NASA Astrophysics Data System (ADS)

    Gruppuso, A.; de Rosa, A.; Cabella, P.; Paci, F.; Finelli, F.; Natoli, P.; de Gasperis, G.; Mandolesi, N.

    2009-11-01

    A quadratic maximum likelihood (QML) estimator is applied to the Wilkinson Microwave Anisotropy Probe (WMAP) 5 year low-resolution maps to compute the cosmic microwave background angular power spectra (APS) at large scales for both temperature and polarization. Estimates and error bars for the six APS are provided up to l = 32 and compared, when possible, to those obtained by the WMAP team, without finding any inconsistency. The conditional likelihood slices are also computed for the C_l of all six power spectra from l = 2 to 10 through a pixel-based likelihood code. Both the codes treat the covariance for (T, Q, U) in a single matrix without employing any approximation. The inputs of both the codes (foreground-reduced maps, related covariances and masks) are provided by the WMAP team. The peaks of the likelihood slices are always consistent with the QML estimates within the error bars; however, an excellent agreement occurs when the QML estimates are used as a fiducial power spectrum instead of the best-fitting theoretical power spectrum. By the full computation of the conditional likelihood on the estimated spectra, the value of the temperature quadrupole C^TT_{l=2} is found to be less than 2σ away from the WMAP 5 year Λ cold dark matter best-fitting value. The BB spectrum is found to be well consistent with zero, and upper limits on the B modes are provided. The parity odd signals TB and EB are found to be consistent with zero.
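
    For reference, a quadratic maximum likelihood estimator of this kind has the standard form (generic notation)

        \hat{C}_{\ell} = \sum_{\ell'} \left( F^{-1} \right)_{\ell \ell'} \left[ \mathbf{x}^{T} \mathbf{E}^{\ell'} \mathbf{x} - b_{\ell'} \right], \qquad \mathbf{E}^{\ell} = \frac{1}{2} \, \mathbf{C}^{-1} \frac{\partial \mathbf{C}}{\partial C_{\ell}} \mathbf{C}^{-1},

    where x is the full (T, Q, U) pixel vector, C its covariance under a fiducial spectrum, b_\ell the noise bias and F the Fisher matrix; the estimate is unbiased for any fiducial choice, which is consistent with the tighter agreement reported when the QML output itself is fed back in as the fiducial spectrum.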

  15. Supervisor Autonomy and Considerate Leadership Style are Associated with Supervisors’ Likelihood to Accommodate Back Injured Workers

    PubMed Central

    McGuire, Connor; Kristman, Vicki L; Williams-Whitt, Kelly; Reguly, Paula; Shaw, William; Soklaridis, Sophie

    2015-01-01

    PURPOSE To determine the association between supervisors’ leadership style and autonomy and supervisors’ likelihood of supporting job accommodations for back-injured workers. METHODS A cross-sectional study of supervisors from Canadian and US employers was conducted using a web-based, self-report questionnaire that included a case vignette of a back-injured worker. Autonomy and two dimensions of leadership style (considerate and initiating structure) were included as exposures. The outcome, supervisors’ likelihood of supporting job accommodation, was measured with the Job Accommodation Scale. We conducted univariate analyses of all variables and bivariate analyses of the JAS score with each exposure and potential confounding factor. We used multivariable generalized linear models to control for confounding factors. RESULTS A total of 796 supervisors participated. Considerate leadership style (β= .012; 95% CI: .009–.016) and autonomy (β= .066; 95% CI: .025–.11) were positively associated with supervisors’ likelihood to accommodate after adjusting for appropriate confounding factors. An initiating structure leadership style was not significantly associated with supervisors’ likelihood to accommodate (β = .0018; 95% CI: −.0026–.0061) after adjusting for appropriate confounders. CONCLUSIONS Autonomy and a considerate leadership style were positively associated with supervisors’ likelihood to accommodate a back-injured worker. Providing supervisors with more autonomy over decisions of accommodation and developing their considerate leadership style may aid in increasing work accommodation for back-injured workers and preventing prolonged work disability. PMID:25595332

  16. How much to trust the senses: Likelihood learning

    PubMed Central

    Sato, Yoshiyuki; Kording, Konrad P.

    2014-01-01

    Our brain often needs to estimate unknown variables from imperfect information. Our knowledge about the statistical distributions of quantities in our environment (called priors) and currently available information from sensory inputs (called likelihood) are the basis of all Bayesian models of perception and action. While we know that priors are learned, most studies of prior-likelihood integration simply assume that subjects know about the likelihood. However, as the quality of sensory inputs change over time, we also need to learn about new likelihoods. Here, we show that human subjects readily learn the distribution of visual cues (likelihood function) in a way that can be predicted by models of statistically optimal learning. Using a likelihood that depended on color context, we found that a learned likelihood generalized to new priors. Thus, we conclude that subjects learn about likelihood. PMID:25398975

  17. Mistaking minds and machines: How speech affects dehumanization and anthropomorphism.

    PubMed

    Schroeder, Juliana; Epley, Nicholas

    2016-11-01

    Treating a human mind like a machine is an essential component of dehumanization, whereas attributing a humanlike mind to a machine is an essential component of anthropomorphism. Here we tested how a cue closely connected to a person's actual mental experience-a humanlike voice-affects the likelihood of mistaking a person for a machine, or a machine for a person. We predicted that paralinguistic cues in speech are particularly likely to convey the presence of a humanlike mind, such that removing voice from communication (leaving only text) would increase the likelihood of mistaking the text's creator for a machine. Conversely, adding voice to a computer-generated script (resulting in speech) would increase the likelihood of mistaking the text's creator for a human. Four experiments confirmed these hypotheses, demonstrating that people are more likely to infer a human (vs. computer) creator when they hear a voice expressing thoughts than when they read the same thoughts in text. Adding human visual cues to text (i.e., seeing a person perform a script in a subtitled video clip), did not increase the likelihood of inferring a human creator compared with only reading text, suggesting that defining features of personhood may be conveyed more clearly in speech (Experiments 1 and 2). Removing the naturalistic paralinguistic cues that convey humanlike capacity for thinking and feeling, such as varied pace and intonation, eliminates the humanizing effect of speech (Experiment 4). We discuss implications for dehumanizing others through text-based media, and for anthropomorphizing machines through speech-based media. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  18. Likelihood-based molecular-replacement solution for a highly pathological crystal with tetartohedral twinning and sevenfold translational noncrystallographic symmetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sliwiak, Joanna; Jaskolski, Mariusz, E-mail: mariuszj@amu.edu.pl; A. Mickiewicz University, Grunwaldzka 6, 60-780 Poznan

    With the implementation of a molecular-replacement likelihood target that accounts for translational noncrystallographic symmetry, it became possible to solve the crystal structure of a protein with seven tetrameric assemblies arrayed translationally along the c axis. The new algorithm found 56 protein molecules in reduced symmetry (P1), which was used to resolve space-group ambiguity caused by severe twinning. Translational noncrystallographic symmetry (tNCS) is a pathology of protein crystals in which multiple copies of a molecule or assembly are found in similar orientations. Structure solution is problematic because this breaks the assumptions used in current likelihood-based methods. To cope with such cases, new likelihood approaches have been developed and implemented in Phaser to account for the statistical effects of tNCS in molecular replacement. Using these new approaches, it was possible to solve the crystal structure of a protein exhibiting an extreme form of this pathology with seven tetrameric assemblies arrayed along the c axis. To resolve space-group ambiguities caused by tetartohedral twinning, the structure was initially solved by placing 56 copies of the monomer in space group P1 and using the symmetry of the solution to define the true space group, C2. The resulting structure of Hyp-1, a pathogenesis-related class 10 (PR-10) protein from the medicinal herb St John’s wort, reveals the binding modes of the fluorescent probe 8-anilino-1-naphthalene sulfonate (ANS), providing insight into the function of the protein in binding or storing hydrophobic ligands.

  19. Infertility Evaluation and Treatment among Women in the United States

    PubMed Central

    Kessler, Lawrence M.; Craig, Benjamin M.; Plosker, Shayne M.; Reed, Damon R.; Quinn, Gwendolyn P.

    2013-01-01

    Objective: To examine the characteristics of women seeking infertility evaluation and treatment. Design: Cross-sectional survey based on in-person interviews, followed by two-step hurdle analysis. Participants: 4,558 married or cohabitating women ages 25–44. Setting: U.S. household population of women based on the 2006–2010 National Survey of Family Growth. Intervention: None. Main Outcome Measure(s): Likelihood of seeking preliminary infertility evaluation; likelihood of seeking infertility treatment once evaluated; treatment type provided. Results: 623 women (13.7%) reported seeking infertility evaluation, of which 328 reported undergoing subsequent infertility treatment. Age at marriage, marital status, education, health insurance status, race/ethnicity, and religion were associated with the likelihood of seeking infertility evaluation. For example, the predicted probability that a non-White woman who married at 25 will seek evaluation was 12%. This probability increased to 34% for White women with a graduate degree who married at age 30. Among women who are evaluated, income, employment status, and ethnicity correlated strongly with the likelihood of seeking infertility treatment. Infertility drug therapy was the most frequent treatment used. Reproductive surgery and in vitro fertilization (IVF) were used the least. Conclusions: The use of infertility services is not random and understanding the socio-demographic factors correlated with use may assist new couples with family planning. Roughly 50% of the women evaluated for infertility progressed to treatment, and only a small proportion were treated with more advanced assisted reproductive technologies (ARTs) such as IVF therapy. Future research aimed at improving access to effective healthcare treatments within the boundaries of affordability is warranted. PMID:23849845

  20. Spectral identification of a 90Sr source in the presence of masking nuclides using Maximum-Likelihood deconvolution

    NASA Astrophysics Data System (ADS)

    Neuer, Marcus J.

    2013-11-01

    A technique for the spectral identification of strontium-90 is shown, utilising a Maximum-Likelihood deconvolution. Different deconvolution approaches are discussed and summarised. Based on the intensity distribution of the beta emission and Geant4 simulations, a combined response matrix is derived, tailored to the β- detection process in sodium iodide detectors. It includes scattering effects and attenuation by applying a base material decomposition extracted from Geant4 simulations with a CAD model for a realistic detector system. Inversion results of measurements show the agreement between deconvolution and reconstruction. A detailed investigation with additional masking sources like 40K, 226Ra and 131I shows that a contamination of strontium can be found in the presence of these nuisance sources. Identification algorithms for strontium are presented based on the derived technique. For the implementation of blind identification, an exemplary masking ratio is calculated.
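
    A minimal ML-EM (Richardson-Lucy type) update for this kind of spectral unfolding, assuming Poisson counting statistics and a known response matrix (an illustrative sketch, not necessarily the exact scheme used in the paper), is:

        import numpy as np

        def mlem_deconvolve(measured, response, n_iter=200, eps=1e-12):
            """Iterative ML deconvolution for measured ~ Poisson(response @ source)."""
            n_bins = response.shape[1]
            source = np.full(n_bins, measured.sum() / n_bins)   # flat starting estimate
            sensitivity = response.sum(axis=0)                  # per-source-bin detection efficiency
            for _ in range(n_iter):
                expected = response @ source + eps
                source *= (response.T @ (measured / expected)) / (sensitivity + eps)
            return source

    Each iteration keeps the estimate nonnegative and increases the Poisson likelihood, which is what allows a weak beta continuum such as 90Sr to be separated from the overlapping responses of masking nuclides.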

  1. When you see it, let it be: Urgency, mindfulness and adolescent substance use.

    PubMed

    Robinson, Joanna M; Ladd, Benjamin O; Anderson, Kristen G

    2014-06-01

    The emotion-based domains of impulsivity, positive and negative urgency, are facets that have garnered attention due to their associations with substance use, and mindfulness based strategies have shown promise in reducing substance use in adults. The aim of the current study was to examine relations among urgency, mindfulness, and substance use in adolescence. Cross-sectional data were collected from students (N=1,051) at a large, private high school in the Pacific Northwest. Both positive and negative urgency were uniquely associated with greater likelihood of lifetime and current alcohol use; only positive urgency predicted lifetime marijuana use. Mindfulness was associated with a lower likelihood of lifetime alcohol or marijuana use. Interactions between urgency and mindfulness were not supported. Our findings highlight the need to explore relations among baseline mindfulness, skills based mindfulness, and personality in adolescent alcohol and other drug use. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. When you see it, let it be: Urgency, mindfulness and adolescent substance use

    PubMed Central

    Robinson, Joanna M.; Ladd, Benjamin O.; Anderson, Kristen G.

    2015-01-01

    The emotion-based domains of impulsivity, positive and negative urgency, are facets that have garnered attention due to their associations with substance use, and mindfulness based strategies have shown promise in reducing substance use in adults. The aim of the current study was to examine relations among urgency, mindfulness, and substance use in adolescence. Cross-sectional data were collected from students (N = 1,051) at a large, private high school in the Pacific Northwest. Both positive and negative urgency were uniquely associated with greater likelihood of lifetime and current alcohol use; only positive urgency predicted lifetime marijuana use. Mindfulness was associated with a lower likelihood of lifetime alcohol or marijuana use. Interactions between urgency and mindfulness were not supported. Our findings highlight the need to explore relations among baseline mindfulness, skills based mindfulness, and personality in adolescent alcohol and other drug use. PMID:24629324

  3. Monte Carlo-based Reconstruction in Water Cherenkov Detectors using Chroma

    NASA Astrophysics Data System (ADS)

    Seibert, Stanley; Latorre, Anthony

    2012-03-01

    We demonstrate the feasibility of event reconstruction---including position, direction, energy and particle identification---in water Cherenkov detectors with a purely Monte Carlo-based method. Using a fast optical Monte Carlo package we have written, called Chroma, in combination with several variance reduction techniques, we can estimate the value of a likelihood function for an arbitrary event hypothesis. The likelihood can then be maximized over the parameter space of interest using a form of gradient descent designed for stochastic functions. Although slower than more traditional reconstruction algorithms, this completely Monte Carlo-based technique is universal and can be applied to a detector of any size or shape, which is a major advantage during the design phase of an experiment. As a specific example, we focus on reconstruction results from a simulation of the 200 kiloton water Cherenkov far detector option for LBNE.

  4. Recreating a functional ancestral archosaur visual pigment.

    PubMed

    Chang, Belinda S W; Jönsson, Karolina; Kazmi, Manija A; Donoghue, Michael J; Sakmar, Thomas P

    2002-09-01

    The ancestors of the archosaurs, a major branch of the diapsid reptiles, originated more than 240 MYA near the dawn of the Triassic Period. We used maximum likelihood phylogenetic ancestral reconstruction methods and explored different models of evolution for inferring the amino acid sequence of a putative ancestral archosaur visual pigment. Three different types of maximum likelihood models were used: nucleotide-based, amino acid-based, and codon-based models. Where possible, within each type of model, likelihood ratio tests were used to determine which model best fit the data. Ancestral reconstructions of the ancestral archosaur node using the best-fitting models of each type were found to be in agreement, except for three amino acid residues at which one reconstruction differed from the other two. To determine if these ancestral pigments would be functionally active, the corresponding genes were chemically synthesized and then expressed in a mammalian cell line in tissue culture. The expressed artificial genes were all found to bind to 11-cis-retinal to yield stable photoactive pigments with lambda(max) values of about 508 nm, which is slightly redshifted relative to that of extant vertebrate pigments. The ancestral archosaur pigments also activated the retinal G protein transducin, as measured in a fluorescence assay. Our results show that ancestral genes from ancient organisms can be reconstructed de novo and tested for function using a combination of phylogenetic and biochemical methods.

  5. Kinetics of water loss and the likelihood of intracellular freezing in mouse ova

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazur, P.; Rall, W.F.; Leibo, S.P.

    To avoid intracellular freezing and its usually lethal consequences, cells must lose their freezable water before reaching their ice-nucleation temperature. One major factor determining the rate of water loss is the temperature dependence of water permeability, L_p (hydraulic conductivity). Because of the paucity of water permeability measurements at subzero temperatures, that temperature dependence has usually been extrapolated from above-zero measurements. The extrapolation has often been based on an exponential dependence of L_p on temperature. This paper compares the kinetics of water loss based on that extrapolation with that based on an Arrhenius relation between L_p and temperature, and finds substantial differences below -20 to -25°C. Since the ice-nucleation temperature of mouse ova in the cryoprotectants DMSO and glycerol is usually below -30°C, the Arrhenius form of the water-loss equation was used to compute the extent of supercooling in ova cooled at rates between 1 and 8°C/min and the consequent likelihood of intracellular freezing. The predicted likelihood agrees well with that previously observed. The water-loss equation was also used to compute the volumes of ova as a function of cooling rate and temperature. The computed cell volumes agree qualitatively with previously observed volumes, but differed quantitatively. 25 references, 5 figures, 3 tables.
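
    Written out explicitly in generic symbols, the two temperature dependences being compared are the exponential extrapolation

        L_p(T) = L_p(T_0) \, \exp\!\left[ b \, (T - T_0) \right]

    and the Arrhenius form

        L_p(T) = L_p(T_0) \, \exp\!\left[ -\frac{E_a}{R} \left( \frac{1}{T} - \frac{1}{T_0} \right) \right],

    with T in kelvin and E_a an activation energy; the two forms agree near the reference temperature T_0 but separate increasingly on cooling, which is the source of the substantial differences below -20 to -25°C noted above.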

  6. Estimating multilevel logistic regression models when the number of clusters is low: a comparison of different statistical software procedures.

    PubMed

    Austin, Peter C

    2010-04-22

    Multilevel logistic regression models are increasingly being used to analyze clustered data in medical, public health, epidemiological, and educational research. Procedures for estimating the parameters of such models are available in many statistical software packages. There is currently little evidence on the minimum number of clusters necessary to reliably fit multilevel regression models. We conducted a Monte Carlo study to compare the performance of different statistical software procedures for estimating multilevel logistic regression models when the number of clusters was low. We examined procedures available in BUGS, HLM, R, SAS, and Stata. We found that there were qualitative differences in the performance of different software procedures for estimating multilevel logistic models when the number of clusters was low. Among the likelihood-based procedures, estimation methods based on adaptive Gauss-Hermite approximations to the likelihood (glmer in R and xtlogit in Stata) or adaptive Gaussian quadrature (Proc NLMIXED in SAS) tended to have superior performance for estimating variance components when the number of clusters was small, compared to software procedures based on penalized quasi-likelihood. However, only Bayesian estimation with BUGS allowed for accurate estimation of variance components when there were fewer than 10 clusters. For all statistical software procedures, estimation of variance components tended to be poor when there were only five subjects per cluster, regardless of the number of clusters.
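
    For a random-intercept logistic model, the cluster-level likelihood contribution that these adaptive quadrature routines approximate is, in generic notation,

        L_i(\beta, \sigma_u) = \int \prod_{j} p_{ij}(b)^{y_{ij}} \left( 1 - p_{ij}(b) \right)^{1 - y_{ij}} \phi(b; 0, \sigma_u^2) \, db \approx \frac{1}{\sqrt{\pi}} \sum_{k=1}^{K} w_k \prod_{j} p_{ij}\!\left( \sqrt{2} \, \sigma_u t_k \right)^{y_{ij}} \left( 1 - p_{ij}\!\left( \sqrt{2} \, \sigma_u t_k \right) \right)^{1 - y_{ij}},

    with p_{ij}(b) = \mathrm{logit}^{-1}(x_{ij}^{T} \beta + b) and (t_k, w_k) the Gauss-Hermite nodes and weights; the adaptive variants recentre and rescale the nodes around each cluster's posterior mode, which is the feature credited above with better variance-component estimation than penalized quasi-likelihood when clusters are few.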

  7. Occupancy Modeling Species-Environment Relationships with Non-ignorable Survey Designs.

    PubMed

    Irvine, Kathryn M; Rodhouse, Thomas J; Wright, Wilson J; Olsen, Anthony R

    2018-05-26

    Statistical models supporting inferences about species occurrence patterns in relation to environmental gradients are fundamental to ecology and conservation biology. A common implicit assumption is that the sampling design is ignorable and does not need to be formally accounted for in analyses. The analyst assumes data are representative of the desired population and statistical modeling proceeds. However, if datasets from probability and non-probability surveys are combined or unequal selection probabilities are used, the design may be non-ignorable. We outline the use of pseudo-maximum likelihood estimation for site-occupancy models to account for such non-ignorable survey designs. This estimation method accounts for the survey design by properly weighting the pseudo-likelihood equation. In our empirical example, legacy and newer randomly selected locations were surveyed for bats to bridge a historic statewide effort with an ongoing nationwide program. We provide a worked example using bat acoustic detection/non-detection data and show how analysts can diagnose whether their design is ignorable. Using simulations we assessed whether our approach is viable for modeling datasets composed of sites contributed outside of a probability design. Pseudo-maximum likelihood estimates differed from the usual maximum likelihood occupancy estimates for some bat species. Using simulations we show the maximum likelihood estimator of species-environment relationships with non-ignorable sampling designs was biased, whereas the pseudo-likelihood estimator was design-unbiased. However, in our simulation study the designs composed of a large proportion of legacy or non-probability sites resulted in estimation issues for standard errors. These issues were likely a result of highly variable weights confounded by small sample sizes (5% or 10% sampling intensity and 4 revisits). Aggregating datasets from multiple sources logically supports larger sample sizes and potentially increases spatial extents for statistical inferences. Our results suggest that ignoring the mechanism for how locations were selected for data collection (e.g., the sampling design) could result in erroneous model-based conclusions. Therefore, in order to ensure robust and defensible recommendations for evidence-based conservation decision-making, the survey design information in addition to the data themselves must be available for analysts. Details for constructing the weights used in estimation and code for implementation are provided. This article is protected by copyright. All rights reserved.
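
    A stripped-down version of the design-weighted pseudo-likelihood for a single-season occupancy model with constant detection probability (an illustrative sketch under assumed variable names, not the authors' code) is:

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit

        def neg_weighted_pll(params, X, Y, weights):
            """Negative design-weighted pseudo-log-likelihood.
            params = [occupancy coefficients..., logit detection prob]; Y is a sites x visits 0/1 array;
            weights are the inverse inclusion probabilities from the survey design."""
            beta, logit_p = params[:-1], params[-1]
            psi = expit(X @ beta)                               # occupancy probability per site
            p = expit(logit_p)                                  # constant per-visit detection probability
            det = np.prod(p ** Y * (1 - p) ** (1 - Y), axis=1)  # detection-history probability if occupied
            never = (Y.sum(axis=1) == 0).astype(float)
            lik = psi * det + (1 - psi) * never
            return -(weights * np.log(lik)).sum()

        # fit = minimize(neg_weighted_pll, x0, args=(X, Y, w))   # x0 has length X.shape[1] + 1

    Setting all weights to one recovers the usual maximum likelihood fit, which is the comparison behind the bias result reported above.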

  8. General Practitioners' Attitudes Toward a Web-Based Mental Health Service for Adolescents: Implications for Service Design and Delivery.

    PubMed

    Subotic-Kerry, Mirjana; King, Catherine; O'Moore, Kathleen; Achilles, Melinda; O'Dea, Bridianne

    2018-03-23

    Anxiety disorders and depression are prevalent among youth. General practitioners (GPs) are often the first point of professional contact for treating health problems in young people. A Web-based mental health service delivered in partnership with schools may facilitate increased access to psychological care among adolescents. However, for such a model to be implemented successfully, GPs' views need to be measured. This study aimed to examine the needs and attitudes of GPs toward a Web-based mental health service for adolescents, and to identify the factors that may affect the provision of this type of service and likelihood of integration. Findings will inform the content and overall service design. GPs were interviewed individually about the proposed Web-based service. Qualitative analysis of transcripts was performed using thematic coding. A short follow-up questionnaire was delivered to assess background characteristics, level of acceptability, and likelihood of integration of the Web-based mental health service. A total of 13 GPs participated in the interview and 11 completed a follow-up online questionnaire. Findings suggest strong support for the proposed Web-based mental health service. A wide range of factors were found to influence the likelihood of GPs integrating a Web-based service into their clinical practice. Coordinated collaboration with parents, students, school counselors, and other mental health care professionals was considered important by nearly all GPs. Confidence in Web-based care, noncompliance of adolescents and GPs, accessibility, privacy, and confidentiality were identified as potential barriers to adopting the proposed Web-based service. GPs were open to a proposed Web-based service for the monitoring and management of anxiety and depression in adolescents, provided that a collaborative approach to care is used, the feedback regarding the client is clear, and privacy and security provisions are assured. ©Mirjana Subotic-Kerry, Catherine King, Kathleen O'Moore, Melinda Achilles, Bridianne O'Dea. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 23.03.2018.

  9. Maximum likelihood estimation in calibrating a stereo camera setup.

    PubMed

    Muijtjens, A M; Roos, J M; Arts, T; Hasman, A

    1999-02-01

    Motion and deformation of the cardiac wall may be measured by following the positions of implanted radiopaque markers in three dimensions, using two x-ray cameras simultaneously. Typically, calibration of the position measurement system is obtained by registration of the images of a calibration object, containing 10-20 radiopaque markers at known positions. Unfortunately, an accidental change of the position of a camera after calibration requires complete recalibration. Alternatively, redundant information in the measured image positions of stereo pairs can be used for calibration. Thus, a separate calibration procedure can be avoided. In the current study a model is developed that describes the geometry of the camera setup by five dimensionless parameters. Maximum Likelihood (ML) estimates of these parameters were obtained in an error analysis. It is shown that the ML estimates can be found by application of a nonlinear least squares procedure. Compared to the standard unweighted least squares procedure, the ML method resulted in more accurate estimates without noticeable bias. The accuracy of the ML method was investigated in relation to the object aperture. The reconstruction problem appeared well conditioned as long as the object aperture is larger than 0.1 rad. The angle between the two viewing directions appeared to be the parameter that was most likely to cause major inaccuracies in the reconstruction of the 3-D positions of the markers. Hence, attempts to improve the robustness of the method should primarily focus on reduction of the error in this parameter.
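
    The link between maximum likelihood and nonlinear least squares mentioned above can be illustrated on a related, simpler task: under independent Gaussian image noise, the ML estimate of a marker's 3-D position minimizes the summed squared reprojection error in the two views. The projection matrices and observations below are invented, and this is not the paper's five-parameter camera model.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)

# Two illustrative 3x4 projection matrices (assumed known; not the paper's
# five-parameter stereo geometry).
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
R = np.array([[0., 0., -1.],
              [0., 1., 0.],
              [1., 0., 0.]])            # second camera viewed from the side
t = np.array([0., 0., 4.])
P2 = K @ np.hstack([R, t[:, None]])

def project(P, X):
    """Project a 3-D point X to pixel coordinates with camera matrix P."""
    u = P @ np.append(X, 1.0)
    return u[:2] / u[2]

# Noisy observed image positions of one radiopaque marker in both views.
X_true = np.array([0.2, -0.1, 4.0])
obs = [project(P1, X_true) + rng.normal(0, 0.5, 2),
       project(P2, X_true) + rng.normal(0, 0.5, 2)]

def residuals(X):
    # Under i.i.d. Gaussian image noise, minimising the summed squared
    # reprojection error gives the maximum likelihood estimate of X.
    return np.concatenate([project(P1, X) - obs[0],
                           project(P2, X) - obs[1]])

fit = least_squares(residuals, x0=np.array([0., 0., 3.]))
print("estimated marker position:", fit.x.round(3))
```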

  10. The influence of age and gender on the likelihood of endorsing cannabis abuse/dependence criteria.

    PubMed

    Delforterie, Monique J; Creemers, Hanneke E; Agrawal, Arpana; Lynskey, Michael T; Jak, Suzanne; Huizink, Anja C

    2015-03-01

    Higher prevalence rates of cannabis abuse/dependence and abuse/dependence criteria in 18-24 year old versus older cannabis users and in males versus females might reflect true differences in the prevalence of these disorders across age and gender or, alternatively, they could arise from age- and gender-related measurement bias. To understand differences in endorsement across important subgroups, we examined the influence of age and gender simultaneously on the likelihood of endorsement of the various abuse/dependence criteria. The sample consisted of 1603 adult past year cannabis users participating in the National Epidemiological Survey on Alcohol and Related Conditions (NESARC), a U.S. population study (39.6% aged 18-24; 62.1% male). Past year DSM-IV cannabis abuse/dependence criteria and withdrawal were assessed with the AUDADIS-IV. A restricted factor analysis with latent moderated structures was used to detect measurement bias. Although cannabis abuse and dependence diagnoses and various individual abuse/dependence criteria showed different prevalence rates across younger and older male and female cannabis users, none of the items showed uniform or non-uniform measurement bias with respect to age or gender. The results indicate that, although prevalence rates of cannabis abuse/dependence criteria differ across age and gender, past year abuse/dependence criteria function similarly across these groups. It can thus be concluded that the criteria are applicable to younger and older, as well as male and female, adult cannabis users. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Association between fast food purchasing and the local food environment.

    PubMed

    Thornton, Lukar E; Kavanagh, A M

    2012-12-03

    In this study, an instrument was created to measure the healthy and unhealthy characteristics of food environments and investigate associations between the whole of the food environment and fast food consumption. In consultation with other academic researchers in this field, food stores were categorised as either healthy or unhealthy and weighted (between +10 and -10) by their likely contribution to healthy/unhealthy eating practices. A healthy and unhealthy food environment score (FES) was created using these weightings. Using a cross-sectional study design, multilevel multinomial regression was used to estimate the effects of the whole food environment on the fast food purchasing habits of 2547 individuals. Respondents in areas with the highest tertile of the healthy FES had a lower likelihood of purchasing fast food both infrequently and frequently compared with respondents who never purchased; however, only infrequent purchasing remained significant when simultaneously modelled with the unhealthy FES (odds ratio (OR) 0.52; 95% confidence interval (CI) 0.32-0.83). Although a lower likelihood of frequent fast food purchasing was also associated with living in the highest tertile of the unhealthy FES, no association remained once the healthy FES was included in the models. In our binary models, respondents living in areas with a higher unhealthy FES than healthy FES were more likely to purchase fast food infrequently (OR 1.35; 95% CI 1.00-1.82); however, no association was found for frequent purchasing. Our study provides some evidence to suggest that healthier food environments may discourage fast food purchasing.
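
    A toy illustration of the scoring instrument described above, with invented store categories, weights (on the +10 to -10 scale), and counts: a healthy and an unhealthy food environment score are accumulated separately from the weighted store counts around a respondent.

```python
# Illustrative store-type weights (+10 healthy ... -10 unhealthy); the actual
# instrument's categories and values are not reproduced here.
weights = {
    "greengrocer": 8, "supermarket": 6, "fish_shop": 5,
    "fast_food_chain": -10, "takeaway": -7, "convenience_store": -4,
}

def food_environment_scores(store_counts):
    """Return (healthy_FES, unhealthy_FES) for one respondent's local area."""
    healthy = sum(n * weights[s] for s, n in store_counts.items() if weights[s] > 0)
    unhealthy = sum(n * weights[s] for s, n in store_counts.items() if weights[s] < 0)
    return healthy, unhealthy

print(food_environment_scores({"greengrocer": 2, "supermarket": 1,
                               "fast_food_chain": 3, "takeaway": 1}))
# -> (22, -37)
```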

  12. Evidence of the dose effects of an antitobacco counteradvertising campaign.

    PubMed

    Sly, David F; Trapido, Ed; Ray, Sarah

    2002-11-01

    The objectives were to assess the cumulative effects of exposure to multiple antitobacco advertisements shown over a 22-month period on smoking uptake, and determine if there is evidence of a dose effect and how this effect operates through response to the campaign's major message theme and antitobacco attitudes. A follow-up telephone survey of persons ages 12-20 years was conducted after 22 months of the Florida "truth" antitobacco media campaign. Logistic regression analyses were used to estimate adjusted odds ratios for the likelihood that time-one nonsmokers would remain nonsmokers at time two by levels of confirmed advertisement awareness, self-reported influence of the campaign's message theme, and anti-tobacco industry manipulation attitudes. Separate cohorts are analyzed and controls include gender and time-one susceptibility. The likelihood of nonsmokers remaining nonsmokers increases as the number of ads confirmed, the self-reported influence of the campaign's major message theme, and the level of antitobacco attitudes increases. The pattern to these relationships holds within cohorts of young and older youth and for a cohort that has aged into the early young adult years. Considering all variables simultaneously suggests that ad confirmation operates through its effects on the influence of the message theme and antitobacco industry manipulation attitudes. There is evidence of a dose effect; however, considering only ad confirmation underestimates this. Antitobacco campaigns that target youth can have effects at least through the early young adult ages. The uniqueness of the Florida campaign may limit the generalization of reported results.
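
    The adjusted odds ratios reported in such analyses are exponentiated logistic-regression coefficients. A minimal sketch with simulated data is shown below; all variable names and effect sizes are invented and do not reflect the Florida campaign data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2000

# Simulated cohort of time-one nonsmokers (all variables illustrative).
df = pd.DataFrame({
    "ads_confirmed": rng.integers(0, 9, n),        # number of ads confirmed
    "theme_influence": rng.integers(1, 5, n),      # self-reported influence
    "anti_industry": rng.integers(1, 5, n),        # antitobacco attitudes
    "female": rng.integers(0, 2, n),
    "susceptible_t1": rng.integers(0, 2, n),
})
logit_true = (-0.5 + 0.15 * df.ads_confirmed + 0.3 * df.theme_influence
              + 0.25 * df.anti_industry - 0.6 * df.susceptible_t1)
df["still_nonsmoker"] = rng.binomial(1, 1 / (1 + np.exp(-logit_true)))

# Adjusted odds ratios: exponentiated logistic-regression coefficients.
fit = smf.logit("still_nonsmoker ~ ads_confirmed + theme_influence"
                " + anti_industry + female + susceptible_t1", data=df).fit(disp=0)
print(np.exp(fit.params).round(2))      # adjusted ORs
print(np.exp(fit.conf_int()).round(2))  # 95% CIs on the OR scale
```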

  13. Learning quadratic receptive fields from neural responses to natural stimuli.

    PubMed

    Rajan, Kanaka; Marre, Olivier; Tkačik, Gašper

    2013-07-01

    Models of neural responses to stimuli with complex spatiotemporal correlation structure often assume that neurons are selective for only a small number of linear projections of a potentially high-dimensional input. In this review, we explore recent modeling approaches where the neural response depends on the quadratic form of the input rather than on its linear projection, that is, the neuron is sensitive to the local covariance structure of the signal preceding the spike. To infer this quadratic dependence in the presence of arbitrary (e.g., naturalistic) stimulus distribution, we review several inference methods, focusing in particular on two information theory-based approaches (maximization of stimulus energy and of noise entropy) and two likelihood-based approaches (Bayesian spike-triggered covariance and extensions of generalized linear models). We analyze the formal relationship between the likelihood-based and information-based approaches to demonstrate how they lead to consistent inference. We demonstrate the practical feasibility of these procedures by using model neurons responding to a flickering variance stimulus.
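
    One of the simplest routes to a quadratic receptive field is classical spike-triggered covariance: directions along which the spike-conditioned stimulus variance differs most from the prior variance are candidate quadratic filters. The sketch below uses a simulated energy-like neuron and white Gaussian stimuli; it does not implement the Bayesian or information-theoretic variants discussed in the review.

```python
import numpy as np

rng = np.random.default_rng(4)

# --- simulate a neuron with a quadratic (energy-like) dependence ------------
dim, n_samples = 20, 50_000
stim = rng.normal(size=(n_samples, dim))          # white Gaussian stimulus
v = np.zeros(dim)
v[5:10] = 1.0
v /= np.linalg.norm(v)                            # relevant stimulus direction
rate = 0.05 * (stim @ v) ** 2                     # quadratic drive
spikes = rng.poisson(rate)

# --- classical spike-triggered covariance analysis ---------------------------
sta = (spikes @ stim) / spikes.sum()              # spike-triggered average
centred = stim - sta
stc = (centred.T * spikes) @ centred / spikes.sum()
prior_cov = np.cov(stim, rowvar=False)

# Directions whose variance changes most between the spike-triggered and the
# prior covariance are candidate quadratic filters.
eigval, eigvec = np.linalg.eigh(stc - prior_cov)
top = eigvec[:, np.argmax(np.abs(eigval))]
print("overlap with true filter:", abs(top @ v).round(3))
```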

  14. Demand for antenatal care in South Africa.

    PubMed

    Kirigia, J M; Lambo, E; Sambo, L G

    2000-01-01

    On May 24, 1994, the then South African president, Mr. Nelson Mandela, declared that all health care for children under the age of 6 years and for pregnant women would be free. Unfortunately, there has been no significant decrease in maternal, perinatal and infant mortality. Thus, there is a need for research into the factors that influence the demand for antenatal services. The objectives of this paper are to (a) establish the determinants of individual pregnant women's choice to seek antenatal care; and (b) deal with potential endogeneity bias in the relationship between the decision to seek pre-natal care and perceived health status. The joint determination of consumption of antenatal care and a pregnant woman's health status requires estimation of a simultaneous system. To help mitigate the simultaneity bias and avoid the inconsistency inherent in the application of the Ordinary Least Squares (OLS) method to simultaneous equations systems, we used a Two-Stage Probit Maximum Likelihood estimator. In the antenatal structural-form equation, the coefficients for TOILET, AGE, OCCUPATION, EMPLOYMENT, SMOKER, METHODS and QUALITY were statistically significant at P
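
    A simplified sketch of the two-stage logic with made-up data: the endogenous perceived-health variable is first regressed on an instrument and the exogenous covariate, and its fitted values then enter a probit for care seeking. This shows only the fitted-value flavour of two-stage probit estimation, not the authors' specification, and the second-stage standard errors would need correction for the first-stage estimation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 1500

# Illustrative data: z is an instrument, x is exogenous, health is endogenous.
z = rng.normal(size=n)
x = rng.normal(size=n)
u = rng.normal(size=n)                       # shared unobservable -> endogeneity
health = 0.5 * z + 0.3 * x + u + rng.normal(size=n)
latent = 0.8 * health - 0.4 * x - 0.7 * u + rng.normal(size=n)
care = (latent > 0).astype(int)              # 1 = sought antenatal care

# Stage 1: reduced form for the endogenous perceived-health variable.
X1 = sm.add_constant(np.column_stack([z, x]))
stage1 = sm.OLS(health, X1).fit()
health_hat = stage1.fittedvalues

# Stage 2: probit for care seeking using the first-stage fitted values
# (standard errors here ignore the first-stage estimation noise).
X2 = sm.add_constant(np.column_stack([health_hat, x]))
stage2 = sm.Probit(care, X2).fit(disp=0)
print(stage2.params.round(2))   # constant, health effect, x effect
```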

  15. A Single Camera Motion Capture System for Human-Computer Interaction

    NASA Astrophysics Data System (ADS)

    Okada, Ryuzo; Stenger, Björn

    This paper presents a method for markerless human motion capture using a single camera. It uses tree-based filtering to efficiently propagate a probability distribution over poses of a 3D body model. The pose vectors and associated shapes are arranged in a tree, which is constructed by hierarchical pairwise clustering, in order to efficiently evaluate the likelihood in each frame. A new likelihood function based on silhouette matching is proposed that improves the pose estimation of thinner body parts, i.e., the limbs. The dynamic model takes self-occlusion into account by increasing the variance of occluded body parts, thus allowing for recovery when the body part reappears. We present two applications of our method that work in real-time on a Cell Broadband Engine™: a computer game and a virtual clothing application.

  16. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.

    PubMed

    Gil, Manuel

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.

  17. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances

    PubMed Central

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error. PMID:25279263

  18. Simulation-based Bayesian inference for latent traits of item response models: Introduction to the ltbayes package for R.

    PubMed

    Johnson, Timothy R; Kuhn, Kristine M

    2015-12-01

    This paper introduces the ltbayes package for R. This package includes a suite of functions for investigating the posterior distribution of latent traits of item response models. These include functions for simulating realizations from the posterior distribution, profiling the posterior density or likelihood function, calculation of posterior modes or means, Fisher information functions and observed information, and profile likelihood confidence intervals. Inferences can be based on individual response patterns or sets of response patterns such as sum scores. Functions are included for several common binary and polytomous item response models, but the package can also be used with user-specified models. This paper introduces some background and motivation for the package, and includes several detailed examples of its use.
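
    ltbayes itself is an R package; a rough Python analogue of one of its core tasks, profiling the posterior of a single latent trait under a 2PL model with a standard normal prior and locating its mode and mean, is sketched below with invented item parameters and one invented response pattern.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import expit
from scipy.stats import norm

# Invented 2PL item parameters (discrimination a, difficulty b) and responses.
a = np.array([1.2, 0.8, 1.5, 1.0, 0.6])
b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])
y = np.array([1, 1, 1, 0, 0])               # one examinee's response pattern

def log_posterior(theta):
    p = expit(a * (theta - b))               # 2PL response probabilities
    loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return loglik + norm.logpdf(theta)       # standard normal prior on theta

# Posterior mode (MAP estimate of the latent trait).
mode = minimize_scalar(lambda t: -log_posterior(t), bounds=(-4, 4),
                       method="bounded").x

# Posterior mean via simple grid integration over the trait range.
grid = np.linspace(-4, 4, 801)
dens = np.exp([log_posterior(t) for t in grid])
dens /= np.trapz(dens, grid)
mean = np.trapz(grid * dens, grid)
print(f"posterior mode={mode:.2f}, posterior mean={mean:.2f}")
```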

  19. The Extended Erlang-Truncated Exponential distribution: Properties and application to rainfall data.

    PubMed

    Okorie, I E; Akpanta, A C; Ohakwe, J; Chikezie, D C

    2017-06-01

    The Erlang-Truncated Exponential (ETE) distribution is modified and the new lifetime distribution is called the Extended Erlang-Truncated Exponential (EETE) distribution. Some statistical and reliability properties of the new distribution are given and the method of maximum likelihood was proposed for estimating the model parameters. The usefulness and flexibility of the EETE distribution were illustrated with an uncensored data set and its fit was compared with that of the ETE and three other three-parameter distributions. Results based on the minimized log-likelihood ([Formula: see text]), Akaike information criterion (AIC), Bayesian information criterion (BIC) and the generalized Cramér-von Mises [Formula: see text] statistics show that the EETE distribution provides a more reasonable fit than the other competing distributions.
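
    The EETE density is not available in standard libraries, so the sketch below only illustrates the comparison workflow the abstract refers to: fit several candidate lifetime distributions by maximum likelihood and rank them by minimized negative log-likelihood, AIC, and BIC. The candidate families and the placeholder data are not those of the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
data = rng.gamma(shape=2.0, scale=1.5, size=200)   # placeholder "rainfall" data

# Candidate lifetime distributions: (scipy family, number of free parameters
# once the location is fixed at zero).
candidates = {
    "exponential": (stats.expon, 1),
    "gamma": (stats.gamma, 2),
    "weibull": (stats.weibull_min, 2),
    "lognormal": (stats.lognorm, 2),
}

n = len(data)
for name, (dist, k) in candidates.items():
    params = dist.fit(data, floc=0)                # maximum likelihood fit
    loglik = np.sum(dist.logpdf(data, *params))
    aic = 2 * k - 2 * loglik
    bic = k * np.log(n) - 2 * loglik
    print(f"{name:12s} -loglik={-loglik:8.2f}  AIC={aic:8.2f}  BIC={bic:8.2f}")
```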

  20. Empirical likelihood based detection procedure for change point in mean residual life functions under random censorship.

    PubMed

    Chen, Ying-Ju; Ning, Wei; Gupta, Arjun K

    2016-05-01

    The mean residual life (MRL) function is one of the basic parameters of interest in survival analysis that describes the expected remaining time of an individual after a certain age. The study of changes in the MRL function is practical and interesting because it may help us to identify factors, such as age and gender, that may influence the remaining lifetimes of patients after receiving a certain surgery. In this paper, we propose a detection procedure based on the empirical likelihood for changes in MRL functions with right censored data. Two real examples, the Veterans' Administration lung cancer study and the Stanford heart transplant data, are given to illustrate the detection procedure. Copyright © 2016 John Wiley & Sons, Ltd.
